May 13 23:56:40.911462 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 13 23:56:40.911489 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:56:40.911502 kernel: BIOS-provided physical RAM map: May 13 23:56:40.911509 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 23:56:40.911516 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 23:56:40.911523 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 23:56:40.911530 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 13 23:56:40.911537 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 13 23:56:40.911544 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 23:56:40.911550 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 23:56:40.911560 kernel: NX (Execute Disable) protection: active May 13 23:56:40.911566 kernel: APIC: Static calls initialized May 13 23:56:40.911577 kernel: SMBIOS 2.8 present. May 13 23:56:40.911585 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 13 23:56:40.911593 kernel: Hypervisor detected: KVM May 13 23:56:40.911618 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 23:56:40.911633 kernel: kvm-clock: using sched offset of 3175119940 cycles May 13 23:56:40.911641 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 23:56:40.911649 kernel: tsc: Detected 2000.000 MHz processor May 13 23:56:40.911657 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:56:40.911664 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:56:40.911672 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 13 23:56:40.911680 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 13 23:56:40.911687 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:56:40.911695 kernel: ACPI: Early table checksum verification disabled May 13 23:56:40.911705 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 13 23:56:40.911712 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911720 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911728 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911735 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 13 23:56:40.911742 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911750 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911757 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911767 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:56:40.911774 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] May 13 23:56:40.911781 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 13 23:56:40.911802 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 13 23:56:40.911809 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 13 23:56:40.911816 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 13 23:56:40.911824 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 13 23:56:40.911835 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 13 23:56:40.911845 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 23:56:40.911853 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 13 23:56:40.911861 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 23:56:40.911868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 23:56:40.911880 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] May 13 23:56:40.911888 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] May 13 23:56:40.911895 kernel: Zone ranges: May 13 23:56:40.911905 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 23:56:40.911913 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 13 23:56:40.911921 kernel: Normal empty May 13 23:56:40.911928 kernel: Movable zone start for each node May 13 23:56:40.911936 kernel: Early memory node ranges May 13 23:56:40.911944 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 23:56:40.911951 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 13 23:56:40.911959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 13 23:56:40.911967 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:56:40.911977 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 23:56:40.911988 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 13 23:56:40.911996 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 23:56:40.912004 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 23:56:40.912011 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 23:56:40.912019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 23:56:40.912027 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 23:56:40.912034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:56:40.912042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 23:56:40.912052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 23:56:40.912060 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:56:40.912068 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 23:56:40.912076 kernel: TSC deadline timer available May 13 23:56:40.912083 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 23:56:40.912091 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 23:56:40.912099 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 13 23:56:40.912110 kernel: Booting paravirtualized kernel on KVM May 13 23:56:40.912118 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 23:56:40.912129 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 23:56:40.912137 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 May 13 23:56:40.912144 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 23:56:40.912152 kernel: pcpu-alloc: [0] 0 1 May 13 23:56:40.912160 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 23:56:40.912168 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:56:40.912177 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:56:40.912184 kernel: random: crng init done May 13 23:56:40.912194 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:56:40.912202 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 23:56:40.912210 kernel: Fallback order for Node 0: 0 May 13 23:56:40.912227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 May 13 23:56:40.912235 kernel: Policy zone: DMA32 May 13 23:56:40.912243 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:56:40.912251 kernel: Memory: 1967108K/2096612K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129244K reserved, 0K cma-reserved) May 13 23:56:40.912259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:56:40.912266 kernel: Kernel/User page tables isolation: enabled May 13 23:56:40.912277 kernel: ftrace: allocating 37993 entries in 149 pages May 13 23:56:40.912284 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:56:40.912292 kernel: Dynamic Preempt: voluntary May 13 23:56:40.912299 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:56:40.912308 kernel: rcu: RCU event tracing is enabled. May 13 23:56:40.912316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:56:40.912324 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:56:40.912332 kernel: Rude variant of Tasks RCU enabled. May 13 23:56:40.912339 kernel: Tracing variant of Tasks RCU enabled. May 13 23:56:40.912347 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:56:40.912358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:56:40.912365 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 23:56:40.912373 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 23:56:40.912384 kernel: Console: colour VGA+ 80x25 May 13 23:56:40.912392 kernel: printk: console [tty0] enabled May 13 23:56:40.912400 kernel: printk: console [ttyS0] enabled May 13 23:56:40.912407 kernel: ACPI: Core revision 20230628 May 13 23:56:40.912415 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 23:56:40.912423 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:56:40.912433 kernel: x2apic enabled May 13 23:56:40.912441 kernel: APIC: Switched APIC routing to: physical x2apic May 13 23:56:40.912449 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 23:56:40.912457 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns May 13 23:56:40.912464 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) May 13 23:56:40.912472 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 23:56:40.912480 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 23:56:40.912498 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:56:40.912506 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:56:40.912514 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:56:40.912522 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 13 23:56:40.912533 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 23:56:40.912541 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 23:56:40.912550 kernel: MDS: Mitigation: Clear CPU buffers May 13 23:56:40.912558 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:56:40.912570 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:56:40.912581 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:56:40.912589 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:56:40.914621 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:56:40.914640 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 23:56:40.914649 kernel: Freeing SMP alternatives memory: 32K May 13 23:56:40.914658 kernel: pid_max: default: 32768 minimum: 301 May 13 23:56:40.914667 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:56:40.914675 kernel: landlock: Up and running. May 13 23:56:40.914684 kernel: SELinux: Initializing. May 13 23:56:40.914697 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:56:40.914705 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:56:40.914714 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 13 23:56:40.914723 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:56:40.914732 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:56:40.914740 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:56:40.914749 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
May 13 23:56:40.914758 kernel: signal: max sigframe size: 1776 May 13 23:56:40.914766 kernel: rcu: Hierarchical SRCU implementation. May 13 23:56:40.914778 kernel: rcu: Max phase no-delay instances is 400. May 13 23:56:40.914786 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 23:56:40.914795 kernel: smp: Bringing up secondary CPUs ... May 13 23:56:40.914803 kernel: smpboot: x86: Booting SMP configuration: May 13 23:56:40.914812 kernel: .... node #0, CPUs: #1 May 13 23:56:40.914820 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:56:40.914829 kernel: smpboot: Max logical packages: 1 May 13 23:56:40.914841 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) May 13 23:56:40.914850 kernel: devtmpfs: initialized May 13 23:56:40.914861 kernel: x86/mm: Memory block size: 128MB May 13 23:56:40.914870 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:56:40.914879 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:56:40.914887 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:56:40.914896 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:56:40.914904 kernel: audit: initializing netlink subsys (disabled) May 13 23:56:40.914913 kernel: audit: type=2000 audit(1747180599.938:1): state=initialized audit_enabled=0 res=1 May 13 23:56:40.914921 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:56:40.914930 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:56:40.914941 kernel: cpuidle: using governor menu May 13 23:56:40.914949 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:56:40.914958 kernel: dca service started, version 1.12.1 May 13 23:56:40.914967 kernel: PCI: Using configuration type 1 for base access May 13 23:56:40.914975 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 23:56:40.914984 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:56:40.914992 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:56:40.915001 kernel: ACPI: Added _OSI(Module Device) May 13 23:56:40.915014 kernel: ACPI: Added _OSI(Processor Device) May 13 23:56:40.915031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:56:40.915042 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:56:40.915050 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:56:40.915059 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:56:40.915067 kernel: ACPI: Interpreter enabled May 13 23:56:40.915076 kernel: ACPI: PM: (supports S0 S5) May 13 23:56:40.915084 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:56:40.915093 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:56:40.915101 kernel: PCI: Using E820 reservations for host bridge windows May 13 23:56:40.915112 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 23:56:40.915121 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:56:40.915328 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 23:56:40.915441 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 13 23:56:40.915539 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 13 23:56:40.915551 kernel: acpiphp: Slot [3] registered May 13 23:56:40.915560 kernel: acpiphp: Slot [4] registered May 13 23:56:40.915572 kernel: acpiphp: Slot [5] registered May 13 23:56:40.915580 kernel: acpiphp: Slot [6] registered May 13 23:56:40.915588 kernel: acpiphp: Slot [7] registered May 13 23:56:40.916650 kernel: acpiphp: Slot [8] registered May 13 23:56:40.916668 kernel: acpiphp: Slot [9] registered May 13 23:56:40.916677 kernel: acpiphp: Slot [10] registered May 13 23:56:40.916686 kernel: acpiphp: Slot [11] registered May 13 23:56:40.916694 kernel: acpiphp: Slot [12] registered May 13 23:56:40.916703 kernel: acpiphp: Slot [13] registered May 13 23:56:40.916715 kernel: acpiphp: Slot [14] registered May 13 23:56:40.916723 kernel: acpiphp: Slot [15] registered May 13 23:56:40.916731 kernel: acpiphp: Slot [16] registered May 13 23:56:40.916740 kernel: acpiphp: Slot [17] registered May 13 23:56:40.916748 kernel: acpiphp: Slot [18] registered May 13 23:56:40.916757 kernel: acpiphp: Slot [19] registered May 13 23:56:40.916765 kernel: acpiphp: Slot [20] registered May 13 23:56:40.916773 kernel: acpiphp: Slot [21] registered May 13 23:56:40.916782 kernel: acpiphp: Slot [22] registered May 13 23:56:40.916790 kernel: acpiphp: Slot [23] registered May 13 23:56:40.916801 kernel: acpiphp: Slot [24] registered May 13 23:56:40.916809 kernel: acpiphp: Slot [25] registered May 13 23:56:40.916817 kernel: acpiphp: Slot [26] registered May 13 23:56:40.916826 kernel: acpiphp: Slot [27] registered May 13 23:56:40.916834 kernel: acpiphp: Slot [28] registered May 13 23:56:40.916843 kernel: acpiphp: Slot [29] registered May 13 23:56:40.916851 kernel: acpiphp: Slot [30] registered May 13 23:56:40.916860 kernel: acpiphp: Slot [31] registered May 13 23:56:40.916868 kernel: PCI host bridge to bus 0000:00 May 13 23:56:40.917012 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 23:56:40.917104 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
May 13 23:56:40.917191 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 23:56:40.917277 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 13 23:56:40.917371 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 13 23:56:40.917457 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:56:40.918628 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 23:56:40.918791 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 23:56:40.918910 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 23:56:40.919009 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] May 13 23:56:40.919107 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 23:56:40.919203 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 23:56:40.919297 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 23:56:40.919396 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 23:56:40.919502 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 May 13 23:56:40.920632 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] May 13 23:56:40.920775 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 23:56:40.920875 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 23:56:40.920970 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 23:56:40.921086 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 23:56:40.921193 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 23:56:40.921290 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] May 13 23:56:40.921391 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] May 13 23:56:40.921488 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] May 13 23:56:40.921583 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 23:56:40.922728 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 23:56:40.922835 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] May 13 23:56:40.922932 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] May 13 23:56:40.923027 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] May 13 23:56:40.923139 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 23:56:40.923236 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] May 13 23:56:40.923332 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] May 13 23:56:40.923429 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] May 13 23:56:40.923535 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 May 13 23:56:40.924791 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] May 13 23:56:40.924915 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] May 13 23:56:40.925013 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] May 13 23:56:40.925156 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 May 13 23:56:40.925299 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] May 13 23:56:40.925454 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] May 13 23:56:40.929737 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] May 13 23:56:40.929903 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 May 13 23:56:40.930008 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] May 13 23:56:40.930108 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] May 13 23:56:40.930208 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] May 13 23:56:40.930317 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 May 13 23:56:40.930425 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] May 13 23:56:40.930542 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] May 13 23:56:40.930553 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 23:56:40.930563 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 23:56:40.930572 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 23:56:40.930580 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 23:56:40.930589 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 13 23:56:40.930625 kernel: iommu: Default domain type: Translated May 13 23:56:40.930638 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:56:40.930647 kernel: PCI: Using ACPI for IRQ routing May 13 23:56:40.930655 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 23:56:40.930664 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 23:56:40.930673 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 13 23:56:40.930772 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 23:56:40.930867 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 23:56:40.930962 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 23:56:40.930973 kernel: vgaarb: loaded May 13 23:56:40.930984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 23:56:40.930993 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 23:56:40.931002 kernel: clocksource: Switched to clocksource kvm-clock May 13 23:56:40.931010 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:56:40.931020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:56:40.931028 kernel: pnp: PnP ACPI init May 13 23:56:40.931037 kernel: pnp: PnP ACPI: found 4 devices May 13 23:56:40.931046 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:56:40.931054 kernel: NET: Registered PF_INET protocol family May 13 23:56:40.931066 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:56:40.931074 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 23:56:40.931083 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:56:40.931092 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:56:40.931101 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 23:56:40.931109 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 23:56:40.931118 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:56:40.931127 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:56:40.931135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:56:40.931146 kernel: NET: Registered PF_XDP protocol family May 13 23:56:40.931237 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 23:56:40.931323 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 
23:56:40.931409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 23:56:40.931495 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 13 23:56:40.931581 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 13 23:56:40.931707 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 23:56:40.931822 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 23:56:40.931839 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 23:56:40.931938 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31378 usecs May 13 23:56:40.931949 kernel: PCI: CLS 0 bytes, default 64 May 13 23:56:40.931958 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 23:56:40.931967 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns May 13 23:56:40.931976 kernel: Initialise system trusted keyrings May 13 23:56:40.931984 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 23:56:40.931993 kernel: Key type asymmetric registered May 13 23:56:40.932004 kernel: Asymmetric key parser 'x509' registered May 13 23:56:40.932013 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:56:40.932022 kernel: io scheduler mq-deadline registered May 13 23:56:40.932031 kernel: io scheduler kyber registered May 13 23:56:40.932039 kernel: io scheduler bfq registered May 13 23:56:40.932048 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:56:40.932057 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 23:56:40.932065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 23:56:40.932074 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 23:56:40.932082 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:56:40.932094 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:56:40.932103 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 23:56:40.932112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 23:56:40.932120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 23:56:40.932242 kernel: rtc_cmos 00:03: RTC can wake from S4 May 13 23:56:40.932255 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 23:56:40.932343 kernel: rtc_cmos 00:03: registered as rtc0 May 13 23:56:40.932435 kernel: rtc_cmos 00:03: setting system clock to 2025-05-13T23:56:40 UTC (1747180600) May 13 23:56:40.932524 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 13 23:56:40.932535 kernel: intel_pstate: CPU model not supported May 13 23:56:40.932543 kernel: NET: Registered PF_INET6 protocol family May 13 23:56:40.932552 kernel: Segment Routing with IPv6 May 13 23:56:40.932561 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:56:40.932569 kernel: NET: Registered PF_PACKET protocol family May 13 23:56:40.932578 kernel: Key type dns_resolver registered May 13 23:56:40.932587 kernel: IPI shorthand broadcast: enabled May 13 23:56:40.932634 kernel: sched_clock: Marking stable (921003329, 131188378)->(1150897538, -98705831) May 13 23:56:40.932643 kernel: registered taskstats version 1 May 13 23:56:40.932652 kernel: Loading compiled-in X.509 certificates May 13 23:56:40.932661 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:56:40.932669 kernel: Key type .fscrypt registered 
May 13 23:56:40.932677 kernel: Key type fscrypt-provisioning registered May 13 23:56:40.932686 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:56:40.932695 kernel: ima: Allocated hash algorithm: sha1 May 13 23:56:40.932703 kernel: ima: No architecture policies found May 13 23:56:40.932714 kernel: clk: Disabling unused clocks May 13 23:56:40.932723 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:56:40.932731 kernel: Write protecting the kernel read-only data: 40960k May 13 23:56:40.932740 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:56:40.932764 kernel: Run /init as init process May 13 23:56:40.932775 kernel: with arguments: May 13 23:56:40.932784 kernel: /init May 13 23:56:40.932792 kernel: with environment: May 13 23:56:40.932801 kernel: HOME=/ May 13 23:56:40.932812 kernel: TERM=linux May 13 23:56:40.932820 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:56:40.932831 systemd[1]: Successfully made /usr/ read-only. May 13 23:56:40.932844 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:56:40.932854 systemd[1]: Detected virtualization kvm. May 13 23:56:40.932863 systemd[1]: Detected architecture x86-64. May 13 23:56:40.932872 systemd[1]: Running in initrd. May 13 23:56:40.932881 systemd[1]: No hostname configured, using default hostname. May 13 23:56:40.932893 systemd[1]: Hostname set to . May 13 23:56:40.932902 systemd[1]: Initializing machine ID from VM UUID. May 13 23:56:40.932911 systemd[1]: Queued start job for default target initrd.target. May 13 23:56:40.932920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:56:40.932946 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:56:40.932956 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:56:40.932966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:56:40.932975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:56:40.932988 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:56:40.932998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:56:40.933016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:56:40.933025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:56:40.933034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:56:40.933044 systemd[1]: Reached target paths.target - Path Units. May 13 23:56:40.933056 systemd[1]: Reached target slices.target - Slice Units. May 13 23:56:40.933065 systemd[1]: Reached target swap.target - Swaps. May 13 23:56:40.933077 systemd[1]: Reached target timers.target - Timer Units. May 13 23:56:40.933086 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 13 23:56:40.933095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:56:40.933105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:56:40.933121 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:56:40.933135 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:56:40.933149 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:56:40.933166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:56:40.933180 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:56:40.933193 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:56:40.933206 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:56:40.933220 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:56:40.933239 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:56:40.933253 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:56:40.933268 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:56:40.933278 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:56:40.933288 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:56:40.933331 systemd-journald[184]: Collecting audit messages is disabled. May 13 23:56:40.933361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:56:40.933371 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:56:40.933381 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:56:40.933394 systemd-journald[184]: Journal started May 13 23:56:40.933416 systemd-journald[184]: Runtime Journal (/run/log/journal/dd58941498214a93a02b1871d5a11601) is 4.9M, max 39.3M, 34.3M free. May 13 23:56:40.934845 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:56:40.938025 systemd-modules-load[185]: Inserted module 'overlay' May 13 23:56:40.973256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:56:40.973290 kernel: Bridge firewalling registered May 13 23:56:40.971315 systemd-modules-load[185]: Inserted module 'br_netfilter' May 13 23:56:40.973177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:56:40.973928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:40.974877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:56:40.982778 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:56:40.984767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:56:40.987709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:56:40.990178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:56:41.009194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:56:41.014246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:56:41.015905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:56:41.018751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:56:41.022036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:56:41.024725 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:56:41.041631 dracut-cmdline[219]: dracut-dracut-053 May 13 23:56:41.045284 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:56:41.068922 systemd-resolved[217]: Positive Trust Anchors: May 13 23:56:41.069720 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:56:41.069760 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:56:41.076030 systemd-resolved[217]: Defaulting to hostname 'linux'. May 13 23:56:41.077230 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:56:41.077987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:56:41.131668 kernel: SCSI subsystem initialized May 13 23:56:41.142632 kernel: Loading iSCSI transport class v2.0-870. May 13 23:56:41.154633 kernel: iscsi: registered transport (tcp) May 13 23:56:41.178654 kernel: iscsi: registered transport (qla4xxx) May 13 23:56:41.178734 kernel: QLogic iSCSI HBA Driver May 13 23:56:41.220007 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:56:41.222045 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:56:41.261711 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:56:41.261784 kernel: device-mapper: uevent: version 1.0.3 May 13 23:56:41.262788 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:56:41.308650 kernel: raid6: avx2x4 gen() 26534 MB/s May 13 23:56:41.324653 kernel: raid6: avx2x2 gen() 27903 MB/s May 13 23:56:41.341768 kernel: raid6: avx2x1 gen() 22155 MB/s May 13 23:56:41.341820 kernel: raid6: using algorithm avx2x2 gen() 27903 MB/s May 13 23:56:41.359835 kernel: raid6: .... xor() 18658 MB/s, rmw enabled May 13 23:56:41.359911 kernel: raid6: using avx2x2 recovery algorithm May 13 23:56:41.384643 kernel: xor: automatically using best checksumming function avx May 13 23:56:41.533643 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:56:41.546088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 13 23:56:41.548332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:56:41.574885 systemd-udevd[404]: Using default interface naming scheme 'v255'. May 13 23:56:41.580030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:56:41.583404 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:56:41.611136 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation May 13 23:56:41.644193 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:56:41.646358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:56:41.703063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:56:41.708720 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:56:41.737015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:56:41.740201 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:56:41.741728 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:56:41.742261 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:56:41.746942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:56:41.773544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:56:41.787628 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:56:41.798161 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues May 13 23:56:41.805086 kernel: ACPI: bus type USB registered May 13 23:56:41.805145 kernel: usbcore: registered new interface driver usbfs May 13 23:56:41.814224 kernel: usbcore: registered new interface driver hub May 13 23:56:41.814291 kernel: usbcore: registered new device driver usb May 13 23:56:41.817721 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 13 23:56:41.819618 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:56:41.820613 kernel: AES CTR mode by8 optimization enabled May 13 23:56:41.826753 kernel: scsi host0: Virtio SCSI HBA May 13 23:56:41.845462 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:56:41.845520 kernel: GPT:9289727 != 125829119 May 13 23:56:41.845533 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:56:41.845544 kernel: GPT:9289727 != 125829119 May 13 23:56:41.845865 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:56:41.847661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:56:41.850440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:56:41.851382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:56:41.853469 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:56:41.854992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:56:41.855136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:41.856396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 23:56:41.861663 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues May 13 23:56:41.861837 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) May 13 23:56:41.862143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:56:41.886626 kernel: libata version 3.00 loaded. May 13 23:56:41.892613 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 23:56:41.897859 kernel: scsi host1: ata_piix May 13 23:56:41.898075 kernel: scsi host2: ata_piix May 13 23:56:41.898202 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 May 13 23:56:41.898223 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 May 13 23:56:41.921632 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) May 13 23:56:41.926622 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (452) May 13 23:56:41.952486 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:56:41.989778 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller May 13 23:56:41.989981 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 May 13 23:56:41.990107 kernel: uhci_hcd 0000:00:01.2: detected 2 ports May 13 23:56:41.990236 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 May 13 23:56:41.990354 kernel: hub 1-0:1.0: USB hub found May 13 23:56:41.990497 kernel: hub 1-0:1.0: 2 ports detected May 13 23:56:41.989303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:41.998793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:56:42.005792 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:56:42.006424 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:56:42.015301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:56:42.027996 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:56:42.029718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:56:42.048162 disk-uuid[543]: Primary Header is updated. May 13 23:56:42.048162 disk-uuid[543]: Secondary Entries is updated. May 13 23:56:42.048162 disk-uuid[543]: Secondary Header is updated. May 13 23:56:42.053635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:56:42.059116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:56:42.066958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:56:43.060634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:56:43.060704 disk-uuid[544]: The operation has completed successfully. May 13 23:56:43.102967 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:56:43.103091 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:56:43.138356 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:56:43.156497 sh[564]: Success May 13 23:56:43.171629 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 23:56:43.213836 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
May 13 23:56:43.216734 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:56:43.223845 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:56:43.235944 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 13 23:56:43.235987 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:56:43.238334 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:56:43.238354 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:56:43.239795 kernel: BTRFS info (device dm-0): using free space tree May 13 23:56:43.246429 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:56:43.247448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:56:43.249748 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:56:43.251787 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:56:43.281399 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:56:43.281457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:56:43.281470 kernel: BTRFS info (device vda6): using free space tree May 13 23:56:43.286638 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:56:43.291639 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:56:43.294739 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:56:43.297764 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:56:43.423375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:56:43.424086 ignition[660]: Ignition 2.20.0 May 13 23:56:43.424093 ignition[660]: Stage: fetch-offline May 13 23:56:43.424123 ignition[660]: no configs at "/usr/lib/ignition/base.d" May 13 23:56:43.424132 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:43.428006 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:56:43.424239 ignition[660]: parsed url from cmdline: "" May 13 23:56:43.424243 ignition[660]: no config URL provided May 13 23:56:43.424249 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:56:43.424258 ignition[660]: no config at "/usr/lib/ignition/user.ign" May 13 23:56:43.430890 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:56:43.424264 ignition[660]: failed to fetch config: resource requires networking May 13 23:56:43.424933 ignition[660]: Ignition finished successfully May 13 23:56:43.465930 systemd-networkd[749]: lo: Link UP May 13 23:56:43.465942 systemd-networkd[749]: lo: Gained carrier May 13 23:56:43.468265 systemd-networkd[749]: Enumeration completed May 13 23:56:43.468700 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 13 23:56:43.468704 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
May 13 23:56:43.469624 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:56:43.469629 systemd-networkd[749]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:56:43.470728 systemd-networkd[749]: eth0: Link UP May 13 23:56:43.470734 systemd-networkd[749]: eth0: Gained carrier May 13 23:56:43.470746 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 13 23:56:43.471016 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:56:43.472869 systemd-networkd[749]: eth1: Link UP May 13 23:56:43.472874 systemd-networkd[749]: eth1: Gained carrier May 13 23:56:43.472885 systemd-networkd[749]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:56:43.474317 systemd[1]: Reached target network.target - Network. May 13 23:56:43.478667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 13 23:56:43.485677 systemd-networkd[749]: eth0: DHCPv4 address 24.199.96.208/20, gateway 24.199.96.1 acquired from 169.254.169.253 May 13 23:56:43.489681 systemd-networkd[749]: eth1: DHCPv4 address 10.124.0.33/20 acquired from 169.254.169.253 May 13 23:56:43.502541 ignition[753]: Ignition 2.20.0 May 13 23:56:43.502554 ignition[753]: Stage: fetch May 13 23:56:43.502744 ignition[753]: no configs at "/usr/lib/ignition/base.d" May 13 23:56:43.502755 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:43.502853 ignition[753]: parsed url from cmdline: "" May 13 23:56:43.502857 ignition[753]: no config URL provided May 13 23:56:43.502862 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:56:43.502872 ignition[753]: no config at "/usr/lib/ignition/user.ign" May 13 23:56:43.502897 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 May 13 23:56:43.518422 ignition[753]: GET result: OK May 13 23:56:43.518584 ignition[753]: parsing config with SHA512: 54f98f2b2452273e5277035e3d804cb871ec2348b52f6d8b6e36bd48cc68efe49ccaaecf8bc9fd8d667f35e8732c584d17abc9652e4700efe24a410cd2a192f6 May 13 23:56:43.525190 unknown[753]: fetched base config from "system" May 13 23:56:43.525209 unknown[753]: fetched base config from "system" May 13 23:56:43.525557 ignition[753]: fetch: fetch complete May 13 23:56:43.525218 unknown[753]: fetched user config from "digitalocean" May 13 23:56:43.525562 ignition[753]: fetch: fetch passed May 13 23:56:43.527167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:56:43.525628 ignition[753]: Ignition finished successfully May 13 23:56:43.529751 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:56:43.571397 ignition[760]: Ignition 2.20.0 May 13 23:56:43.572133 ignition[760]: Stage: kargs May 13 23:56:43.572395 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 13 23:56:43.574352 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:56:43.572421 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:43.573336 ignition[760]: kargs: kargs passed May 13 23:56:43.573384 ignition[760]: Ignition finished successfully May 13 23:56:43.576722 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 13 23:56:43.607473 ignition[766]: Ignition 2.20.0 May 13 23:56:43.608295 ignition[766]: Stage: disks May 13 23:56:43.608497 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 13 23:56:43.608512 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:43.610783 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:56:43.609374 ignition[766]: disks: disks passed May 13 23:56:43.612326 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:56:43.609421 ignition[766]: Ignition finished successfully May 13 23:56:43.617927 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:56:43.618737 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:56:43.619468 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:56:43.620536 systemd[1]: Reached target basic.target - Basic System. May 13 23:56:43.622722 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:56:43.646557 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:56:43.648949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:56:43.651306 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:56:43.769640 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:56:43.769474 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:56:43.770526 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:56:43.772946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:56:43.775690 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:56:43.789769 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... May 13 23:56:43.793743 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (782) May 13 23:56:43.793790 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:56:43.796122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:56:43.796170 kernel: BTRFS info (device vda6): using free space tree May 13 23:56:43.797218 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:56:43.801013 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:56:43.802149 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:56:43.803254 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:56:43.807881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:56:43.808665 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:56:43.815749 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 13 23:56:43.875413 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:56:43.890227 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory May 13 23:56:43.896571 coreos-metadata[784]: May 13 23:56:43.896 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:56:43.900048 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:56:43.901128 coreos-metadata[785]: May 13 23:56:43.900 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:56:43.905157 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:56:43.907821 coreos-metadata[784]: May 13 23:56:43.907 INFO Fetch successful May 13 23:56:43.910869 coreos-metadata[785]: May 13 23:56:43.908 INFO Fetch successful May 13 23:56:43.915301 coreos-metadata[785]: May 13 23:56:43.915 INFO wrote hostname ci-4284.0.0-n-a2f5fd92b0 to /sysroot/etc/hostname May 13 23:56:43.916699 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:56:43.919115 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 13 23:56:43.919305 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 13 23:56:43.996681 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:56:43.999328 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:56:44.001735 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:56:44.022627 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:56:44.040174 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:56:44.048051 ignition[904]: INFO : Ignition 2.20.0 May 13 23:56:44.048051 ignition[904]: INFO : Stage: mount May 13 23:56:44.049307 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:56:44.049307 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:44.049307 ignition[904]: INFO : mount: mount passed May 13 23:56:44.049307 ignition[904]: INFO : Ignition finished successfully May 13 23:56:44.050753 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:56:44.053440 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:56:44.235377 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:56:44.237949 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:56:44.258630 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (916) May 13 23:56:44.260922 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:56:44.260943 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:56:44.262785 kernel: BTRFS info (device vda6): using free space tree May 13 23:56:44.265913 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:56:44.267232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:56:44.293663 ignition[933]: INFO : Ignition 2.20.0 May 13 23:56:44.293663 ignition[933]: INFO : Stage: files May 13 23:56:44.293663 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:56:44.293663 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:44.296969 ignition[933]: DEBUG : files: compiled without relabeling support, skipping May 13 23:56:44.297811 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:56:44.297811 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:56:44.300207 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:56:44.300971 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:56:44.300971 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:56:44.300630 unknown[933]: wrote ssh authorized keys file for user: core May 13 23:56:44.303053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:56:44.303053 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 23:56:44.336786 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:56:44.476240 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:56:44.476240 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
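The files stage above downloads the Helm tarball from https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz, logging each try as "attempt #N". A small Python sketch of a comparable fetch-with-retry loop; the three-attempt limit and backoff are illustrative, not Ignition's actual policy:

    import time
    import urllib.request

    URL = "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
    DEST = "/opt/helm-v3.13.2-linux-amd64.tar.gz"

    for attempt in range(1, 4):
        try:
            print(f"GET {URL}: attempt #{attempt}")
            urllib.request.urlretrieve(URL, DEST)
            print("GET result: OK")
            break
        except OSError as err:
            print(f"GET result: {err}")
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts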
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:56:44.478125 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 13 23:56:44.528853 systemd-networkd[749]: eth1: Gained IPv6LL May 13 23:56:44.995089 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:56:45.169837 systemd-networkd[749]: eth0: Gained IPv6LL May 13 23:56:45.261097 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 23:56:45.261097 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:56:45.263784 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:56:45.263784 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:56:45.263784 ignition[933]: INFO : files: files passed May 13 23:56:45.263784 ignition[933]: INFO : Ignition finished successfully May 13 23:56:45.264620 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:56:45.266732 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:56:45.269011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:56:45.286439 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:56:45.286544 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:56:45.292550 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:56:45.292550 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:56:45.294433 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:56:45.294692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:56:45.296018 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:56:45.298750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:56:45.351872 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
May 13 23:56:45.351987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:56:45.353530 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:56:45.354308 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:56:45.355341 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:56:45.356743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:56:45.374595 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:56:45.377415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:56:45.399853 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:56:45.400496 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:56:45.402506 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:56:45.403055 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:56:45.403178 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:56:45.404831 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:56:45.406224 systemd[1]: Stopped target basic.target - Basic System. May 13 23:56:45.406721 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:56:45.407257 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:56:45.409813 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:56:45.410780 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:56:45.411715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:56:45.412840 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:56:45.413897 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:56:45.414916 systemd[1]: Stopped target swap.target - Swaps. May 13 23:56:45.415831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:56:45.415957 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:56:45.417147 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:56:45.417790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:56:45.418849 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:56:45.419140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:56:45.420032 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:56:45.420150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:56:45.421561 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:56:45.421684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:56:45.423012 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:56:45.423111 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:56:45.424038 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:56:45.424196 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 13 23:56:45.427845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:56:45.429256 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:56:45.429768 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:56:45.429877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:56:45.432853 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:56:45.432994 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:56:45.440929 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:56:45.441548 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:56:45.464702 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:56:45.465858 ignition[987]: INFO : Ignition 2.20.0 May 13 23:56:45.465858 ignition[987]: INFO : Stage: umount May 13 23:56:45.466935 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:56:45.466935 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:56:45.468829 ignition[987]: INFO : umount: umount passed May 13 23:56:45.468829 ignition[987]: INFO : Ignition finished successfully May 13 23:56:45.468420 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:56:45.468516 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:56:45.470235 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:56:45.470324 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:56:45.471351 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:56:45.471390 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:56:45.472322 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:56:45.472361 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:56:45.477398 systemd[1]: Stopped target network.target - Network. May 13 23:56:45.478276 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:56:45.478322 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:56:45.479274 systemd[1]: Stopped target paths.target - Path Units. May 13 23:56:45.480178 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:56:45.480491 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:56:45.481192 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:56:45.482091 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:56:45.483032 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:56:45.483070 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:56:45.483988 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:56:45.484042 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:56:45.485016 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:56:45.485067 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:56:45.486219 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:56:45.486277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:56:45.487552 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
May 13 23:56:45.488250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:56:45.490195 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:56:45.490295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:56:45.491241 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:56:45.491334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:56:45.493964 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:56:45.494061 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:56:45.497960 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:56:45.498511 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:56:45.498593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:56:45.501983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:56:45.503276 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:56:45.503537 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:56:45.505920 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:56:45.506122 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:56:45.506153 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:56:45.508707 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:56:45.509218 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:56:45.509275 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:56:45.510889 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:56:45.510935 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:56:45.514036 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:56:45.514088 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:56:45.514928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:56:45.518501 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:56:45.531398 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:56:45.532176 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:56:45.533266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:56:45.533331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:56:45.536057 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:56:45.536094 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:56:45.537758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:56:45.537804 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:56:45.538376 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:56:45.538413 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:56:45.539967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 13 23:56:45.540021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:56:45.545067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:56:45.545813 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:56:45.545866 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:56:45.547974 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:56:45.548017 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:56:45.548991 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:56:45.549030 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:56:45.550432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:56:45.550475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:45.557168 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:56:45.557274 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:56:45.563010 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:56:45.563781 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:56:45.564832 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:56:45.566478 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:56:45.581062 systemd[1]: Switching root. May 13 23:56:45.661608 systemd-journald[184]: Journal stopped May 13 23:56:46.838841 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). May 13 23:56:46.838921 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:56:46.838938 kernel: SELinux: policy capability open_perms=1 May 13 23:56:46.838951 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:56:46.838962 kernel: SELinux: policy capability always_check_network=0 May 13 23:56:46.838974 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:56:46.838996 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:56:46.839008 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:56:46.839020 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:56:46.839031 kernel: audit: type=1403 audit(1747180605.789:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:56:46.839044 systemd[1]: Successfully loaded SELinux policy in 38.972ms. May 13 23:56:46.839072 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.624ms. May 13 23:56:46.839086 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:56:46.839098 systemd[1]: Detected virtualization kvm. May 13 23:56:46.839113 systemd[1]: Detected architecture x86-64. May 13 23:56:46.839124 systemd[1]: Detected first boot. May 13 23:56:46.839136 systemd[1]: Hostname set to . May 13 23:56:46.839148 systemd[1]: Initializing machine ID from VM UUID. 
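The systemd banner above lists compile-time features as a "+"/"-" string. A quick Python sketch that splits the exact string from this boot into built-in and left-out sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
                "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
                "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled = sorted(f[1:] for f in features.split() if f.startswith("+"))
    disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
    print(f"{len(enabled)} features built in, {len(disabled)} left out")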
May 13 23:56:46.839160 kernel: Guest personality initialized and is inactive May 13 23:56:46.839173 zram_generator::config[1032]: No configuration found. May 13 23:56:46.839187 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:56:46.839211 kernel: Initialized host personality May 13 23:56:46.839225 kernel: NET: Registered PF_VSOCK protocol family May 13 23:56:46.839237 systemd[1]: Populated /etc with preset unit settings. May 13 23:56:46.839250 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:56:46.839263 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:56:46.839275 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:56:46.839286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:56:46.839298 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:56:46.839310 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:56:46.839321 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:56:46.839335 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:56:46.839347 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:56:46.839359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:56:46.839371 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:56:46.839382 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:56:46.839401 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:56:46.839413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:56:46.839425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:56:46.839437 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:56:46.839452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:56:46.839465 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:56:46.839484 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:56:46.839496 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:56:46.839508 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:56:46.839520 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:56:46.839535 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:56:46.839547 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:56:46.839558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:56:46.839575 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:56:46.839587 systemd[1]: Reached target slices.target - Slice Units. May 13 23:56:46.847666 systemd[1]: Reached target swap.target - Swaps. May 13 23:56:46.847706 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:56:46.847720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
May 13 23:56:46.847734 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:56:46.847765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:56:46.847777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:56:46.847790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:56:46.847802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:56:46.847814 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:56:46.847826 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:56:46.847838 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:56:46.847850 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:46.847863 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:56:46.847879 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:56:46.847899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:56:46.847911 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:56:46.847923 systemd[1]: Reached target machines.target - Containers. May 13 23:56:46.847954 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:56:46.847966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:56:46.847978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:56:46.847990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:56:46.848005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:56:46.848018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:56:46.848031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:56:46.848042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:56:46.848054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:56:46.848067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:56:46.848079 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:56:46.848091 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:56:46.848106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:56:46.848118 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:56:46.848130 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:56:46.848142 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:56:46.848154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:56:46.848165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 13 23:56:46.848177 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:56:46.848189 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:56:46.848201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:56:46.848215 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:56:46.848227 systemd[1]: Stopped verity-setup.service. May 13 23:56:46.848241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:46.848256 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:56:46.848267 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:56:46.848279 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:56:46.848291 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:56:46.848304 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:56:46.848316 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:56:46.848328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:56:46.848342 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:56:46.848355 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:56:46.848367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:56:46.848379 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:56:46.848391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:56:46.848404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:56:46.848415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:56:46.848428 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:56:46.848439 kernel: loop: module loaded May 13 23:56:46.848455 kernel: fuse: init (API version 7.39) May 13 23:56:46.848466 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:56:46.848478 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:56:46.848489 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:56:46.848501 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:56:46.848513 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:56:46.848525 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:56:46.848537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:46.848551 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:56:46.848585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:56:46.848612 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:56:46.848628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 13 23:56:46.848676 systemd-journald[1113]: Collecting audit messages is disabled. May 13 23:56:46.848706 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:56:46.848719 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:56:46.848732 systemd-journald[1113]: Journal started May 13 23:56:46.848760 systemd-journald[1113]: Runtime Journal (/run/log/journal/dd58941498214a93a02b1871d5a11601) is 4.9M, max 39.3M, 34.3M free. May 13 23:56:46.859067 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:56:46.859143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:56:46.859160 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:56:46.859176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:56:46.453957 systemd[1]: Queued start job for default target multi-user.target. May 13 23:56:46.465186 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:56:46.465653 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:56:46.868698 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:56:46.867712 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:56:46.868624 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:56:46.870530 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:56:46.886727 kernel: loop0: detected capacity change from 0 to 8 May 13 23:56:46.886821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:56:46.889164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:56:46.890822 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:56:46.907958 kernel: ACPI: bus type drm_connector registered May 13 23:56:46.908060 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:56:46.915976 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:56:46.916198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:56:46.925216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:56:46.925825 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:56:46.935558 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:56:46.944808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:56:46.949803 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:56:46.950397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:56:46.951941 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:56:46.952835 kernel: loop1: detected capacity change from 0 to 205544 May 13 23:56:47.000287 systemd-journald[1113]: Time spent on flushing to /var/log/journal/dd58941498214a93a02b1871d5a11601 is 39.561ms for 1004 entries. May 13 23:56:47.000287 systemd-journald[1113]: System Journal (/var/log/journal/dd58941498214a93a02b1871d5a11601) is 8M, max 195.6M, 187.6M free. 
May 13 23:56:47.058000 systemd-journald[1113]: Received client request to flush runtime journal. May 13 23:56:47.058068 kernel: loop2: detected capacity change from 0 to 109808 May 13 23:56:47.003766 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:56:47.017009 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:56:47.034138 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. May 13 23:56:47.034157 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. May 13 23:56:47.057135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:56:47.068521 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:56:47.070194 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:56:47.071279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:56:47.077925 kernel: loop3: detected capacity change from 0 to 151640 May 13 23:56:47.077872 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:56:47.137331 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 23:56:47.154633 kernel: loop4: detected capacity change from 0 to 8 May 13 23:56:47.161630 kernel: loop5: detected capacity change from 0 to 205544 May 13 23:56:47.194293 kernel: loop6: detected capacity change from 0 to 109808 May 13 23:56:47.198921 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:56:47.206756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:56:47.212719 kernel: loop7: detected capacity change from 0 to 151640 May 13 23:56:47.240916 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 13 23:56:47.241548 (sd-merge)[1182]: Merged extensions into '/usr'. May 13 23:56:47.249973 systemd[1]: Reload requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:56:47.251264 systemd[1]: Reloading... May 13 23:56:47.256642 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. May 13 23:56:47.256657 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. May 13 23:56:47.379628 zram_generator::config[1214]: No configuration found. May 13 23:56:47.571335 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:56:47.620334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:56:47.683153 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:56:47.683505 systemd[1]: Reloading finished in 431 ms. May 13 23:56:47.707528 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:56:47.708462 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:56:47.709317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:56:47.721761 systemd[1]: Starting ensure-sysext.service... May 13 23:56:47.723735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
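For scale, the journald flush report above (39.561ms for 1004 entries) works out to a few tens of microseconds per entry:

    # Numbers taken from the "Time spent on flushing" message above.
    flush_ms, entries = 39.561, 1004
    print(f"~{flush_ms / entries * 1000:.1f} microseconds per flushed journal entry")
    # ~39.4 microseconds per entry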
May 13 23:56:47.755709 systemd[1]: Reload requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... May 13 23:56:47.755728 systemd[1]: Reloading... May 13 23:56:47.759510 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:56:47.760091 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:56:47.760953 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:56:47.761228 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:56:47.761324 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 13 23:56:47.765040 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:56:47.765168 systemd-tmpfiles[1259]: Skipping /boot May 13 23:56:47.777709 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:56:47.777826 systemd-tmpfiles[1259]: Skipping /boot May 13 23:56:47.852646 zram_generator::config[1288]: No configuration found. May 13 23:56:47.989647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:56:48.052237 systemd[1]: Reloading finished in 296 ms. May 13 23:56:48.066017 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:56:48.072771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:56:48.085008 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:56:48.088727 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:56:48.096181 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:56:48.100980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:56:48.105192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:56:48.108809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:56:48.116883 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.117064 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:56:48.124938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:56:48.132226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:56:48.136006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:56:48.136736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:48.136871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:56:48.136972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:56:48.141322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.141498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:56:48.141704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:48.141793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:56:48.147137 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:56:48.147870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.160222 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:56:48.161581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:56:48.162676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:56:48.167836 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:56:48.168058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:56:48.172629 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:56:48.174188 systemd[1]: Finished ensure-sysext.service. May 13 23:56:48.185553 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.185925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:56:48.191289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:56:48.191997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:48.192037 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:56:48.192096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:56:48.194014 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:56:48.195021 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:56:48.195051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.195411 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:56:48.195651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:56:48.197502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 13 23:56:48.203050 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:56:48.209762 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:56:48.224388 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:56:48.225290 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:56:48.241688 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:56:48.245402 augenrules[1375]: No rules May 13 23:56:48.245542 systemd-udevd[1337]: Using default interface naming scheme 'v255'. May 13 23:56:48.248260 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:56:48.249036 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:56:48.262289 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:56:48.276451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:56:48.281770 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:56:48.373343 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:56:48.374082 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:56:48.404380 systemd-resolved[1336]: Positive Trust Anchors: May 13 23:56:48.404751 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:56:48.404854 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:56:48.410939 systemd-resolved[1336]: Using system hostname 'ci-4284.0.0-n-a2f5fd92b0'. May 13 23:56:48.414283 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:56:48.415902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:56:48.422708 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:56:48.432247 systemd-networkd[1389]: lo: Link UP May 13 23:56:48.432258 systemd-networkd[1389]: lo: Gained carrier May 13 23:56:48.434298 systemd-networkd[1389]: Enumeration completed May 13 23:56:48.434442 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:56:48.435751 systemd[1]: Reached target network.target - Network. May 13 23:56:48.439921 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:56:48.443968 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:56:48.457168 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 13 23:56:48.462695 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 13 23:56:48.465665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:56:48.465791 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:56:48.471007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:56:48.475103 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:56:48.488128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:56:48.488951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:56:48.488989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:56:48.489022 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:56:48.489041 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:56:48.493204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:56:48.493416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:56:48.505703 kernel: ISO 9660 Extensions: RRIP_1991A May 13 23:56:48.511032 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:56:48.518290 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 13 23:56:48.519035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:56:48.519221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:56:48.520666 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:56:48.520831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:56:48.530403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:56:48.533927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:56:48.541682 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1397) May 13 23:56:48.551380 systemd-networkd[1389]: eth0: Configuring with /run/systemd/network/10-32:cc:da:f4:58:c7.network. May 13 23:56:48.553134 systemd-networkd[1389]: eth0: Link UP May 13 23:56:48.553144 systemd-networkd[1389]: eth0: Gained carrier May 13 23:56:48.556888 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:48.600897 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 23:56:48.616746 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 23:56:48.620757 systemd-networkd[1389]: eth1: Configuring with /run/systemd/network/10-0a:36:78:95:49:8b.network. May 13 23:56:48.621565 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. 
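The networkd messages above configure each NIC from a generated file named after its MAC address (10-32:cc:da:f4:58:c7.network for eth0, 10-0a:36:78:95:49:8b.network for eth1). A trivial Python sketch of that naming convention, inferred from the log lines rather than from networkd itself:

    def networkd_unit_for(mac: str) -> str:
        # Mirrors the file names visible in the log; purely illustrative.
        return f"/run/systemd/network/10-{mac.lower()}.network"

    print(networkd_unit_for("32:cc:da:f4:58:c7"))
    print(networkd_unit_for("0a:36:78:95:49:8b"))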
May 13 23:56:48.621688 systemd-networkd[1389]: eth1: Link UP May 13 23:56:48.621693 systemd-networkd[1389]: eth1: Gained carrier May 13 23:56:48.624176 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:48.625209 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:48.631260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:56:48.635405 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:56:48.638836 kernel: ACPI: button: Power Button [PWRF] May 13 23:56:48.667637 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 23:56:48.674766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:56:48.705348 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:56:48.706993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:56:48.764437 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 13 23:56:48.764567 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 13 23:56:48.764934 kernel: Console: switching to colour dummy device 80x25 May 13 23:56:48.764955 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 13 23:56:48.764969 kernel: [drm] features: -context_init May 13 23:56:48.771931 kernel: [drm] number of scanouts: 1 May 13 23:56:48.772021 kernel: [drm] number of cap sets: 0 May 13 23:56:48.772036 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 13 23:56:48.782233 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 13 23:56:48.782311 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:56:48.794874 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 13 23:56:48.805389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:56:48.805847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:48.828855 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:56:48.843220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:56:48.898650 kernel: EDAC MC: Ver: 3.0.0 May 13 23:56:48.920060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:56:48.927112 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:56:48.930523 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:56:48.955662 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:56:48.987619 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:56:48.988483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:56:48.988652 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:56:48.988893 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:56:48.988990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:56:48.989473 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 13 23:56:48.991509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:56:48.992773 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:56:48.993551 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:56:48.993635 systemd[1]: Reached target paths.target - Path Units. May 13 23:56:48.993739 systemd[1]: Reached target timers.target - Timer Units. May 13 23:56:48.995449 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:56:48.997897 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:56:49.002588 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:56:49.003056 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:56:49.003143 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:56:49.007913 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:56:49.009536 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:56:49.012032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:56:49.014941 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:56:49.016259 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:56:49.017678 systemd[1]: Reached target basic.target - Basic System. May 13 23:56:49.019095 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:56:49.019170 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:56:49.023711 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:56:49.027934 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:56:49.028301 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:56:49.032750 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:56:49.035813 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:56:49.045994 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:56:49.046423 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:56:49.049203 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:56:49.057715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:56:49.065368 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:56:49.071321 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:56:49.082917 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:56:49.085233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:56:49.085850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 13 23:56:49.092057 jq[1457]: false May 13 23:56:49.092852 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:56:49.095091 coreos-metadata[1455]: May 13 23:56:49.094 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:56:49.101368 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:56:49.103850 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:56:49.110650 coreos-metadata[1455]: May 13 23:56:49.110 INFO Fetch successful May 13 23:56:49.115190 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:56:49.115402 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:56:49.159236 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:56:49.161878 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:56:49.162423 dbus-daemon[1456]: [system] SELinux support is enabled May 13 23:56:49.164506 jq[1467]: true May 13 23:56:49.163112 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:56:49.179671 update_engine[1466]: I20250513 23:56:49.178409 1466 main.cc:92] Flatcar Update Engine starting May 13 23:56:49.183464 update_engine[1466]: I20250513 23:56:49.183286 1466 update_check_scheduler.cc:74] Next update check in 2m23s May 13 23:56:49.185692 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:56:49.185926 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:56:49.196223 extend-filesystems[1458]: Found loop4 May 13 23:56:49.196223 extend-filesystems[1458]: Found loop5 May 13 23:56:49.196223 extend-filesystems[1458]: Found loop6 May 13 23:56:49.196223 extend-filesystems[1458]: Found loop7 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda May 13 23:56:49.196223 extend-filesystems[1458]: Found vda1 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda2 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda3 May 13 23:56:49.196223 extend-filesystems[1458]: Found usr May 13 23:56:49.196223 extend-filesystems[1458]: Found vda4 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda6 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda7 May 13 23:56:49.196223 extend-filesystems[1458]: Found vda9 May 13 23:56:49.196223 extend-filesystems[1458]: Checking size of /dev/vda9 May 13 23:56:49.283386 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 13 23:56:49.283511 jq[1488]: true May 13 23:56:49.198775 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:56:49.283839 extend-filesystems[1458]: Resized partition /dev/vda9 May 13 23:56:49.284193 tar[1470]: linux-amd64/helm May 13 23:56:49.198815 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:56:49.284553 extend-filesystems[1500]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:56:49.211455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
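The coreos-metadata lines above show the agent fetching the droplet's metadata document from the link-local endpoint http://169.254.169.254/metadata/v1.json. The same request can be reproduced by hand; the Python sketch below is purely illustrative (coreos-metadata itself is a separate agent, and the "hostname" key is only an assumed example field from the metadata schema):

    # Re-fetch the droplet metadata document named in the log above.
    # Illustrative only; not how the coreos-metadata agent is implemented.
    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def fetch_metadata(timeout=5):
        # The endpoint is link-local and only reachable from inside the droplet.
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        meta = fetch_metadata()
        print(meta.get("hostname"))  # "hostname" is an assumed example key
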
May 13 23:56:49.211578 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 13 23:56:49.211624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:56:49.231130 systemd[1]: Started update-engine.service - Update Engine. May 13 23:56:49.236359 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:56:49.251096 systemd-logind[1465]: New seat seat0. May 13 23:56:49.258644 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:56:49.268902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:56:49.270285 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button) May 13 23:56:49.270302 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:56:49.280815 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:56:49.281477 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:56:49.374203 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 13 23:56:49.390733 extend-filesystems[1500]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:56:49.390733 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 8 May 13 23:56:49.390733 extend-filesystems[1500]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 13 23:56:49.403162 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1401) May 13 23:56:49.393752 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:56:49.403281 extend-filesystems[1458]: Resized filesystem in /dev/vda9 May 13 23:56:49.403281 extend-filesystems[1458]: Found vdb May 13 23:56:49.393971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:56:49.437935 bash[1519]: Updated "/home/core/.ssh/authorized_keys" May 13 23:56:49.439362 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:56:49.448844 systemd[1]: Starting sshkeys.service... May 13 23:56:49.562257 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 23:56:49.566895 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 23:56:49.651241 coreos-metadata[1529]: May 13 23:56:49.651 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:56:49.664015 coreos-metadata[1529]: May 13 23:56:49.663 INFO Fetch successful May 13 23:56:49.666896 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:56:49.702889 unknown[1529]: wrote ssh authorized keys file for user: core May 13 23:56:49.712846 systemd-networkd[1389]: eth0: Gained IPv6LL May 13 23:56:49.713443 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:49.717778 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:56:49.720761 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:56:49.732022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
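The extend-filesystems/resize2fs messages above grow the root filesystem from 553472 to 15121403 4 KiB blocks while it stays mounted on /. A quick sanity check of what those block counts mean, using only the numbers reported in the log:

    # Arithmetic on the resize figures from the log (4 KiB ext4 blocks).
    BLOCK_SIZE = 4096
    old_blocks = 553_472      # size before the on-line resize
    new_blocks = 15_121_403   # size after the on-line resize

    def to_gib(blocks, block_size=BLOCK_SIZE):
        return blocks * block_size / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~57.68 GiB
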
May 13 23:56:49.742076 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:56:49.752338 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys" May 13 23:56:49.754539 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:56:49.760105 systemd[1]: Finished sshkeys.service. May 13 23:56:49.781988 containerd[1489]: time="2025-05-13T23:56:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:56:49.791656 containerd[1489]: time="2025-05-13T23:56:49.787463670Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:56:49.817003 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:56:49.837280 containerd[1489]: time="2025-05-13T23:56:49.837221733Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.949µs" May 13 23:56:49.837280 containerd[1489]: time="2025-05-13T23:56:49.837264615Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:56:49.837280 containerd[1489]: time="2025-05-13T23:56:49.837290514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:56:49.838289 containerd[1489]: time="2025-05-13T23:56:49.837509714Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:56:49.838289 containerd[1489]: time="2025-05-13T23:56:49.837539155Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:56:49.838289 containerd[1489]: time="2025-05-13T23:56:49.837574879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:56:49.842127 containerd[1489]: time="2025-05-13T23:56:49.842085938Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:56:49.842127 containerd[1489]: time="2025-05-13T23:56:49.842119270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842428298Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842452590Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842466387Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842477184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842574684Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842819701Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842849172Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842859570Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.842890514Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.843139900Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:56:49.845620 containerd[1489]: time="2025-05-13T23:56:49.843214342Z" level=info msg="metadata content store policy set" policy=shared May 13 23:56:49.852848 containerd[1489]: time="2025-05-13T23:56:49.852800765Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:56:49.852920 containerd[1489]: time="2025-05-13T23:56:49.852867866Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:56:49.852920 containerd[1489]: time="2025-05-13T23:56:49.852885674Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:56:49.852920 containerd[1489]: time="2025-05-13T23:56:49.852900158Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:56:49.852999 containerd[1489]: time="2025-05-13T23:56:49.852940848Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:56:49.852999 containerd[1489]: time="2025-05-13T23:56:49.852961597Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:56:49.852999 containerd[1489]: time="2025-05-13T23:56:49.852978962Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:56:49.853062 containerd[1489]: time="2025-05-13T23:56:49.852997634Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:56:49.853062 containerd[1489]: time="2025-05-13T23:56:49.853017057Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:56:49.853062 containerd[1489]: time="2025-05-13T23:56:49.853030133Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:56:49.853062 containerd[1489]: time="2025-05-13T23:56:49.853040507Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:56:49.853062 containerd[1489]: time="2025-05-13T23:56:49.853056757Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853219433Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 
23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853249837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853266133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853285598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853300305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853312415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853324530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853339086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:56:49.853354 containerd[1489]: time="2025-05-13T23:56:49.853352872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:56:49.853552 containerd[1489]: time="2025-05-13T23:56:49.853365431Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:56:49.853552 containerd[1489]: time="2025-05-13T23:56:49.853377766Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:56:49.853552 containerd[1489]: time="2025-05-13T23:56:49.853459082Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:56:49.853552 containerd[1489]: time="2025-05-13T23:56:49.853498488Z" level=info msg="Start snapshots syncer" May 13 23:56:49.853552 containerd[1489]: time="2025-05-13T23:56:49.853542626Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:56:49.855136 containerd[1489]: time="2025-05-13T23:56:49.853898567Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:56:49.855136 containerd[1489]: time="2025-05-13T23:56:49.853997051Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854110436Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854240932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854273707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854291609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854309077Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854327855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854341828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854353861Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854385078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: 
time="2025-05-13T23:56:49.854400146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854410674Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854448442Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854466082Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:56:49.855443 containerd[1489]: time="2025-05-13T23:56:49.854476615Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854488587Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854504299Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854517242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854533730Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854553557Z" level=info msg="runtime interface created" May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854560362Z" level=info msg="created NRI interface" May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854571718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.854587104Z" level=info msg="Connect containerd service" May 13 23:56:49.857572 containerd[1489]: time="2025-05-13T23:56:49.857430476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:56:49.861423 containerd[1489]: time="2025-05-13T23:56:49.861059810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148181129Z" level=info msg="Start subscribing containerd event" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148259341Z" level=info msg="Start recovering state" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148392657Z" level=info msg="Start event monitor" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148407901Z" level=info msg="Start cni network conf syncer for default" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148419113Z" level=info msg="Start streaming server" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148439554Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:56:50.150700 containerd[1489]: 
time="2025-05-13T23:56:50.148449966Z" level=info msg="runtime interface starting up..." May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148458471Z" level=info msg="starting plugins..." May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148480455Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148629611Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:56:50.150700 containerd[1489]: time="2025-05-13T23:56:50.148736193Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:56:50.148941 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:56:50.154717 containerd[1489]: time="2025-05-13T23:56:50.154504199Z" level=info msg="containerd successfully booted in 0.373000s" May 13 23:56:50.197200 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:56:50.242147 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:56:50.251534 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:56:50.291295 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:56:50.291681 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:56:50.300069 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:56:50.356681 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:56:50.361180 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:56:50.365901 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:56:50.367500 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:56:50.403261 tar[1470]: linux-amd64/LICENSE May 13 23:56:50.403261 tar[1470]: linux-amd64/README.md May 13 23:56:50.422576 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:56:50.545750 systemd-networkd[1389]: eth1: Gained IPv6LL May 13 23:56:50.548068 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:50.932111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:56:50.933845 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:56:50.935675 systemd[1]: Startup finished in 1.069s (kernel) + 5.077s (initrd) + 5.183s (userspace) = 11.330s. May 13 23:56:50.941272 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:56:51.512172 kubelet[1589]: E0513 23:56:51.512063 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:56:51.514894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:56:51.515076 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:56:51.515747 systemd[1]: kubelet.service: Consumed 1.204s CPU time, 236.2M memory peak. May 13 23:56:55.170374 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:56:55.171858 systemd[1]: Started sshd@0-24.199.96.208:22-147.75.109.163:34656.service - OpenSSH per-connection server daemon (147.75.109.163:34656). 
May 13 23:56:55.263397 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 34656 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:56:55.264720 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:56:55.271965 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:56:55.273376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:56:55.281916 systemd-logind[1465]: New session 1 of user core. May 13 23:56:55.303272 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:56:55.306409 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:56:55.318945 (systemd)[1605]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:56:55.322703 systemd-logind[1465]: New session c1 of user core. May 13 23:56:55.469835 systemd[1605]: Queued start job for default target default.target. May 13 23:56:55.476433 systemd[1605]: Created slice app.slice - User Application Slice. May 13 23:56:55.476481 systemd[1605]: Reached target paths.target - Paths. May 13 23:56:55.476546 systemd[1605]: Reached target timers.target - Timers. May 13 23:56:55.478740 systemd[1605]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:56:55.494282 systemd[1605]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:56:55.494461 systemd[1605]: Reached target sockets.target - Sockets. May 13 23:56:55.494534 systemd[1605]: Reached target basic.target - Basic System. May 13 23:56:55.494591 systemd[1605]: Reached target default.target - Main User Target. May 13 23:56:55.494637 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:56:55.495131 systemd[1605]: Startup finished in 162ms. May 13 23:56:55.505872 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:56:55.561791 systemd[1]: Started sshd@1-24.199.96.208:22-206.168.34.35:54034.service - OpenSSH per-connection server daemon (206.168.34.35:54034). May 13 23:56:55.578983 systemd[1]: Started sshd@2-24.199.96.208:22-147.75.109.163:34660.service - OpenSSH per-connection server daemon (147.75.109.163:34660). May 13 23:56:55.643262 sshd[1618]: Accepted publickey for core from 147.75.109.163 port 34660 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:56:55.645183 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:56:55.651739 systemd-logind[1465]: New session 2 of user core. May 13 23:56:55.662831 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:56:55.723351 sshd[1620]: Connection closed by 147.75.109.163 port 34660 May 13 23:56:55.724005 sshd-session[1618]: pam_unix(sshd:session): session closed for user core May 13 23:56:55.736717 systemd[1]: sshd@2-24.199.96.208:22-147.75.109.163:34660.service: Deactivated successfully. May 13 23:56:55.738493 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:56:55.740712 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. May 13 23:56:55.742887 systemd[1]: Started sshd@3-24.199.96.208:22-147.75.109.163:34674.service - OpenSSH per-connection server daemon (147.75.109.163:34674). May 13 23:56:55.744154 systemd-logind[1465]: Removed session 2. 
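In the "Accepted publickey ... RSA SHA256:bC78…" entries, the SHA256 string is OpenSSH's fingerprint of the client key: the SHA-256 digest of the raw key blob, base64-encoded with the trailing padding stripped. A small sketch that computes the same form of fingerprint from an authorized_keys-style line (the key material in the usage comment is a placeholder, not the key from this log):

    # Compute an OpenSSH-style SHA256 fingerprint, the format shown in the
    # "Accepted publickey" lines above.
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64-blob> [comment]"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        # OpenSSH prints base64 without trailing '=' padding.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Usage (placeholder key material):
    # print(ssh_fingerprint("ssh-ed25519 AAAAC3Nza... core@host"))
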
May 13 23:56:55.815065 sshd[1625]: Accepted publickey for core from 147.75.109.163 port 34674 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:56:55.816851 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:56:55.822429 systemd-logind[1465]: New session 3 of user core. May 13 23:56:55.830794 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:56:55.887843 sshd[1628]: Connection closed by 147.75.109.163 port 34674 May 13 23:56:55.888510 sshd-session[1625]: pam_unix(sshd:session): session closed for user core May 13 23:56:55.904952 systemd[1]: sshd@3-24.199.96.208:22-147.75.109.163:34674.service: Deactivated successfully. May 13 23:56:55.907811 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:56:55.910091 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit. May 13 23:56:55.912853 systemd[1]: Started sshd@4-24.199.96.208:22-147.75.109.163:34676.service - OpenSSH per-connection server daemon (147.75.109.163:34676). May 13 23:56:55.914783 systemd-logind[1465]: Removed session 3. May 13 23:56:55.976240 sshd[1634]: Accepted publickey for core from 147.75.109.163 port 34676 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:56:55.977710 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:56:55.984466 systemd-logind[1465]: New session 4 of user core. May 13 23:56:55.991955 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:56:56.053412 sshd[1637]: Connection closed by 147.75.109.163 port 34676 May 13 23:56:56.054156 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 13 23:56:56.068807 systemd[1]: sshd@4-24.199.96.208:22-147.75.109.163:34676.service: Deactivated successfully. May 13 23:56:56.071030 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:56:56.073850 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. May 13 23:56:56.075959 systemd[1]: Started sshd@5-24.199.96.208:22-147.75.109.163:34692.service - OpenSSH per-connection server daemon (147.75.109.163:34692). May 13 23:56:56.077196 systemd-logind[1465]: Removed session 4. May 13 23:56:56.131949 sshd[1642]: Accepted publickey for core from 147.75.109.163 port 34692 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:56:56.133511 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:56:56.139942 systemd-logind[1465]: New session 5 of user core. May 13 23:56:56.148905 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:56:56.216466 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:56:56.216792 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:56:56.683296 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:56:56.698058 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:56:57.100115 dockerd[1664]: time="2025-05-13T23:56:57.099626526Z" level=info msg="Starting up" May 13 23:56:57.103788 dockerd[1664]: time="2025-05-13T23:56:57.103689035Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:56:57.141815 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2974352448-merged.mount: Deactivated successfully. May 13 23:56:57.171456 dockerd[1664]: time="2025-05-13T23:56:57.171254787Z" level=info msg="Loading containers: start." May 13 23:56:57.347643 kernel: Initializing XFRM netlink socket May 13 23:56:57.348622 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:57.348878 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:57.359757 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:57.447131 systemd-networkd[1389]: docker0: Link UP May 13 23:56:57.447727 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. May 13 23:56:57.523272 dockerd[1664]: time="2025-05-13T23:56:57.523233278Z" level=info msg="Loading containers: done." May 13 23:56:57.543165 dockerd[1664]: time="2025-05-13T23:56:57.542749514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:56:57.543165 dockerd[1664]: time="2025-05-13T23:56:57.542851421Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:56:57.543165 dockerd[1664]: time="2025-05-13T23:56:57.542965634Z" level=info msg="Daemon has completed initialization" May 13 23:56:57.577086 dockerd[1664]: time="2025-05-13T23:56:57.576581360Z" level=info msg="API listen on /run/docker.sock" May 13 23:56:57.576983 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:56:57.701873 systemd[1]: Started sshd@6-24.199.96.208:22-218.92.0.188:32415.service - OpenSSH per-connection server daemon (218.92.0.188:32415). May 13 23:56:58.398465 containerd[1489]: time="2025-05-13T23:56:58.398422499Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 23:56:58.797220 sshd-session[1873]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 23:56:58.974836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4071228115.mount: Deactivated successfully. 
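Once dockerd logs "API listen on /run/docker.sock", that unix socket speaks the HTTP-based Docker Engine API; /version is one of its standard endpoints. A minimal sketch that queries it directly over AF_UNIX (response handling kept deliberately crude):

    # Query the Docker daemon over the unix socket named in the log
    # ("API listen on /run/docker.sock").
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path, timeout=5):
            super().__init__("localhost", timeout=timeout)
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.settimeout(self.timeout)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode()[:200])
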
May 13 23:57:00.120583 containerd[1489]: time="2025-05-13T23:57:00.120498180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:00.121537 containerd[1489]: time="2025-05-13T23:57:00.121409335Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 13 23:57:00.122018 containerd[1489]: time="2025-05-13T23:57:00.121988941Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:00.124653 containerd[1489]: time="2025-05-13T23:57:00.124622165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:00.126198 containerd[1489]: time="2025-05-13T23:57:00.126158052Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.727688513s" May 13 23:57:00.126330 containerd[1489]: time="2025-05-13T23:57:00.126311222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 23:57:00.128690 containerd[1489]: time="2025-05-13T23:57:00.128648554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 23:57:00.910837 sshd[1870]: PAM: Permission denied for root from 218.92.0.188 May 13 23:57:01.216351 sshd-session[1931]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 23:57:01.597279 containerd[1489]: time="2025-05-13T23:57:01.597060400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:01.598094 containerd[1489]: time="2025-05-13T23:57:01.598037338Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 13 23:57:01.598702 containerd[1489]: time="2025-05-13T23:57:01.598676082Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:01.602271 containerd[1489]: time="2025-05-13T23:57:01.602213239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:01.603234 containerd[1489]: time="2025-05-13T23:57:01.602731065Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.474048272s" May 13 23:57:01.603234 containerd[1489]: 
time="2025-05-13T23:57:01.602766063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 23:57:01.603345 containerd[1489]: time="2025-05-13T23:57:01.603260093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 23:57:01.765747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:57:01.767453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:01.906020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:01.912960 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:57:01.972425 kubelet[1940]: E0513 23:57:01.972357 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:57:01.976121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:57:01.976274 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:57:01.977082 systemd[1]: kubelet.service: Consumed 175ms CPU time, 97M memory peak. May 13 23:57:02.742717 sshd[1870]: PAM: Permission denied for root from 218.92.0.188 May 13 23:57:02.753562 containerd[1489]: time="2025-05-13T23:57:02.753499350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:02.754533 containerd[1489]: time="2025-05-13T23:57:02.754460095Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 13 23:57:02.755302 containerd[1489]: time="2025-05-13T23:57:02.755255775Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:02.758234 containerd[1489]: time="2025-05-13T23:57:02.758178344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:02.762588 containerd[1489]: time="2025-05-13T23:57:02.761541159Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.158245959s" May 13 23:57:02.762588 containerd[1489]: time="2025-05-13T23:57:02.761584772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 23:57:02.763980 containerd[1489]: time="2025-05-13T23:57:02.763956540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 23:57:03.049673 sshd-session[1951]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 
23:57:03.827751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334191788.mount: Deactivated successfully. May 13 23:57:04.363479 containerd[1489]: time="2025-05-13T23:57:04.363410111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.365282 containerd[1489]: time="2025-05-13T23:57:04.365226926Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 13 23:57:04.367000 containerd[1489]: time="2025-05-13T23:57:04.366154848Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.368160 containerd[1489]: time="2025-05-13T23:57:04.368127826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:04.369430 containerd[1489]: time="2025-05-13T23:57:04.369405289Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.605099516s" May 13 23:57:04.369544 containerd[1489]: time="2025-05-13T23:57:04.369529825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 23:57:04.370228 containerd[1489]: time="2025-05-13T23:57:04.370199069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:57:04.712852 sshd[1870]: PAM: Permission denied for root from 218.92.0.188 May 13 23:57:04.754240 systemd-resolved[1336]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 13 23:57:04.859229 sshd[1870]: Received disconnect from 218.92.0.188 port 32415:11: [preauth] May 13 23:57:04.859229 sshd[1870]: Disconnected from authenticating user root 218.92.0.188 port 32415 [preauth] May 13 23:57:04.861724 systemd[1]: sshd@6-24.199.96.208:22-218.92.0.188:32415.service: Deactivated successfully. May 13 23:57:04.886209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262546771.mount: Deactivated successfully. 
May 13 23:57:05.746011 containerd[1489]: time="2025-05-13T23:57:05.745880847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:05.746971 containerd[1489]: time="2025-05-13T23:57:05.746907978Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 23:57:05.747653 containerd[1489]: time="2025-05-13T23:57:05.747458520Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:05.749857 containerd[1489]: time="2025-05-13T23:57:05.749808832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:05.750989 containerd[1489]: time="2025-05-13T23:57:05.750809013Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.380498537s" May 13 23:57:05.750989 containerd[1489]: time="2025-05-13T23:57:05.750856070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:57:05.751516 containerd[1489]: time="2025-05-13T23:57:05.751451393Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:57:06.230727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035074102.mount: Deactivated successfully. 
May 13 23:57:06.236701 containerd[1489]: time="2025-05-13T23:57:06.236058278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:06.237326 containerd[1489]: time="2025-05-13T23:57:06.237189825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:57:06.238210 containerd[1489]: time="2025-05-13T23:57:06.238186626Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:06.241690 containerd[1489]: time="2025-05-13T23:57:06.240903292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:57:06.241690 containerd[1489]: time="2025-05-13T23:57:06.241548847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 490.067076ms" May 13 23:57:06.241690 containerd[1489]: time="2025-05-13T23:57:06.241574416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:57:06.242391 containerd[1489]: time="2025-05-13T23:57:06.242369842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 23:57:06.732803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755741950.mount: Deactivated successfully. May 13 23:57:07.824903 systemd-resolved[1336]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
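The containerd pull messages between the docker start and this point report a compressed "bytes read" figure for each control-plane image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns and pause; the etcd pull that follows is still in flight here). Summing those figures straight from the log gives a rough idea of the download volume so far:

    # Sum the "bytes read" figures reported by containerd for the pulls
    # completed so far in the log (etcd is still being pulled at this point).
    pulled_bytes = {
        "kube-apiserver:v1.31.8": 27_960_987,
        "kube-controller-manager:v1.31.8": 24_713_776,
        "kube-scheduler:v1.31.8": 18_780_386,
        "kube-proxy:v1.31.8": 30_354_625,
        "coredns:v1.11.1": 18_185_761,
        "pause:3.10": 321_138,
    }
    total = sum(pulled_bytes.values())
    print(f"{total} bytes ~= {total / 2**20:.1f} MiB")  # ~114.7 MiB
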
May 13 23:57:08.510404 containerd[1489]: time="2025-05-13T23:57:08.510306700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:08.511831 containerd[1489]: time="2025-05-13T23:57:08.511782135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 23:57:08.512741 containerd[1489]: time="2025-05-13T23:57:08.512687614Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:08.514658 containerd[1489]: time="2025-05-13T23:57:08.514595589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:08.515730 containerd[1489]: time="2025-05-13T23:57:08.515564612Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.273164183s" May 13 23:57:08.515730 containerd[1489]: time="2025-05-13T23:57:08.515623890Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 23:57:10.928448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:10.929098 systemd[1]: kubelet.service: Consumed 175ms CPU time, 97M memory peak. May 13 23:57:10.931467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:10.961722 systemd[1]: Reload requested from client PID 2091 ('systemctl') (unit session-5.scope)... May 13 23:57:10.961741 systemd[1]: Reloading... May 13 23:57:11.077662 zram_generator::config[2137]: No configuration found. May 13 23:57:11.202695 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:11.302940 systemd[1]: Reloading finished in 340 ms. May 13 23:57:11.358243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:11.361394 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:57:11.361646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:11.361706 systemd[1]: kubelet.service: Consumed 99ms CPU time, 83.6M memory peak. May 13 23:57:11.363853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:11.367695 sshd[1616]: Connection closed by 206.168.34.35 port 54034 [preauth] May 13 23:57:11.370105 systemd[1]: sshd@1-24.199.96.208:22-206.168.34.35:54034.service: Deactivated successfully. May 13 23:57:11.488156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:11.500310 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:57:11.550458 kubelet[2195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:11.551061 kubelet[2195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:57:11.551061 kubelet[2195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:11.553536 kubelet[2195]: I0513 23:57:11.551955 2195 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:57:12.167519 kubelet[2195]: I0513 23:57:12.167468 2195 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:57:12.167519 kubelet[2195]: I0513 23:57:12.167508 2195 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:57:12.167897 kubelet[2195]: I0513 23:57:12.167870 2195 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:57:12.194675 kubelet[2195]: E0513 23:57:12.194627 2195 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.199.96.208:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:12.197130 kubelet[2195]: I0513 23:57:12.196825 2195 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:57:12.210648 kubelet[2195]: I0513 23:57:12.210592 2195 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:57:12.215815 kubelet[2195]: I0513 23:57:12.215790 2195 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:57:12.216916 kubelet[2195]: I0513 23:57:12.216891 2195 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:57:12.217091 kubelet[2195]: I0513 23:57:12.217056 2195 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:57:12.217308 kubelet[2195]: I0513 23:57:12.217095 2195 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-a2f5fd92b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:57:12.217398 kubelet[2195]: I0513 23:57:12.217311 2195 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:57:12.217398 kubelet[2195]: I0513 23:57:12.217321 2195 container_manager_linux.go:300] "Creating device plugin manager" May 13 23:57:12.217448 kubelet[2195]: I0513 23:57:12.217439 2195 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:12.219373 kubelet[2195]: I0513 23:57:12.219096 2195 kubelet.go:408] "Attempting to sync node with API server" May 13 23:57:12.219373 kubelet[2195]: I0513 23:57:12.219129 2195 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:57:12.219373 kubelet[2195]: I0513 23:57:12.219165 2195 kubelet.go:314] "Adding apiserver pod source" May 13 23:57:12.219373 kubelet[2195]: I0513 23:57:12.219185 2195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:57:12.224966 kubelet[2195]: W0513 23:57:12.224920 2195 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.96.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-a2f5fd92b0&limit=500&resourceVersion=0": dial tcp 24.199.96.208:6443: connect: connection refused May 13 23:57:12.225096 kubelet[2195]: E0513 23:57:12.225080 2195 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://24.199.96.208:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-a2f5fd92b0&limit=500&resourceVersion=0\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:12.225519 kubelet[2195]: W0513 23:57:12.225488 2195 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.96.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.96.208:6443: connect: connection refused May 13 23:57:12.225648 kubelet[2195]: E0513 23:57:12.225590 2195 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.96.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:12.226111 kubelet[2195]: I0513 23:57:12.225985 2195 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:57:12.227820 kubelet[2195]: I0513 23:57:12.227712 2195 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:57:12.228415 kubelet[2195]: W0513 23:57:12.228396 2195 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:57:12.231629 kubelet[2195]: I0513 23:57:12.230283 2195 server.go:1269] "Started kubelet" May 13 23:57:12.231936 kubelet[2195]: I0513 23:57:12.231921 2195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:57:12.232240 kubelet[2195]: I0513 23:57:12.232220 2195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:57:12.233627 kubelet[2195]: I0513 23:57:12.232627 2195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:57:12.236581 kubelet[2195]: I0513 23:57:12.236556 2195 server.go:460] "Adding debug handlers to kubelet server" May 13 23:57:12.239926 kubelet[2195]: I0513 23:57:12.239025 2195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:57:12.239926 kubelet[2195]: I0513 23:57:12.239291 2195 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:57:12.242619 kubelet[2195]: I0513 23:57:12.242062 2195 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:57:12.242619 kubelet[2195]: E0513 23:57:12.242222 2195 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-a2f5fd92b0\" not found" May 13 23:57:12.242619 kubelet[2195]: I0513 23:57:12.242490 2195 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:57:12.242619 kubelet[2195]: I0513 23:57:12.242541 2195 reconciler.go:26] "Reconciler: start to sync state" May 13 23:57:12.251955 kubelet[2195]: E0513 23:57:12.246973 2195 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.96.208:6443/api/v1/namespaces/default/events\": dial tcp 24.199.96.208:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-a2f5fd92b0.183f3b872e86dec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-a2f5fd92b0,UID:ci-4284.0.0-n-a2f5fd92b0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-a2f5fd92b0,},FirstTimestamp:2025-05-13 23:57:12.230256323 +0000 UTC m=+0.726140382,LastTimestamp:2025-05-13 23:57:12.230256323 +0000 UTC m=+0.726140382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-a2f5fd92b0,}" May 13 23:57:12.252143 kubelet[2195]: W0513 23:57:12.252019 2195 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.96.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.96.208:6443: connect: connection refused May 13 23:57:12.252143 kubelet[2195]: E0513 23:57:12.252070 2195 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.96.208:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:12.252201 kubelet[2195]: E0513 23:57:12.252155 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.96.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-a2f5fd92b0?timeout=10s\": dial tcp 24.199.96.208:6443: connect: connection refused" interval="200ms" May 13 23:57:12.254013 kubelet[2195]: I0513 23:57:12.253844 2195 factory.go:221] Registration of the systemd container factory successfully May 13 23:57:12.254013 kubelet[2195]: I0513 23:57:12.253926 2195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:57:12.261490 kubelet[2195]: E0513 23:57:12.261469 2195 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:57:12.261776 kubelet[2195]: I0513 23:57:12.261762 2195 factory.go:221] Registration of the containerd container factory successfully May 13 23:57:12.271431 kubelet[2195]: I0513 23:57:12.271296 2195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:57:12.272564 kubelet[2195]: I0513 23:57:12.272545 2195 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:57:12.272676 kubelet[2195]: I0513 23:57:12.272668 2195 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:57:12.272736 kubelet[2195]: I0513 23:57:12.272729 2195 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:57:12.272829 kubelet[2195]: E0513 23:57:12.272811 2195 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:57:12.281834 kubelet[2195]: W0513 23:57:12.281717 2195 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.96.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.96.208:6443: connect: connection refused May 13 23:57:12.282057 kubelet[2195]: E0513 23:57:12.281814 2195 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.96.208:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:12.287693 kubelet[2195]: I0513 23:57:12.287623 2195 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:57:12.287693 kubelet[2195]: I0513 23:57:12.287643 2195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:57:12.287693 kubelet[2195]: I0513 23:57:12.287665 2195 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:12.291338 kubelet[2195]: I0513 23:57:12.291315 2195 policy_none.go:49] "None policy: Start" May 13 23:57:12.292124 kubelet[2195]: I0513 23:57:12.292106 2195 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:57:12.292196 kubelet[2195]: I0513 23:57:12.292131 2195 state_mem.go:35] "Initializing new in-memory state store" May 13 23:57:12.300379 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:57:12.310834 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:57:12.314340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:57:12.325702 kubelet[2195]: I0513 23:57:12.325667 2195 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:57:12.325871 kubelet[2195]: I0513 23:57:12.325856 2195 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:57:12.326097 kubelet[2195]: I0513 23:57:12.325874 2195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:57:12.326993 kubelet[2195]: I0513 23:57:12.326322 2195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:57:12.330258 kubelet[2195]: E0513 23:57:12.330237 2195 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-a2f5fd92b0\" not found" May 13 23:57:12.381729 systemd[1]: Created slice kubepods-burstable-podbd3bd8382dc4c906db9bbb5a162d0413.slice - libcontainer container kubepods-burstable-podbd3bd8382dc4c906db9bbb5a162d0413.slice. May 13 23:57:12.404140 systemd[1]: Created slice kubepods-burstable-podd17e3f568a7adfc0aad5edcf4d1b3cef.slice - libcontainer container kubepods-burstable-podd17e3f568a7adfc0aad5edcf4d1b3cef.slice. 
May 13 23:57:12.427937 systemd[1]: Created slice kubepods-burstable-pod6a1871b69c07de619cc57f6eeb54d4a9.slice - libcontainer container kubepods-burstable-pod6a1871b69c07de619cc57f6eeb54d4a9.slice. May 13 23:57:12.432184 kubelet[2195]: I0513 23:57:12.432100 2195 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.432481 kubelet[2195]: E0513 23:57:12.432459 2195 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.96.208:6443/api/v1/nodes\": dial tcp 24.199.96.208:6443: connect: connection refused" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.453095 kubelet[2195]: E0513 23:57:12.453053 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.96.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-a2f5fd92b0?timeout=10s\": dial tcp 24.199.96.208:6443: connect: connection refused" interval="400ms" May 13 23:57:12.544489 kubelet[2195]: I0513 23:57:12.544220 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544489 kubelet[2195]: I0513 23:57:12.544275 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544489 kubelet[2195]: I0513 23:57:12.544299 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544489 kubelet[2195]: I0513 23:57:12.544316 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544489 kubelet[2195]: I0513 23:57:12.544332 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a1871b69c07de619cc57f6eeb54d4a9-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"6a1871b69c07de619cc57f6eeb54d4a9\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544793 kubelet[2195]: I0513 23:57:12.544350 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544793 kubelet[2195]: I0513 23:57:12.544367 
2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544793 kubelet[2195]: I0513 23:57:12.544413 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.544793 kubelet[2195]: I0513 23:57:12.544440 2195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.634840 kubelet[2195]: I0513 23:57:12.634425 2195 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.634840 kubelet[2195]: E0513 23:57:12.634794 2195 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.96.208:6443/api/v1/nodes\": dial tcp 24.199.96.208:6443: connect: connection refused" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:12.702008 kubelet[2195]: E0513 23:57:12.701742 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:12.702669 containerd[1489]: time="2025-05-13T23:57:12.702488189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-a2f5fd92b0,Uid:bd3bd8382dc4c906db9bbb5a162d0413,Namespace:kube-system,Attempt:0,}" May 13 23:57:12.725718 kubelet[2195]: E0513 23:57:12.725362 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:12.726413 containerd[1489]: time="2025-05-13T23:57:12.726134841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0,Uid:d17e3f568a7adfc0aad5edcf4d1b3cef,Namespace:kube-system,Attempt:0,}" May 13 23:57:12.733154 kubelet[2195]: E0513 23:57:12.733126 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:12.734159 containerd[1489]: time="2025-05-13T23:57:12.733961154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-a2f5fd92b0,Uid:6a1871b69c07de619cc57f6eeb54d4a9,Namespace:kube-system,Attempt:0,}" May 13 23:57:12.746883 containerd[1489]: time="2025-05-13T23:57:12.746821790Z" level=info msg="connecting to shim 0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3" address="unix:///run/containerd/s/c05cfe9aa2ecd15487a980df842364cff826ddcea71f5a93e36edc16b97c24d9" namespace=k8s.io protocol=ttrpc version=3 May 13 
23:57:12.798497 containerd[1489]: time="2025-05-13T23:57:12.798415418Z" level=info msg="connecting to shim 2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a" address="unix:///run/containerd/s/c04baacf60b54f21ebe8cc08ec45e2317a36ca818135b5ddebd7bc0e8d1a4bbd" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:12.799149 containerd[1489]: time="2025-05-13T23:57:12.799007688Z" level=info msg="connecting to shim d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6" address="unix:///run/containerd/s/8864409a5eabcdcc9888a46090a9b2f1e8596bc1a47653c48c591449e7b564ac" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:12.854219 kubelet[2195]: E0513 23:57:12.854157 2195 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.96.208:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-a2f5fd92b0?timeout=10s\": dial tcp 24.199.96.208:6443: connect: connection refused" interval="800ms" May 13 23:57:12.893864 systemd[1]: Started cri-containerd-0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3.scope - libcontainer container 0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3. May 13 23:57:12.895562 systemd[1]: Started cri-containerd-2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a.scope - libcontainer container 2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a. May 13 23:57:12.897895 systemd[1]: Started cri-containerd-d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6.scope - libcontainer container d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6. May 13 23:57:13.019625 containerd[1489]: time="2025-05-13T23:57:13.019069662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-a2f5fd92b0,Uid:6a1871b69c07de619cc57f6eeb54d4a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6\"" May 13 23:57:13.023477 kubelet[2195]: E0513 23:57:13.023446 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.028093 containerd[1489]: time="2025-05-13T23:57:13.028037913Z" level=info msg="CreateContainer within sandbox \"d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:57:13.029376 containerd[1489]: time="2025-05-13T23:57:13.029349653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-a2f5fd92b0,Uid:bd3bd8382dc4c906db9bbb5a162d0413,Namespace:kube-system,Attempt:0,} returns sandbox id \"0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3\"" May 13 23:57:13.030964 containerd[1489]: time="2025-05-13T23:57:13.030939544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0,Uid:d17e3f568a7adfc0aad5edcf4d1b3cef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a\"" May 13 23:57:13.031800 kubelet[2195]: E0513 23:57:13.031774 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.040866 containerd[1489]: time="2025-05-13T23:57:13.040817342Z" level=info msg="Container 
458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:13.042747 kubelet[2195]: E0513 23:57:13.042698 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.043472 kubelet[2195]: I0513 23:57:13.043297 2195 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:13.044327 kubelet[2195]: E0513 23:57:13.043971 2195 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://24.199.96.208:6443/api/v1/nodes\": dial tcp 24.199.96.208:6443: connect: connection refused" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:13.045394 containerd[1489]: time="2025-05-13T23:57:13.045362198Z" level=info msg="CreateContainer within sandbox \"0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:57:13.045911 containerd[1489]: time="2025-05-13T23:57:13.045884885Z" level=info msg="CreateContainer within sandbox \"2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:57:13.052561 containerd[1489]: time="2025-05-13T23:57:13.052519397Z" level=info msg="CreateContainer within sandbox \"d4ed1accb9d7b13f9c81f1d5bbe64b80084d35a850a1dda1e69540a9f29124d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb\"" May 13 23:57:13.054074 containerd[1489]: time="2025-05-13T23:57:13.054047930Z" level=info msg="StartContainer for \"458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb\"" May 13 23:57:13.060625 containerd[1489]: time="2025-05-13T23:57:13.059083018Z" level=info msg="connecting to shim 458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb" address="unix:///run/containerd/s/8864409a5eabcdcc9888a46090a9b2f1e8596bc1a47653c48c591449e7b564ac" protocol=ttrpc version=3 May 13 23:57:13.061504 containerd[1489]: time="2025-05-13T23:57:13.061407879Z" level=info msg="Container a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:13.062903 containerd[1489]: time="2025-05-13T23:57:13.062562874Z" level=info msg="Container 1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:13.071445 containerd[1489]: time="2025-05-13T23:57:13.071319810Z" level=info msg="CreateContainer within sandbox \"2eb25ac843c02d9fb9c87a6cf999019eca4a44110e77e7c8b73914794a52299a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a\"" May 13 23:57:13.072315 containerd[1489]: time="2025-05-13T23:57:13.072281495Z" level=info msg="StartContainer for \"a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a\"" May 13 23:57:13.074207 containerd[1489]: time="2025-05-13T23:57:13.074166764Z" level=info msg="connecting to shim a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a" address="unix:///run/containerd/s/c04baacf60b54f21ebe8cc08ec45e2317a36ca818135b5ddebd7bc0e8d1a4bbd" protocol=ttrpc version=3 May 13 23:57:13.077473 containerd[1489]: time="2025-05-13T23:57:13.077436495Z" level=info msg="CreateContainer within sandbox 
\"0038a21a9511326df81b0297d397149562bb9e00d2e41a8120fa3cab1b4352b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be\"" May 13 23:57:13.080637 containerd[1489]: time="2025-05-13T23:57:13.080566285Z" level=info msg="StartContainer for \"1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be\"" May 13 23:57:13.081798 systemd[1]: Started cri-containerd-458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb.scope - libcontainer container 458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb. May 13 23:57:13.085962 containerd[1489]: time="2025-05-13T23:57:13.085810911Z" level=info msg="connecting to shim 1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be" address="unix:///run/containerd/s/c05cfe9aa2ecd15487a980df842364cff826ddcea71f5a93e36edc16b97c24d9" protocol=ttrpc version=3 May 13 23:57:13.112887 systemd[1]: Started cri-containerd-a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a.scope - libcontainer container a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a. May 13 23:57:13.118100 systemd[1]: Started cri-containerd-1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be.scope - libcontainer container 1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be. May 13 23:57:13.175084 kubelet[2195]: W0513 23:57:13.174859 2195 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.96.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.96.208:6443: connect: connection refused May 13 23:57:13.175084 kubelet[2195]: E0513 23:57:13.174948 2195 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.96.208:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.96.208:6443: connect: connection refused" logger="UnhandledError" May 13 23:57:13.177698 containerd[1489]: time="2025-05-13T23:57:13.177657635Z" level=info msg="StartContainer for \"458011010976f2b0e552bbf41f521c6625cb59c6994174a24264769a6363c1bb\" returns successfully" May 13 23:57:13.221209 containerd[1489]: time="2025-05-13T23:57:13.221164774Z" level=info msg="StartContainer for \"1d327aaccdbf0568b7bb5997db17d16a5b2cd6f2da6ec9a8f924d50d7cb341be\" returns successfully" May 13 23:57:13.240530 containerd[1489]: time="2025-05-13T23:57:13.240485307Z" level=info msg="StartContainer for \"a02499999d8d6f23ba106e1dc6e7dec36808797bd8cc8a618567b295d47eff9a\" returns successfully" May 13 23:57:13.294562 kubelet[2195]: E0513 23:57:13.294438 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.296654 kubelet[2195]: E0513 23:57:13.296593 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.300655 kubelet[2195]: E0513 23:57:13.299886 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:13.846523 kubelet[2195]: I0513 23:57:13.846470 2195 kubelet_node_status.go:72] 
"Attempting to register node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:14.302628 kubelet[2195]: E0513 23:57:14.301436 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:14.504641 kubelet[2195]: E0513 23:57:14.503509 2195 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:14.833064 kubelet[2195]: E0513 23:57:14.833009 2195 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-a2f5fd92b0\" not found" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:14.853791 kubelet[2195]: I0513 23:57:14.853478 2195 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:15.226830 kubelet[2195]: I0513 23:57:15.226473 2195 apiserver.go:52] "Watching apiserver" May 13 23:57:15.243640 kubelet[2195]: I0513 23:57:15.243531 2195 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:57:16.955042 systemd[1]: Reload requested from client PID 2464 ('systemctl') (unit session-5.scope)... May 13 23:57:16.955409 systemd[1]: Reloading... May 13 23:57:17.057657 zram_generator::config[2508]: No configuration found. May 13 23:57:17.179660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:57:17.317909 systemd[1]: Reloading finished in 362 ms. May 13 23:57:17.345720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:17.356624 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:57:17.356883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:17.356955 systemd[1]: kubelet.service: Consumed 1.080s CPU time, 110.6M memory peak. May 13 23:57:17.359054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:57:17.507750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:57:17.522239 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:57:17.589062 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:57:17.589062 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:57:17.589062 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 23:57:17.590304 kubelet[2559]: I0513 23:57:17.590242 2559 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:57:17.599141 kubelet[2559]: I0513 23:57:17.599096 2559 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 23:57:17.599141 kubelet[2559]: I0513 23:57:17.599127 2559 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:57:17.599388 kubelet[2559]: I0513 23:57:17.599370 2559 server.go:929] "Client rotation is on, will bootstrap in background" May 13 23:57:17.602184 kubelet[2559]: I0513 23:57:17.602158 2559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:57:17.610790 kubelet[2559]: I0513 23:57:17.610555 2559 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:57:17.614874 kubelet[2559]: I0513 23:57:17.614834 2559 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:57:17.618299 kubelet[2559]: I0513 23:57:17.618261 2559 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:57:17.618414 kubelet[2559]: I0513 23:57:17.618383 2559 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 23:57:17.618524 kubelet[2559]: I0513 23:57:17.618491 2559 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:57:17.618724 kubelet[2559]: I0513 23:57:17.618523 2559 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-a2f5fd92b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:57:17.618824 kubelet[2559]: I0513 23:57:17.618733 2559 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:57:17.618824 kubelet[2559]: I0513 23:57:17.618744 2559 container_manager_linux.go:300] "Creating device 
plugin manager" May 13 23:57:17.618824 kubelet[2559]: I0513 23:57:17.618776 2559 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:17.618900 kubelet[2559]: I0513 23:57:17.618891 2559 kubelet.go:408] "Attempting to sync node with API server" May 13 23:57:17.618929 kubelet[2559]: I0513 23:57:17.618902 2559 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:57:17.618953 kubelet[2559]: I0513 23:57:17.618931 2559 kubelet.go:314] "Adding apiserver pod source" May 13 23:57:17.618981 kubelet[2559]: I0513 23:57:17.618952 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:57:17.631049 kubelet[2559]: I0513 23:57:17.631024 2559 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:57:17.632625 kubelet[2559]: I0513 23:57:17.632148 2559 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:57:17.634206 kubelet[2559]: I0513 23:57:17.634149 2559 server.go:1269] "Started kubelet" May 13 23:57:17.637974 kubelet[2559]: I0513 23:57:17.636838 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:57:17.645164 kubelet[2559]: I0513 23:57:17.644374 2559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:57:17.646313 kubelet[2559]: I0513 23:57:17.646293 2559 server.go:460] "Adding debug handlers to kubelet server" May 13 23:57:17.647017 kubelet[2559]: I0513 23:57:17.646985 2559 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 23:57:17.650371 kubelet[2559]: I0513 23:57:17.647685 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:57:17.650977 kubelet[2559]: I0513 23:57:17.650943 2559 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:57:17.651058 kubelet[2559]: I0513 23:57:17.648679 2559 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 23:57:17.651144 kubelet[2559]: I0513 23:57:17.651130 2559 reconciler.go:26] "Reconciler: start to sync state" May 13 23:57:17.651185 kubelet[2559]: I0513 23:57:17.647992 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:57:17.652754 kubelet[2559]: E0513 23:57:17.652734 2559 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:57:17.657055 kubelet[2559]: I0513 23:57:17.657031 2559 factory.go:221] Registration of the containerd container factory successfully May 13 23:57:17.657055 kubelet[2559]: I0513 23:57:17.657053 2559 factory.go:221] Registration of the systemd container factory successfully May 13 23:57:17.657175 kubelet[2559]: I0513 23:57:17.657125 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:57:17.662745 kubelet[2559]: I0513 23:57:17.662717 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:57:17.664069 kubelet[2559]: I0513 23:57:17.664050 2559 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:57:17.665478 kubelet[2559]: I0513 23:57:17.665465 2559 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:57:17.666307 kubelet[2559]: I0513 23:57:17.666286 2559 kubelet.go:2321] "Starting kubelet main sync loop" May 13 23:57:17.666402 kubelet[2559]: E0513 23:57:17.666344 2559 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:57:17.712907 kubelet[2559]: I0513 23:57:17.712882 2559 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:57:17.713203 kubelet[2559]: I0513 23:57:17.713187 2559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:57:17.713266 kubelet[2559]: I0513 23:57:17.713260 2559 state_mem.go:36] "Initialized new in-memory state store" May 13 23:57:17.713461 kubelet[2559]: I0513 23:57:17.713448 2559 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:57:17.713524 kubelet[2559]: I0513 23:57:17.713505 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:57:17.713566 kubelet[2559]: I0513 23:57:17.713561 2559 policy_none.go:49] "None policy: Start" May 13 23:57:17.714234 kubelet[2559]: I0513 23:57:17.714224 2559 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:57:17.714325 kubelet[2559]: I0513 23:57:17.714314 2559 state_mem.go:35] "Initializing new in-memory state store" May 13 23:57:17.715551 kubelet[2559]: I0513 23:57:17.715520 2559 state_mem.go:75] "Updated machine memory state" May 13 23:57:17.722246 kubelet[2559]: I0513 23:57:17.722226 2559 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:57:17.722649 kubelet[2559]: I0513 23:57:17.722635 2559 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:57:17.723266 kubelet[2559]: I0513 23:57:17.723227 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:57:17.723727 kubelet[2559]: I0513 23:57:17.723704 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:57:17.776423 kubelet[2559]: W0513 23:57:17.776108 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:57:17.777634 kubelet[2559]: W0513 23:57:17.777525 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:57:17.778504 kubelet[2559]: W0513 23:57:17.778426 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:57:17.829570 kubelet[2559]: I0513 23:57:17.829514 2559 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.838525 kubelet[2559]: I0513 23:57:17.838491 2559 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.838772 kubelet[2559]: I0513 23:57:17.838586 2559 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.852961 kubelet[2559]: I0513 23:57:17.852185 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.852961 kubelet[2559]: I0513 23:57:17.852221 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.852961 kubelet[2559]: I0513 23:57:17.852251 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a1871b69c07de619cc57f6eeb54d4a9-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"6a1871b69c07de619cc57f6eeb54d4a9\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.852961 kubelet[2559]: I0513 23:57:17.852270 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.852961 kubelet[2559]: I0513 23:57:17.852291 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.853194 kubelet[2559]: I0513 23:57:17.852309 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.853194 kubelet[2559]: I0513 23:57:17.852327 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d17e3f568a7adfc0aad5edcf4d1b3cef-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"d17e3f568a7adfc0aad5edcf4d1b3cef\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.853194 kubelet[2559]: I0513 23:57:17.852345 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:17.853194 kubelet[2559]: I0513 23:57:17.852360 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd3bd8382dc4c906db9bbb5a162d0413-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" (UID: \"bd3bd8382dc4c906db9bbb5a162d0413\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:18.078769 kubelet[2559]: E0513 23:57:18.078074 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.078769 kubelet[2559]: E0513 23:57:18.077587 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.080212 kubelet[2559]: E0513 23:57:18.079497 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.621144 kubelet[2559]: I0513 23:57:18.620809 2559 apiserver.go:52] "Watching apiserver" May 13 23:57:18.651683 kubelet[2559]: I0513 23:57:18.651578 2559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 23:57:18.700677 kubelet[2559]: E0513 23:57:18.697485 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.707107 kubelet[2559]: W0513 23:57:18.707074 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:57:18.707356 kubelet[2559]: W0513 23:57:18.707298 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:57:18.707464 kubelet[2559]: E0513 23:57:18.707449 2559 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-a2f5fd92b0\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:18.707902 kubelet[2559]: E0513 23:57:18.707712 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.708675 kubelet[2559]: E0513 23:57:18.708655 2559 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" May 13 23:57:18.710529 kubelet[2559]: E0513 23:57:18.710396 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:18.743670 kubelet[2559]: I0513 23:57:18.742788 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-a2f5fd92b0" podStartSLOduration=1.742768115 podStartE2EDuration="1.742768115s" podCreationTimestamp="2025-05-13 23:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:18.738284309 +0000 UTC m=+1.201071359" watchObservedRunningTime="2025-05-13 23:57:18.742768115 +0000 UTC m=+1.205555157" May 13 23:57:18.751993 sudo[1646]: pam_unix(sudo:session): session closed for user root May 13 
23:57:18.752953 kubelet[2559]: I0513 23:57:18.752674 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-a2f5fd92b0" podStartSLOduration=1.750559523 podStartE2EDuration="1.750559523s" podCreationTimestamp="2025-05-13 23:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:18.750277834 +0000 UTC m=+1.213064880" watchObservedRunningTime="2025-05-13 23:57:18.750559523 +0000 UTC m=+1.213346573" May 13 23:57:18.757650 sshd[1645]: Connection closed by 147.75.109.163 port 34692 May 13 23:57:18.760752 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 13 23:57:18.763560 kubelet[2559]: I0513 23:57:18.763426 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-a2f5fd92b0" podStartSLOduration=1.763407125 podStartE2EDuration="1.763407125s" podCreationTimestamp="2025-05-13 23:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:18.763322114 +0000 UTC m=+1.226109166" watchObservedRunningTime="2025-05-13 23:57:18.763407125 +0000 UTC m=+1.226194166" May 13 23:57:18.764657 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. May 13 23:57:18.765400 systemd[1]: sshd@5-24.199.96.208:22-147.75.109.163:34692.service: Deactivated successfully. May 13 23:57:18.767411 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:57:18.767589 systemd[1]: session-5.scope: Consumed 3.675s CPU time, 168M memory peak. May 13 23:57:18.772134 systemd-logind[1465]: Removed session 5. May 13 23:57:19.699657 kubelet[2559]: E0513 23:57:19.698772 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:19.699657 kubelet[2559]: E0513 23:57:19.698991 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:20.700904 kubelet[2559]: E0513 23:57:20.700868 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:21.702839 kubelet[2559]: E0513 23:57:21.702804 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:22.794873 kubelet[2559]: E0513 23:57:22.794816 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:23.485396 kubelet[2559]: I0513 23:57:23.485339 2559 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:57:23.485828 containerd[1489]: time="2025-05-13T23:57:23.485778874Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 23:57:23.486349 kubelet[2559]: I0513 23:57:23.486016 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:57:24.392777 kubelet[2559]: I0513 23:57:24.392139 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14f123e9-0d62-49dc-a645-ab43d1b59feb-xtables-lock\") pod \"kube-proxy-q8qbs\" (UID: \"14f123e9-0d62-49dc-a645-ab43d1b59feb\") " pod="kube-system/kube-proxy-q8qbs" May 13 23:57:24.392777 kubelet[2559]: I0513 23:57:24.392174 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/c16fafdc-d797-4ee9-a09c-977933fde684-flannel-cfg\") pod \"kube-flannel-ds-89rx9\" (UID: \"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.392777 kubelet[2559]: I0513 23:57:24.392191 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c16fafdc-d797-4ee9-a09c-977933fde684-xtables-lock\") pod \"kube-flannel-ds-89rx9\" (UID: \"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.392777 kubelet[2559]: I0513 23:57:24.392208 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfcl\" (UniqueName: \"kubernetes.io/projected/c16fafdc-d797-4ee9-a09c-977933fde684-kube-api-access-qdfcl\") pod \"kube-flannel-ds-89rx9\" (UID: \"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.392777 kubelet[2559]: I0513 23:57:24.392245 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-proxy\") pod \"kube-proxy-q8qbs\" (UID: \"14f123e9-0d62-49dc-a645-ab43d1b59feb\") " pod="kube-system/kube-proxy-q8qbs" May 13 23:57:24.393305 kubelet[2559]: I0513 23:57:24.392263 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnsm\" (UniqueName: \"kubernetes.io/projected/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-api-access-4gnsm\") pod \"kube-proxy-q8qbs\" (UID: \"14f123e9-0d62-49dc-a645-ab43d1b59feb\") " pod="kube-system/kube-proxy-q8qbs" May 13 23:57:24.393305 kubelet[2559]: I0513 23:57:24.392279 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c16fafdc-d797-4ee9-a09c-977933fde684-run\") pod \"kube-flannel-ds-89rx9\" (UID: \"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.393305 kubelet[2559]: I0513 23:57:24.392292 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/c16fafdc-d797-4ee9-a09c-977933fde684-cni\") pod \"kube-flannel-ds-89rx9\" (UID: \"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.393305 kubelet[2559]: I0513 23:57:24.392310 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/c16fafdc-d797-4ee9-a09c-977933fde684-cni-plugin\") pod \"kube-flannel-ds-89rx9\" (UID: 
\"c16fafdc-d797-4ee9-a09c-977933fde684\") " pod="kube-flannel/kube-flannel-ds-89rx9" May 13 23:57:24.393305 kubelet[2559]: I0513 23:57:24.392330 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14f123e9-0d62-49dc-a645-ab43d1b59feb-lib-modules\") pod \"kube-proxy-q8qbs\" (UID: \"14f123e9-0d62-49dc-a645-ab43d1b59feb\") " pod="kube-system/kube-proxy-q8qbs" May 13 23:57:24.394633 kubelet[2559]: W0513 23:57:24.394251 2559 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4284.0.0-n-a2f5fd92b0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object May 13 23:57:24.394633 kubelet[2559]: E0513 23:57:24.394502 2559 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4284.0.0-n-a2f5fd92b0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object" logger="UnhandledError" May 13 23:57:24.394633 kubelet[2559]: W0513 23:57:24.394431 2559 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284.0.0-n-a2f5fd92b0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object May 13 23:57:24.394633 kubelet[2559]: E0513 23:57:24.394545 2559 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284.0.0-n-a2f5fd92b0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object" logger="UnhandledError" May 13 23:57:24.398863 kubelet[2559]: W0513 23:57:24.398792 2559 reflector.go:561] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284.0.0-n-a2f5fd92b0" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object May 13 23:57:24.398863 kubelet[2559]: E0513 23:57:24.398830 2559 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284.0.0-n-a2f5fd92b0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object" logger="UnhandledError" May 13 23:57:24.398863 kubelet[2559]: W0513 23:57:24.398836 2559 reflector.go:561] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-4284.0.0-n-a2f5fd92b0" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object May 13 
23:57:24.399052 kubelet[2559]: E0513 23:57:24.398870 2559 reflector.go:158] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:ci-4284.0.0-n-a2f5fd92b0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node 'ci-4284.0.0-n-a2f5fd92b0' and this object" logger="UnhandledError" May 13 23:57:24.403991 systemd[1]: Created slice kubepods-burstable-podc16fafdc_d797_4ee9_a09c_977933fde684.slice - libcontainer container kubepods-burstable-podc16fafdc_d797_4ee9_a09c_977933fde684.slice. May 13 23:57:24.421190 systemd[1]: Created slice kubepods-besteffort-pod14f123e9_0d62_49dc_a645_ab43d1b59feb.slice - libcontainer container kubepods-besteffort-pod14f123e9_0d62_49dc_a645_ab43d1b59feb.slice. May 13 23:57:25.493744 kubelet[2559]: E0513 23:57:25.493589 2559 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.493744 kubelet[2559]: E0513 23:57:25.493589 2559 configmap.go:193] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.493744 kubelet[2559]: E0513 23:57:25.493742 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-proxy podName:14f123e9-0d62-49dc-a645-ab43d1b59feb nodeName:}" failed. No retries permitted until 2025-05-13 23:57:25.993717154 +0000 UTC m=+8.456504200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-proxy") pod "kube-proxy-q8qbs" (UID: "14f123e9-0d62-49dc-a645-ab43d1b59feb") : failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.493744 kubelet[2559]: E0513 23:57:25.493759 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c16fafdc-d797-4ee9-a09c-977933fde684-flannel-cfg podName:c16fafdc-d797-4ee9-a09c-977933fde684 nodeName:}" failed. No retries permitted until 2025-05-13 23:57:25.993752214 +0000 UTC m=+8.456539238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/c16fafdc-d797-4ee9-a09c-977933fde684-flannel-cfg") pod "kube-flannel-ds-89rx9" (UID: "c16fafdc-d797-4ee9-a09c-977933fde684") : failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.500510 kubelet[2559]: E0513 23:57:25.500338 2559 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.500510 kubelet[2559]: E0513 23:57:25.500371 2559 projected.go:194] Error preparing data for projected volume kube-api-access-4gnsm for pod kube-system/kube-proxy-q8qbs: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.500510 kubelet[2559]: E0513 23:57:25.500446 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-api-access-4gnsm podName:14f123e9-0d62-49dc-a645-ab43d1b59feb nodeName:}" failed. No retries permitted until 2025-05-13 23:57:26.000427052 +0000 UTC m=+8.463214088 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4gnsm" (UniqueName: "kubernetes.io/projected/14f123e9-0d62-49dc-a645-ab43d1b59feb-kube-api-access-4gnsm") pod "kube-proxy-q8qbs" (UID: "14f123e9-0d62-49dc-a645-ab43d1b59feb") : failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.501127 kubelet[2559]: E0513 23:57:25.501001 2559 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.501127 kubelet[2559]: E0513 23:57:25.501033 2559 projected.go:194] Error preparing data for projected volume kube-api-access-qdfcl for pod kube-flannel/kube-flannel-ds-89rx9: failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.501127 kubelet[2559]: E0513 23:57:25.501088 2559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c16fafdc-d797-4ee9-a09c-977933fde684-kube-api-access-qdfcl podName:c16fafdc-d797-4ee9-a09c-977933fde684 nodeName:}" failed. No retries permitted until 2025-05-13 23:57:26.001068126 +0000 UTC m=+8.463855174 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qdfcl" (UniqueName: "kubernetes.io/projected/c16fafdc-d797-4ee9-a09c-977933fde684-kube-api-access-qdfcl") pod "kube-flannel-ds-89rx9" (UID: "c16fafdc-d797-4ee9-a09c-977933fde684") : failed to sync configmap cache: timed out waiting for the condition May 13 23:57:25.572160 kubelet[2559]: E0513 23:57:25.571929 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:25.710670 kubelet[2559]: E0513 23:57:25.710593 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.215557 kubelet[2559]: E0513 23:57:26.215145 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.216066 containerd[1489]: time="2025-05-13T23:57:26.216023268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-89rx9,Uid:c16fafdc-d797-4ee9-a09c-977933fde684,Namespace:kube-flannel,Attempt:0,}" May 13 23:57:26.231621 kubelet[2559]: E0513 23:57:26.231547 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.232986 containerd[1489]: time="2025-05-13T23:57:26.232688484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8qbs,Uid:14f123e9-0d62-49dc-a645-ab43d1b59feb,Namespace:kube-system,Attempt:0,}" May 13 23:57:26.245756 containerd[1489]: time="2025-05-13T23:57:26.245621232Z" level=info msg="connecting to shim ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426" address="unix:///run/containerd/s/1cb1bb37dc04a24c79856eff87884c91b924b1634337bf15c3d091d175704bc6" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:26.272145 containerd[1489]: time="2025-05-13T23:57:26.271056702Z" level=info msg="connecting to shim 6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600" address="unix:///run/containerd/s/e0c678fea64b74ccd9df0fd83565060b631162bbfdfa82149b414213a6e73902" namespace=k8s.io 
protocol=ttrpc version=3 May 13 23:57:26.286957 systemd[1]: Started cri-containerd-ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426.scope - libcontainer container ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426. May 13 23:57:26.311643 systemd[1]: Started cri-containerd-6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600.scope - libcontainer container 6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600. May 13 23:57:26.343659 containerd[1489]: time="2025-05-13T23:57:26.343415317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8qbs,Uid:14f123e9-0d62-49dc-a645-ab43d1b59feb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600\"" May 13 23:57:26.346521 kubelet[2559]: E0513 23:57:26.345905 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.354836 containerd[1489]: time="2025-05-13T23:57:26.354799857Z" level=info msg="CreateContainer within sandbox \"6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:57:26.361977 containerd[1489]: time="2025-05-13T23:57:26.361928192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-89rx9,Uid:c16fafdc-d797-4ee9-a09c-977933fde684,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\"" May 13 23:57:26.363231 kubelet[2559]: E0513 23:57:26.362980 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.364476 containerd[1489]: time="2025-05-13T23:57:26.364446846Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 23:57:26.366780 systemd-resolved[1336]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. May 13 23:57:26.369626 containerd[1489]: time="2025-05-13T23:57:26.369581923Z" level=info msg="Container 64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:26.378040 containerd[1489]: time="2025-05-13T23:57:26.377889446Z" level=info msg="CreateContainer within sandbox \"6aa2aff92311f11c6da01f733e34f1457ebdda4aa56fab558a19dfafed814600\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f\"" May 13 23:57:26.379157 containerd[1489]: time="2025-05-13T23:57:26.378784886Z" level=info msg="StartContainer for \"64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f\"" May 13 23:57:26.380499 containerd[1489]: time="2025-05-13T23:57:26.380466916Z" level=info msg="connecting to shim 64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f" address="unix:///run/containerd/s/e0c678fea64b74ccd9df0fd83565060b631162bbfdfa82149b414213a6e73902" protocol=ttrpc version=3 May 13 23:57:26.402771 systemd[1]: Started cri-containerd-64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f.scope - libcontainer container 64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f. 
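
Note on the "forbidden" reflector warnings and the MountVolume.SetUp failures above: the kubelet authenticates as system:node:ci-4284.0.0-n-a2f5fd92b0, and the node authorizer only lets it read ConfigMaps that are referenced by pods already bound to that node. For a brief window after kube-proxy and kube-flannel are scheduled, that binding is not yet visible to the authorizer, so the first list/watch attempts are rejected ("no relationship found between node ... and this object"), the configmap cache cannot sync, and the volume SetUp operations fail with "No retries permitted until ... (durationBeforeRetry 500ms)". The kubelet simply retries with a growing backoff, and the sandboxes start at 23:57:26. The following is a generic sketch of that retry-with-delay pattern, not kubelet source; the kubelet's real backoff grows on repeated failures, while this uses a fixed delay for brevity.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithDelay keeps calling op until it succeeds or attempts run out,
// sleeping a fixed delay between tries (the log shows a 500ms first step).
func retryWithDelay(op func() error, delay time.Duration, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithDelay(func() error {
		calls++
		if calls < 3 {
			// Same failure mode as the log: the configmap cache has not synced yet.
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 5)
	fmt.Println("result:", err, "after", calls, "calls")
}
```
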
May 13 23:57:26.447312 containerd[1489]: time="2025-05-13T23:57:26.447275874Z" level=info msg="StartContainer for \"64cb4cfb4c232fb715863b14a9de28e1a3ee45f4afeab15b71d09632f9e3cf4f\" returns successfully" May 13 23:57:26.719089 kubelet[2559]: E0513 23:57:26.719049 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:26.732195 kubelet[2559]: I0513 23:57:26.732136 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q8qbs" podStartSLOduration=2.73210313 podStartE2EDuration="2.73210313s" podCreationTimestamp="2025-05-13 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:26.731947714 +0000 UTC m=+9.194734764" watchObservedRunningTime="2025-05-13 23:57:26.73210313 +0000 UTC m=+9.194890170" May 13 23:57:28.048241 systemd-resolved[1336]: Clock change detected. Flushing caches. May 13 23:57:28.048513 systemd-timesyncd[1358]: Contacted time server 64.79.100.196:123 (2.flatcar.pool.ntp.org). May 13 23:57:28.048575 systemd-timesyncd[1358]: Initial clock synchronization to Tue 2025-05-13 23:57:28.048044 UTC. May 13 23:57:28.891124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1966448990.mount: Deactivated successfully. May 13 23:57:28.930797 containerd[1489]: time="2025-05-13T23:57:28.930737239Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.931747 containerd[1489]: time="2025-05-13T23:57:28.931587272Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" May 13 23:57:28.932462 containerd[1489]: time="2025-05-13T23:57:28.932200101Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.933763 containerd[1489]: time="2025-05-13T23:57:28.933736066Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:28.934835 containerd[1489]: time="2025-05-13T23:57:28.934798487Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.106915871s" May 13 23:57:28.934835 containerd[1489]: time="2025-05-13T23:57:28.934834263Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 13 23:57:28.938053 containerd[1489]: time="2025-05-13T23:57:28.937651409Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 23:57:28.942903 containerd[1489]: time="2025-05-13T23:57:28.942861519Z" level=info msg="Container 
e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:28.955789 containerd[1489]: time="2025-05-13T23:57:28.955745876Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\"" May 13 23:57:28.958046 containerd[1489]: time="2025-05-13T23:57:28.957993558Z" level=info msg="StartContainer for \"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\"" May 13 23:57:28.959406 containerd[1489]: time="2025-05-13T23:57:28.959331106Z" level=info msg="connecting to shim e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2" address="unix:///run/containerd/s/1cb1bb37dc04a24c79856eff87884c91b924b1634337bf15c3d091d175704bc6" protocol=ttrpc version=3 May 13 23:57:28.984208 systemd[1]: Started cri-containerd-e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2.scope - libcontainer container e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2. May 13 23:57:29.014161 systemd[1]: cri-containerd-e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2.scope: Deactivated successfully. May 13 23:57:29.018033 containerd[1489]: time="2025-05-13T23:57:29.017965155Z" level=info msg="received exit event container_id:\"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\" id:\"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\" pid:2892 exited_at:{seconds:1747180649 nanos:17679547}" May 13 23:57:29.018779 containerd[1489]: time="2025-05-13T23:57:29.018664955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\" id:\"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\" pid:2892 exited_at:{seconds:1747180649 nanos:17679547}" May 13 23:57:29.018779 containerd[1489]: time="2025-05-13T23:57:29.018758634Z" level=info msg="StartContainer for \"e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2\" returns successfully" May 13 23:57:29.040944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5eef7f635213c4c6cc526699b298cd1dd97f373f0f6d272442657314549fce2-rootfs.mount: Deactivated successfully. May 13 23:57:29.188640 kubelet[2559]: E0513 23:57:29.188426 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:29.189341 containerd[1489]: time="2025-05-13T23:57:29.189301716Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 23:57:30.309452 kubelet[2559]: E0513 23:57:30.309325 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:31.192876 kubelet[2559]: E0513 23:57:31.192723 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:31.235710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246412208.mount: Deactivated successfully. 
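
Note on the recurring dns.go:153 "Nameserver limits exceeded" warnings: the kubelet builds each pod's resolv.conf from the node's resolver configuration and enforces the classic limit of three nameserver entries. This droplet's resolv.conf evidently lists more than three entries (and includes a duplicate, since the applied line is "67.207.67.3 67.207.67.2 67.207.67.3"), so the extras are dropped and the warning repeats on every pod sync. It is cosmetic unless one of the dropped servers was actually needed. A minimal illustration of that trimming, not kubelet code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit; kubelet warns when a pod would get more

// applyNameserverLimit keeps at most the first three "nameserver" entries,
// in order, without deduplication (matching the applied line in the log).
func applyNameserverLimit(resolvConf string) []string {
	var kept []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" && len(kept) < maxNameservers {
			kept = append(kept, fields[1])
		}
	}
	return kept
}

func main() {
	// First three entries taken from the applied line in the log; the fourth is hypothetical.
	conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 8.8.8.8\n"
	fmt.Println(applyNameserverLimit(conf)) // [67.207.67.3 67.207.67.2 67.207.67.3]
}
```
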
May 13 23:57:31.960597 containerd[1489]: time="2025-05-13T23:57:31.960426195Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.963044 containerd[1489]: time="2025-05-13T23:57:31.962731715Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 13 23:57:31.964849 containerd[1489]: time="2025-05-13T23:57:31.964539497Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.967843 containerd[1489]: time="2025-05-13T23:57:31.967800764Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:57:31.969962 containerd[1489]: time="2025-05-13T23:57:31.969928855Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.780590083s" May 13 23:57:31.970103 containerd[1489]: time="2025-05-13T23:57:31.970084644Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 13 23:57:31.974223 containerd[1489]: time="2025-05-13T23:57:31.974122747Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:57:31.983325 containerd[1489]: time="2025-05-13T23:57:31.983067183Z" level=info msg="Container dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:31.987753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657977066.mount: Deactivated successfully. May 13 23:57:31.993606 containerd[1489]: time="2025-05-13T23:57:31.993548894Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\"" May 13 23:57:31.995189 containerd[1489]: time="2025-05-13T23:57:31.995142986Z" level=info msg="StartContainer for \"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\"" May 13 23:57:31.996991 containerd[1489]: time="2025-05-13T23:57:31.996947275Z" level=info msg="connecting to shim dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df" address="unix:///run/containerd/s/1cb1bb37dc04a24c79856eff87884c91b924b1634337bf15c3d091d175704bc6" protocol=ttrpc version=3 May 13 23:57:32.019738 systemd[1]: Started cri-containerd-dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df.scope - libcontainer container dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df. 
May 13 23:57:32.052653 containerd[1489]: time="2025-05-13T23:57:32.052608054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\" id:\"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\" pid:2963 exited_at:{seconds:1747180652 nanos:52206473}" May 13 23:57:32.052832 systemd[1]: cri-containerd-dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df.scope: Deactivated successfully. May 13 23:57:32.055094 containerd[1489]: time="2025-05-13T23:57:32.054309128Z" level=info msg="received exit event container_id:\"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\" id:\"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\" pid:2963 exited_at:{seconds:1747180652 nanos:52206473}" May 13 23:57:32.056392 containerd[1489]: time="2025-05-13T23:57:32.056356627Z" level=info msg="StartContainer for \"dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df\" returns successfully" May 13 23:57:32.067215 kubelet[2559]: I0513 23:57:32.067179 2559 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 23:57:32.101881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8a5fce272985410d84dbfaf6832b19a8e19ab476c9313daafc2ce9a039d0df-rootfs.mount: Deactivated successfully. May 13 23:57:32.110676 kubelet[2559]: I0513 23:57:32.109125 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfa071a2-cae0-4c56-9efc-38271ff68c09-config-volume\") pod \"coredns-6f6b679f8f-kpnc5\" (UID: \"dfa071a2-cae0-4c56-9efc-38271ff68c09\") " pod="kube-system/coredns-6f6b679f8f-kpnc5" May 13 23:57:32.111151 kubelet[2559]: I0513 23:57:32.111093 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrcl\" (UniqueName: \"kubernetes.io/projected/dfa071a2-cae0-4c56-9efc-38271ff68c09-kube-api-access-pmrcl\") pod \"coredns-6f6b679f8f-kpnc5\" (UID: \"dfa071a2-cae0-4c56-9efc-38271ff68c09\") " pod="kube-system/coredns-6f6b679f8f-kpnc5" May 13 23:57:32.126453 systemd[1]: Created slice kubepods-burstable-poddfa071a2_cae0_4c56_9efc_38271ff68c09.slice - libcontainer container kubepods-burstable-poddfa071a2_cae0_4c56_9efc_38271ff68c09.slice. May 13 23:57:32.137065 systemd[1]: Created slice kubepods-burstable-podd063e5cd_61ed_45ac_b29b_1e7397ca7f11.slice - libcontainer container kubepods-burstable-podd063e5cd_61ed_45ac_b29b_1e7397ca7f11.slice. 
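
Note on the two short-lived containers above (install-cni-plugin, exiting at 23:57:29, and install-cni, exiting at 23:57:32): in the stock kube-flannel DaemonSet these are install steps that run before the main flanneld container. The first copies the flannel CNI binary (the flannel-cni-plugin:v1.1.2 image pulled earlier) into the host's CNI bin directory via the cni-plugin host-path volume mounted at 23:57:24; the second copies the CNI network config carried by the flannel-cfg ConfigMap into the host's CNI conf directory via the cni host-path volume. Both exit as soon as the copy finishes, which is why containerd records TaskExit immediately after StartContainer. The file the install-cni step typically writes (named 10-flannel.conflist in the upstream manifest; contents shown here are the upstream default, not taken from this log) looks like the following, and is consistent with the cbr0/hairpinMode/isDefaultGateway delegate config logged later at 23:57:45:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```
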
May 13 23:57:32.196828 kubelet[2559]: E0513 23:57:32.196754 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:32.199652 containerd[1489]: time="2025-05-13T23:57:32.199614908Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 23:57:32.217212 kubelet[2559]: I0513 23:57:32.212231 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttcbb\" (UniqueName: \"kubernetes.io/projected/d063e5cd-61ed-45ac-b29b-1e7397ca7f11-kube-api-access-ttcbb\") pod \"coredns-6f6b679f8f-v559j\" (UID: \"d063e5cd-61ed-45ac-b29b-1e7397ca7f11\") " pod="kube-system/coredns-6f6b679f8f-v559j" May 13 23:57:32.217212 kubelet[2559]: I0513 23:57:32.212285 2559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d063e5cd-61ed-45ac-b29b-1e7397ca7f11-config-volume\") pod \"coredns-6f6b679f8f-v559j\" (UID: \"d063e5cd-61ed-45ac-b29b-1e7397ca7f11\") " pod="kube-system/coredns-6f6b679f8f-v559j" May 13 23:57:32.217377 containerd[1489]: time="2025-05-13T23:57:32.216096995Z" level=info msg="Container 7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:32.241486 containerd[1489]: time="2025-05-13T23:57:32.241434705Z" level=info msg="CreateContainer within sandbox \"ae9de7d1a5b652c01afed758fb9d3b3dd591a5951618dfd8735e6aa963c69426\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6\"" May 13 23:57:32.242301 containerd[1489]: time="2025-05-13T23:57:32.242150387Z" level=info msg="StartContainer for \"7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6\"" May 13 23:57:32.244121 containerd[1489]: time="2025-05-13T23:57:32.243256179Z" level=info msg="connecting to shim 7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6" address="unix:///run/containerd/s/1cb1bb37dc04a24c79856eff87884c91b924b1634337bf15c3d091d175704bc6" protocol=ttrpc version=3 May 13 23:57:32.268205 systemd[1]: Started cri-containerd-7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6.scope - libcontainer container 7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6. 
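
Note on the kube-flannel container started above: this is flanneld itself. In the stock manifest it runs with the Kubernetes subnet manager, reads the cluster network config (net-conf.json in the flannel-cfg ConfigMap), takes this node's pod subnet (192.168.0.0/24, matching the Pod CIDR logged at 23:57:23), writes /run/flannel/subnet.env, and brings up its overlay device. The VXLAN backend is inferred here from the flannel.1 interface that gains carrier at 23:57:33 and from the 1450 MTU used for pod networking (1500 minus VXLAN overhead); it is not stated directly in the log. A net-conf.json consistent with those observations and with the 192.168.0.0/17 route installed in the delegate config at 23:57:45 might look like this (illustrative, assumed values):

```json
{
  "Network": "192.168.0.0/17",
  "Backend": {
    "Type": "vxlan"
  }
}
```
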
May 13 23:57:32.300683 containerd[1489]: time="2025-05-13T23:57:32.300512556Z" level=info msg="StartContainer for \"7540a3937aa5993e50f839da14a5b1869f776f567c2da5fd394c243b617283e6\" returns successfully" May 13 23:57:32.430738 kubelet[2559]: E0513 23:57:32.430694 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:32.433059 containerd[1489]: time="2025-05-13T23:57:32.432727195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kpnc5,Uid:dfa071a2-cae0-4c56-9efc-38271ff68c09,Namespace:kube-system,Attempt:0,}" May 13 23:57:32.445501 kubelet[2559]: E0513 23:57:32.443726 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:32.446553 containerd[1489]: time="2025-05-13T23:57:32.446423247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v559j,Uid:d063e5cd-61ed-45ac-b29b-1e7397ca7f11,Namespace:kube-system,Attempt:0,}" May 13 23:57:32.459460 containerd[1489]: time="2025-05-13T23:57:32.459391732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kpnc5,Uid:dfa071a2-cae0-4c56-9efc-38271ff68c09,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e579b90ff7b2ea56bfebb14463853445ab3d1577c6713d1cdf0de84e15d6781f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:57:32.461721 kubelet[2559]: E0513 23:57:32.460970 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e579b90ff7b2ea56bfebb14463853445ab3d1577c6713d1cdf0de84e15d6781f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:57:32.461721 kubelet[2559]: E0513 23:57:32.461130 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e579b90ff7b2ea56bfebb14463853445ab3d1577c6713d1cdf0de84e15d6781f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kpnc5" May 13 23:57:32.461721 kubelet[2559]: E0513 23:57:32.461157 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e579b90ff7b2ea56bfebb14463853445ab3d1577c6713d1cdf0de84e15d6781f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-kpnc5" May 13 23:57:32.461721 kubelet[2559]: E0513 23:57:32.461236 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kpnc5_kube-system(dfa071a2-cae0-4c56-9efc-38271ff68c09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kpnc5_kube-system(dfa071a2-cae0-4c56-9efc-38271ff68c09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e579b90ff7b2ea56bfebb14463853445ab3d1577c6713d1cdf0de84e15d6781f\\\": plugin type=\\\"flannel\\\" failed (add): 
loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-kpnc5" podUID="dfa071a2-cae0-4c56-9efc-38271ff68c09" May 13 23:57:32.468660 containerd[1489]: time="2025-05-13T23:57:32.468553624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v559j,Uid:d063e5cd-61ed-45ac-b29b-1e7397ca7f11,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d36d1dce6bc00d65a100899408689c5c101c1937c22f720b76389c482100017\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:57:32.469125 kubelet[2559]: E0513 23:57:32.469016 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d36d1dce6bc00d65a100899408689c5c101c1937c22f720b76389c482100017\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 13 23:57:32.469125 kubelet[2559]: E0513 23:57:32.469091 2559 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d36d1dce6bc00d65a100899408689c5c101c1937c22f720b76389c482100017\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-v559j" May 13 23:57:32.470673 kubelet[2559]: E0513 23:57:32.469253 2559 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d36d1dce6bc00d65a100899408689c5c101c1937c22f720b76389c482100017\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-v559j" May 13 23:57:32.470808 kubelet[2559]: E0513 23:57:32.469308 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v559j_kube-system(d063e5cd-61ed-45ac-b29b-1e7397ca7f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v559j_kube-system(d063e5cd-61ed-45ac-b29b-1e7397ca7f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d36d1dce6bc00d65a100899408689c5c101c1937c22f720b76389c482100017\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-v559j" podUID="d063e5cd-61ed-45ac-b29b-1e7397ca7f11" May 13 23:57:33.200821 kubelet[2559]: E0513 23:57:33.200681 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:33.213448 kubelet[2559]: I0513 23:57:33.212932 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-89rx9" podStartSLOduration=4.069609074 podStartE2EDuration="9.212887749s" podCreationTimestamp="2025-05-13 23:57:24 +0000 UTC" firstStartedPulling="2025-05-13 23:57:26.364139058 +0000 UTC m=+8.826926091" lastFinishedPulling="2025-05-13 23:57:31.97082213 +0000 UTC m=+13.970204766" observedRunningTime="2025-05-13 23:57:33.212663625 +0000 UTC m=+15.212046258" watchObservedRunningTime="2025-05-13 23:57:33.212887749 +0000 UTC 
m=+15.212270382" May 13 23:57:33.263627 kubelet[2559]: E0513 23:57:33.263586 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:33.366423 systemd-networkd[1389]: flannel.1: Link UP May 13 23:57:33.366431 systemd-networkd[1389]: flannel.1: Gained carrier May 13 23:57:34.202146 kubelet[2559]: E0513 23:57:34.202109 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:35.072046 update_engine[1466]: I20250513 23:57:35.071859 1466 update_attempter.cc:509] Updating boot flags... May 13 23:57:35.101127 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3152) May 13 23:57:35.168377 systemd-networkd[1389]: flannel.1: Gained IPv6LL May 13 23:57:35.174504 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3153) May 13 23:57:35.246055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3153) May 13 23:57:45.130328 kubelet[2559]: E0513 23:57:45.130285 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:45.131869 containerd[1489]: time="2025-05-13T23:57:45.131692073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v559j,Uid:d063e5cd-61ed-45ac-b29b-1e7397ca7f11,Namespace:kube-system,Attempt:0,}" May 13 23:57:45.147378 systemd-networkd[1389]: cni0: Link UP May 13 23:57:45.147388 systemd-networkd[1389]: cni0: Gained carrier May 13 23:57:45.152773 systemd-networkd[1389]: cni0: Lost carrier May 13 23:57:45.156836 systemd-networkd[1389]: veth4e3c528a: Link UP May 13 23:57:45.158066 kernel: cni0: port 1(veth4e3c528a) entered blocking state May 13 23:57:45.158131 kernel: cni0: port 1(veth4e3c528a) entered disabled state May 13 23:57:45.159137 kernel: veth4e3c528a: entered allmulticast mode May 13 23:57:45.160168 kernel: veth4e3c528a: entered promiscuous mode May 13 23:57:45.161141 kernel: cni0: port 1(veth4e3c528a) entered blocking state May 13 23:57:45.161182 kernel: cni0: port 1(veth4e3c528a) entered forwarding state May 13 23:57:45.162703 kernel: cni0: port 1(veth4e3c528a) entered disabled state May 13 23:57:45.171763 kernel: cni0: port 1(veth4e3c528a) entered blocking state May 13 23:57:45.171862 kernel: cni0: port 1(veth4e3c528a) entered forwarding state May 13 23:57:45.171344 systemd-networkd[1389]: veth4e3c528a: Gained carrier May 13 23:57:45.172873 systemd-networkd[1389]: cni0: Gained carrier May 13 23:57:45.181268 containerd[1489]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} May 13 23:57:45.181268 containerd[1489]: delegateAdd: netconf sent to delegate plugin: May 13 23:57:45.203361 containerd[1489]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:57:45.203314326Z" level=info msg="connecting to shim 11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88" address="unix:///run/containerd/s/6f700e12b0b233347702f8fdd9c975e4ec921c438c2e578f82357a242379d7be" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:45.233350 systemd[1]: Started cri-containerd-11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88.scope - libcontainer container 11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88. May 13 23:57:45.281859 containerd[1489]: time="2025-05-13T23:57:45.281737421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v559j,Uid:d063e5cd-61ed-45ac-b29b-1e7397ca7f11,Namespace:kube-system,Attempt:0,} returns sandbox id \"11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88\"" May 13 23:57:45.282722 kubelet[2559]: E0513 23:57:45.282697 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:45.287114 containerd[1489]: time="2025-05-13T23:57:45.286784055Z" level=info msg="CreateContainer within sandbox \"11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:57:45.297943 containerd[1489]: time="2025-05-13T23:57:45.297909237Z" level=info msg="Container c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:45.304122 containerd[1489]: time="2025-05-13T23:57:45.304089676Z" level=info msg="CreateContainer within sandbox \"11688b73b2f096f66cb4e7996446332e3a7319c2b93f994961021de6c86f4a88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e\"" May 13 23:57:45.305522 containerd[1489]: time="2025-05-13T23:57:45.304730206Z" level=info msg="StartContainer for \"c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e\"" May 13 23:57:45.305522 containerd[1489]: time="2025-05-13T23:57:45.305449252Z" level=info msg="connecting to shim c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e" address="unix:///run/containerd/s/6f700e12b0b233347702f8fdd9c975e4ec921c438c2e578f82357a242379d7be" protocol=ttrpc version=3 May 13 23:57:45.324172 systemd[1]: Started cri-containerd-c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e.scope - libcontainer container c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e. 
May 13 23:57:45.355468 containerd[1489]: time="2025-05-13T23:57:45.355405459Z" level=info msg="StartContainer for \"c906bd80de4aa8ab26116dd1cd81897137142ac1c39691c34a54ab34274fdc5e\" returns successfully" May 13 23:57:46.227377 kubelet[2559]: E0513 23:57:46.227058 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:46.250991 kubelet[2559]: I0513 23:57:46.250521 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v559j" podStartSLOduration=22.250498409 podStartE2EDuration="22.250498409s" podCreationTimestamp="2025-05-13 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:46.239933566 +0000 UTC m=+28.239316200" watchObservedRunningTime="2025-05-13 23:57:46.250498409 +0000 UTC m=+28.249881053" May 13 23:57:46.816236 systemd-networkd[1389]: cni0: Gained IPv6LL May 13 23:57:46.818971 systemd-networkd[1389]: veth4e3c528a: Gained IPv6LL May 13 23:57:47.228856 kubelet[2559]: E0513 23:57:47.228811 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:48.131508 kubelet[2559]: E0513 23:57:48.131000 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:48.132510 containerd[1489]: time="2025-05-13T23:57:48.132155299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kpnc5,Uid:dfa071a2-cae0-4c56-9efc-38271ff68c09,Namespace:kube-system,Attempt:0,}" May 13 23:57:48.147677 systemd-networkd[1389]: veth02a62f99: Link UP May 13 23:57:48.150784 kernel: cni0: port 2(veth02a62f99) entered blocking state May 13 23:57:48.150893 kernel: cni0: port 2(veth02a62f99) entered disabled state May 13 23:57:48.152136 kernel: veth02a62f99: entered allmulticast mode May 13 23:57:48.153367 kernel: veth02a62f99: entered promiscuous mode May 13 23:57:48.163612 kernel: cni0: port 2(veth02a62f99) entered blocking state May 13 23:57:48.163746 kernel: cni0: port 2(veth02a62f99) entered forwarding state May 13 23:57:48.163957 systemd-networkd[1389]: veth02a62f99: Gained carrier May 13 23:57:48.166582 containerd[1489]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} May 13 23:57:48.166582 containerd[1489]: delegateAdd: netconf sent to delegate plugin: May 13 23:57:48.190570 containerd[1489]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:57:48.190516287Z" level=info msg="connecting to shim 
cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655" address="unix:///run/containerd/s/d187c90402a14a7b40ca3785b7a1976f9b4874a22d0c472a106d24694c3e2cea" namespace=k8s.io protocol=ttrpc version=3 May 13 23:57:48.224211 systemd[1]: Started cri-containerd-cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655.scope - libcontainer container cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655. May 13 23:57:48.233927 kubelet[2559]: E0513 23:57:48.233519 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:48.291198 containerd[1489]: time="2025-05-13T23:57:48.291065686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kpnc5,Uid:dfa071a2-cae0-4c56-9efc-38271ff68c09,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655\"" May 13 23:57:48.293373 kubelet[2559]: E0513 23:57:48.292325 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:48.297054 containerd[1489]: time="2025-05-13T23:57:48.296991174Z" level=info msg="CreateContainer within sandbox \"cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:57:48.309153 containerd[1489]: time="2025-05-13T23:57:48.309113195Z" level=info msg="Container 10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c: CDI devices from CRI Config.CDIDevices: []" May 13 23:57:48.319392 containerd[1489]: time="2025-05-13T23:57:48.319263581Z" level=info msg="CreateContainer within sandbox \"cc2a203b59bd25af0eb764cefff5018dd938dd191506347ac0f0a862f826e655\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c\"" May 13 23:57:48.320894 containerd[1489]: time="2025-05-13T23:57:48.319792964Z" level=info msg="StartContainer for \"10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c\"" May 13 23:57:48.320894 containerd[1489]: time="2025-05-13T23:57:48.320559711Z" level=info msg="connecting to shim 10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c" address="unix:///run/containerd/s/d187c90402a14a7b40ca3785b7a1976f9b4874a22d0c472a106d24694c3e2cea" protocol=ttrpc version=3 May 13 23:57:48.343279 systemd[1]: Started cri-containerd-10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c.scope - libcontainer container 10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c. 
May 13 23:57:48.385902 containerd[1489]: time="2025-05-13T23:57:48.385773214Z" level=info msg="StartContainer for \"10b422a73bb2b0a5d511175ef1a6df31cdb1d87f9bb2ab80486dd3b9ce60ab0c\" returns successfully" May 13 23:57:49.238310 kubelet[2559]: E0513 23:57:49.237619 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:49.268174 kubelet[2559]: I0513 23:57:49.267976 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kpnc5" podStartSLOduration=25.267953195 podStartE2EDuration="25.267953195s" podCreationTimestamp="2025-05-13 23:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:57:49.248448441 +0000 UTC m=+31.247831064" watchObservedRunningTime="2025-05-13 23:57:49.267953195 +0000 UTC m=+31.267335830" May 13 23:57:49.824310 systemd-networkd[1389]: veth02a62f99: Gained IPv6LL May 13 23:57:50.240451 kubelet[2559]: E0513 23:57:50.240399 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:51.242378 kubelet[2559]: E0513 23:57:51.242283 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 13 23:57:57.655595 systemd[1]: Started sshd@7-24.199.96.208:22-218.92.0.188:37011.service - OpenSSH per-connection server daemon (218.92.0.188:37011). May 13 23:57:58.070369 systemd[1]: Started sshd@8-24.199.96.208:22-147.75.109.163:35000.service - OpenSSH per-connection server daemon (147.75.109.163:35000). May 13 23:57:58.120805 sshd[3477]: Accepted publickey for core from 147.75.109.163 port 35000 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:57:58.122233 sshd-session[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:57:58.127249 systemd-logind[1465]: New session 6 of user core. May 13 23:57:58.133312 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:57:58.300930 sshd[3479]: Connection closed by 147.75.109.163 port 35000 May 13 23:57:58.301548 sshd-session[3477]: pam_unix(sshd:session): session closed for user core May 13 23:57:58.309950 systemd[1]: sshd@8-24.199.96.208:22-147.75.109.163:35000.service: Deactivated successfully. May 13 23:57:58.310021 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. May 13 23:57:58.314323 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:57:58.316602 systemd-logind[1465]: Removed session 6. May 13 23:57:58.814654 sshd-session[3513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 23:58:00.833377 sshd[3474]: PAM: Permission denied for root from 218.92.0.188 May 13 23:58:01.144409 sshd-session[3514]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 23:58:03.103351 sshd[3474]: PAM: Permission denied for root from 218.92.0.188 May 13 23:58:03.316477 systemd[1]: Started sshd@9-24.199.96.208:22-147.75.109.163:35016.service - OpenSSH per-connection server daemon (147.75.109.163:35016). 
May 13 23:58:03.370451 sshd[3517]: Accepted publickey for core from 147.75.109.163 port 35016 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:03.371537 sshd-session[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:03.378519 systemd-logind[1465]: New session 7 of user core. May 13 23:58:03.385185 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:58:03.414154 sshd-session[3515]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.188 user=root May 13 23:58:03.518425 sshd[3519]: Connection closed by 147.75.109.163 port 35016 May 13 23:58:03.519272 sshd-session[3517]: pam_unix(sshd:session): session closed for user core May 13 23:58:03.524269 systemd[1]: sshd@9-24.199.96.208:22-147.75.109.163:35016.service: Deactivated successfully. May 13 23:58:03.526617 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:58:03.527689 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. May 13 23:58:03.528715 systemd-logind[1465]: Removed session 7. May 13 23:58:05.784227 sshd[3474]: PAM: Permission denied for root from 218.92.0.188 May 13 23:58:05.939687 sshd[3474]: Received disconnect from 218.92.0.188 port 37011:11: [preauth] May 13 23:58:05.939687 sshd[3474]: Disconnected from authenticating user root 218.92.0.188 port 37011 [preauth] May 13 23:58:05.941827 systemd[1]: sshd@7-24.199.96.208:22-218.92.0.188:37011.service: Deactivated successfully. May 13 23:58:08.532228 systemd[1]: Started sshd@10-24.199.96.208:22-147.75.109.163:54872.service - OpenSSH per-connection server daemon (147.75.109.163:54872). May 13 23:58:08.596877 sshd[3560]: Accepted publickey for core from 147.75.109.163 port 54872 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:08.598348 sshd-session[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:08.604273 systemd-logind[1465]: New session 8 of user core. May 13 23:58:08.611184 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:58:08.735337 sshd[3562]: Connection closed by 147.75.109.163 port 54872 May 13 23:58:08.735810 sshd-session[3560]: pam_unix(sshd:session): session closed for user core May 13 23:58:08.747772 systemd[1]: sshd@10-24.199.96.208:22-147.75.109.163:54872.service: Deactivated successfully. May 13 23:58:08.750029 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:58:08.751560 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. May 13 23:58:08.753148 systemd[1]: Started sshd@11-24.199.96.208:22-147.75.109.163:54876.service - OpenSSH per-connection server daemon (147.75.109.163:54876). May 13 23:58:08.755318 systemd-logind[1465]: Removed session 8. May 13 23:58:08.808806 sshd[3588]: Accepted publickey for core from 147.75.109.163 port 54876 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:08.810190 sshd-session[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:08.816021 systemd-logind[1465]: New session 9 of user core. May 13 23:58:08.824236 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:58:09.007170 sshd[3591]: Connection closed by 147.75.109.163 port 54876 May 13 23:58:09.008667 sshd-session[3588]: pam_unix(sshd:session): session closed for user core May 13 23:58:09.019114 systemd[1]: sshd@11-24.199.96.208:22-147.75.109.163:54876.service: Deactivated successfully. 
May 13 23:58:09.023688 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:58:09.026680 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. May 13 23:58:09.031272 systemd[1]: Started sshd@12-24.199.96.208:22-147.75.109.163:54888.service - OpenSSH per-connection server daemon (147.75.109.163:54888). May 13 23:58:09.036289 systemd-logind[1465]: Removed session 9. May 13 23:58:09.093944 sshd[3599]: Accepted publickey for core from 147.75.109.163 port 54888 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:09.095451 sshd-session[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:09.104413 systemd-logind[1465]: New session 10 of user core. May 13 23:58:09.110234 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:58:09.258264 sshd[3602]: Connection closed by 147.75.109.163 port 54888 May 13 23:58:09.258136 sshd-session[3599]: pam_unix(sshd:session): session closed for user core May 13 23:58:09.262973 systemd[1]: sshd@12-24.199.96.208:22-147.75.109.163:54888.service: Deactivated successfully. May 13 23:58:09.265854 systemd[1]: session-10.scope: Deactivated successfully. May 13 23:58:09.268048 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. May 13 23:58:09.269394 systemd-logind[1465]: Removed session 10. May 13 23:58:14.274382 systemd[1]: Started sshd@13-24.199.96.208:22-147.75.109.163:54900.service - OpenSSH per-connection server daemon (147.75.109.163:54900). May 13 23:58:14.328034 sshd[3635]: Accepted publickey for core from 147.75.109.163 port 54900 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:14.329557 sshd-session[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:14.335325 systemd-logind[1465]: New session 11 of user core. May 13 23:58:14.343195 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 23:58:14.479594 sshd[3637]: Connection closed by 147.75.109.163 port 54900 May 13 23:58:14.480445 sshd-session[3635]: pam_unix(sshd:session): session closed for user core May 13 23:58:14.491812 systemd[1]: sshd@13-24.199.96.208:22-147.75.109.163:54900.service: Deactivated successfully. May 13 23:58:14.494832 systemd[1]: session-11.scope: Deactivated successfully. May 13 23:58:14.497066 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. May 13 23:58:14.499617 systemd[1]: Started sshd@14-24.199.96.208:22-147.75.109.163:54908.service - OpenSSH per-connection server daemon (147.75.109.163:54908). May 13 23:58:14.500955 systemd-logind[1465]: Removed session 11. May 13 23:58:14.568789 sshd[3648]: Accepted publickey for core from 147.75.109.163 port 54908 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:14.569919 sshd-session[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:14.579821 systemd-logind[1465]: New session 12 of user core. May 13 23:58:14.584236 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 23:58:14.799564 sshd[3651]: Connection closed by 147.75.109.163 port 54908 May 13 23:58:14.800249 sshd-session[3648]: pam_unix(sshd:session): session closed for user core May 13 23:58:14.822897 systemd[1]: sshd@14-24.199.96.208:22-147.75.109.163:54908.service: Deactivated successfully. May 13 23:58:14.825485 systemd[1]: session-12.scope: Deactivated successfully. May 13 23:58:14.827558 systemd-logind[1465]: Session 12 logged out. 
Waiting for processes to exit. May 13 23:58:14.829626 systemd[1]: Started sshd@15-24.199.96.208:22-147.75.109.163:54912.service - OpenSSH per-connection server daemon (147.75.109.163:54912). May 13 23:58:14.830808 systemd-logind[1465]: Removed session 12. May 13 23:58:14.893460 sshd[3659]: Accepted publickey for core from 147.75.109.163 port 54912 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:14.894854 sshd-session[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:14.900799 systemd-logind[1465]: New session 13 of user core. May 13 23:58:14.908191 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 23:58:16.245438 sshd[3662]: Connection closed by 147.75.109.163 port 54912 May 13 23:58:16.246756 sshd-session[3659]: pam_unix(sshd:session): session closed for user core May 13 23:58:16.260392 systemd[1]: sshd@15-24.199.96.208:22-147.75.109.163:54912.service: Deactivated successfully. May 13 23:58:16.262401 systemd[1]: session-13.scope: Deactivated successfully. May 13 23:58:16.265620 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. May 13 23:58:16.270275 systemd[1]: Started sshd@16-24.199.96.208:22-147.75.109.163:54928.service - OpenSSH per-connection server daemon (147.75.109.163:54928). May 13 23:58:16.272936 systemd-logind[1465]: Removed session 13. May 13 23:58:16.332741 sshd[3679]: Accepted publickey for core from 147.75.109.163 port 54928 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:16.334115 sshd-session[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:16.340067 systemd-logind[1465]: New session 14 of user core. May 13 23:58:16.345203 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 23:58:16.594953 sshd[3682]: Connection closed by 147.75.109.163 port 54928 May 13 23:58:16.597407 sshd-session[3679]: pam_unix(sshd:session): session closed for user core May 13 23:58:16.609495 systemd[1]: sshd@16-24.199.96.208:22-147.75.109.163:54928.service: Deactivated successfully. May 13 23:58:16.612665 systemd[1]: session-14.scope: Deactivated successfully. May 13 23:58:16.613989 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. May 13 23:58:16.617343 systemd[1]: Started sshd@17-24.199.96.208:22-147.75.109.163:54930.service - OpenSSH per-connection server daemon (147.75.109.163:54930). May 13 23:58:16.618507 systemd-logind[1465]: Removed session 14. May 13 23:58:16.668248 sshd[3691]: Accepted publickey for core from 147.75.109.163 port 54930 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:58:16.669705 sshd-session[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:58:16.675189 systemd-logind[1465]: New session 15 of user core. May 13 23:58:16.691262 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 23:58:16.836081 sshd[3694]: Connection closed by 147.75.109.163 port 54930 May 13 23:58:16.836756 sshd-session[3691]: pam_unix(sshd:session): session closed for user core May 13 23:58:16.839868 systemd[1]: sshd@17-24.199.96.208:22-147.75.109.163:54930.service: Deactivated successfully. May 13 23:58:16.842323 systemd[1]: session-15.scope: Deactivated successfully. May 13 23:58:16.844274 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. May 13 23:58:16.845479 systemd-logind[1465]: Removed session 15. 
May 13 23:58:21.852110 systemd[1]: Started sshd@18-24.199.96.208:22-147.75.109.163:50800.service - OpenSSH per-connection server daemon (147.75.109.163:50800).
May 13 23:58:21.906316 sshd[3729]: Accepted publickey for core from 147.75.109.163 port 50800 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:58:21.907674 sshd-session[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:21.912728 systemd-logind[1465]: New session 16 of user core.
May 13 23:58:21.924238 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:58:22.098956 sshd[3731]: Connection closed by 147.75.109.163 port 50800
May 13 23:58:22.099756 sshd-session[3729]: pam_unix(sshd:session): session closed for user core
May 13 23:58:22.103105 systemd[1]: sshd@18-24.199.96.208:22-147.75.109.163:50800.service: Deactivated successfully.
May 13 23:58:22.106998 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:58:22.111186 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit.
May 13 23:58:22.112520 systemd-logind[1465]: Removed session 16.
May 13 23:58:27.113097 systemd[1]: Started sshd@19-24.199.96.208:22-147.75.109.163:50810.service - OpenSSH per-connection server daemon (147.75.109.163:50810).
May 13 23:58:27.163568 sshd[3769]: Accepted publickey for core from 147.75.109.163 port 50810 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:58:27.164862 sshd-session[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:27.171302 systemd-logind[1465]: New session 17 of user core.
May 13 23:58:27.178198 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:58:27.295829 sshd[3771]: Connection closed by 147.75.109.163 port 50810
May 13 23:58:27.296197 sshd-session[3769]: pam_unix(sshd:session): session closed for user core
May 13 23:58:27.299204 systemd[1]: sshd@19-24.199.96.208:22-147.75.109.163:50810.service: Deactivated successfully.
May 13 23:58:27.301834 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:58:27.303466 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit.
May 13 23:58:27.304409 systemd-logind[1465]: Removed session 17.
May 13 23:58:32.131374 kubelet[2559]: E0513 23:58:32.130931 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 13 23:58:32.314959 systemd[1]: Started sshd@20-24.199.96.208:22-147.75.109.163:33902.service - OpenSSH per-connection server daemon (147.75.109.163:33902).
May 13 23:58:32.373585 sshd[3804]: Accepted publickey for core from 147.75.109.163 port 33902 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:58:32.375203 sshd-session[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:32.382225 systemd-logind[1465]: New session 18 of user core.
May 13 23:58:32.385233 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:58:32.510045 sshd[3806]: Connection closed by 147.75.109.163 port 33902
May 13 23:58:32.510634 sshd-session[3804]: pam_unix(sshd:session): session closed for user core
May 13 23:58:32.514139 systemd[1]: sshd@20-24.199.96.208:22-147.75.109.163:33902.service: Deactivated successfully.
May 13 23:58:32.515947 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:58:32.516730 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit.
May 13 23:58:32.517997 systemd-logind[1465]: Removed session 18.
May 13 23:58:36.133033 kubelet[2559]: E0513 23:58:36.132090 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
May 13 23:58:37.523318 systemd[1]: Started sshd@21-24.199.96.208:22-147.75.109.163:33918.service - OpenSSH per-connection server daemon (147.75.109.163:33918).
May 13 23:58:37.579364 sshd[3838]: Accepted publickey for core from 147.75.109.163 port 33918 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:58:37.580857 sshd-session[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:58:37.587143 systemd-logind[1465]: New session 19 of user core.
May 13 23:58:37.597293 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:58:37.720150 sshd[3840]: Connection closed by 147.75.109.163 port 33918
May 13 23:58:37.720833 sshd-session[3838]: pam_unix(sshd:session): session closed for user core
May 13 23:58:37.725282 systemd[1]: sshd@21-24.199.96.208:22-147.75.109.163:33918.service: Deactivated successfully.
May 13 23:58:37.727212 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:58:37.727969 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit.
May 13 23:58:37.728973 systemd-logind[1465]: Removed session 19.
May 13 23:58:38.131744 kubelet[2559]: E0513 23:58:38.131342 2559 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
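
Note on the recurring kubelet dns.go:153 entries above: kubelet emits "Nameserver limits exceeded" when the resolv.conf it reads for the node lists more nameserver entries than the resolver limit of three, so it truncates the list it applies (here: 67.207.67.3 67.207.67.2 67.207.67.3). The sketch below is a hypothetical stand-alone check that approximates this behaviour for troubleshooting; it is not kubelet code, and the path and limit are assumptions based on the standard glibc limit of three nameservers.

# approximate_dns_limit_check.py (hypothetical helper, not part of kubelet)
from pathlib import Path

MAX_NAMESERVERS = 3  # glibc resolver limit, mirrored by kubelet's DNS checks

def applied_nameservers(resolv_conf: str = "/etc/resolv.conf") -> list[str]:
    """Return the nameservers that would actually be applied, warning on excess."""
    servers = []
    for line in Path(resolv_conf).read_text().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    if len(servers) > MAX_NAMESERVERS:
        print(f"warning: {len(servers)} nameservers configured, only the first "
              f"{MAX_NAMESERVERS} will be applied: {' '.join(servers[:MAX_NAMESERVERS])}")
    return servers[:MAX_NAMESERVERS]

if __name__ == "__main__":
    print(applied_nameservers())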