May 13 23:53:23.007279 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 13 23:53:23.007331 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:53:23.007352 kernel: BIOS-provided physical RAM map: May 13 23:53:23.007364 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 23:53:23.007375 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 23:53:23.007386 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 23:53:23.007400 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable May 13 23:53:23.007428 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved May 13 23:53:23.007457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 23:53:23.007469 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 23:53:23.007485 kernel: NX (Execute Disable) protection: active May 13 23:53:23.007496 kernel: APIC: Static calls initialized May 13 23:53:23.007514 kernel: SMBIOS 2.8 present. May 13 23:53:23.007526 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 May 13 23:53:23.007542 kernel: Hypervisor detected: KVM May 13 23:53:23.007554 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 23:53:23.007576 kernel: kvm-clock: using sched offset of 3719165999 cycles May 13 23:53:23.007590 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 23:53:23.007603 kernel: tsc: Detected 1999.999 MHz processor May 13 23:53:23.007617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:53:23.007631 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:53:23.007645 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 May 13 23:53:23.007657 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 13 23:53:23.007669 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:53:23.007682 kernel: ACPI: Early table checksum verification disabled May 13 23:53:23.007699 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) May 13 23:53:23.007712 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007725 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007738 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007750 kernel: ACPI: FACS 0x000000007FFE0000 000040 May 13 23:53:23.007763 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007776 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007789 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007806 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:53:23.007819 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] May 13 23:53:23.007831 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] May 13 23:53:23.007843 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] May 13 23:53:23.007856 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] May 13 23:53:23.007869 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] May 13 23:53:23.007882 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] May 13 23:53:23.007901 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] May 13 23:53:23.007917 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 23:53:23.007930 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 13 23:53:23.007944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 23:53:23.007958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 23:53:23.007977 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] May 13 23:53:23.007990 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] May 13 23:53:23.008002 kernel: Zone ranges: May 13 23:53:23.008020 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 23:53:23.008033 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] May 13 23:53:23.008047 kernel: Normal empty May 13 23:53:23.008060 kernel: Movable zone start for each node May 13 23:53:23.008073 kernel: Early memory node ranges May 13 23:53:23.008085 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 23:53:23.008099 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] May 13 23:53:23.008113 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] May 13 23:53:23.008127 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:53:23.008144 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 23:53:23.008162 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges May 13 23:53:23.008174 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 23:53:23.008188 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 23:53:23.008202 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 23:53:23.008215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 23:53:23.008229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 23:53:23.008243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:53:23.008256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 23:53:23.008273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 23:53:23.008287 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:53:23.008301 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 13 23:53:23.008315 kernel: TSC deadline timer available May 13 23:53:23.008328 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 23:53:23.008342 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 13 23:53:23.008355 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices May 13 23:53:23.008373 kernel: Booting paravirtualized kernel on KVM May 13 23:53:23.008381 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 23:53:23.008394 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 23:53:23.008407 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 May 13 23:53:23.008447 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 23:53:23.008460 kernel: pcpu-alloc: [0] 0 1 May 13 23:53:23.008473 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 23:53:23.008490 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:53:23.008504 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:53:23.008518 kernel: random: crng init done May 13 23:53:23.008537 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:53:23.008551 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 23:53:23.008565 kernel: Fallback order for Node 0: 0 May 13 23:53:23.008578 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 May 13 23:53:23.008592 kernel: Policy zone: DMA32 May 13 23:53:23.008601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:53:23.008610 kernel: Memory: 1967108K/2096612K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129244K reserved, 0K cma-reserved) May 13 23:53:23.008618 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:53:23.008627 kernel: Kernel/User page tables isolation: enabled May 13 23:53:23.008643 kernel: ftrace: allocating 37993 entries in 149 pages May 13 23:53:23.008655 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:53:23.008667 kernel: Dynamic Preempt: voluntary May 13 23:53:23.008675 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:53:23.008685 kernel: rcu: RCU event tracing is enabled. May 13 23:53:23.008693 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:53:23.008701 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:53:23.008709 kernel: Rude variant of Tasks RCU enabled. May 13 23:53:23.008717 kernel: Tracing variant of Tasks RCU enabled. May 13 23:53:23.008728 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 23:53:23.008737 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:53:23.008745 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 23:53:23.008753 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 13 23:53:23.008766 kernel: Console: colour VGA+ 80x25 May 13 23:53:23.008773 kernel: printk: console [tty0] enabled May 13 23:53:23.008781 kernel: printk: console [ttyS0] enabled May 13 23:53:23.008789 kernel: ACPI: Core revision 20230628 May 13 23:53:23.008797 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 13 23:53:23.008808 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:53:23.008816 kernel: x2apic enabled May 13 23:53:23.008823 kernel: APIC: Switched APIC routing to: physical x2apic May 13 23:53:23.008831 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 23:53:23.008839 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns May 13 23:53:23.008847 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999) May 13 23:53:23.008855 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 23:53:23.008863 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 23:53:23.008883 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:53:23.008891 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:53:23.008900 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:53:23.008909 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 13 23:53:23.008921 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 23:53:23.008929 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 23:53:23.008938 kernel: MDS: Mitigation: Clear CPU buffers May 13 23:53:23.008946 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:53:23.008959 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:53:23.008971 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:53:23.008979 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:53:23.008988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:53:23.008996 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 13 23:53:23.009005 kernel: Freeing SMP alternatives memory: 32K May 13 23:53:23.009013 kernel: pid_max: default: 32768 minimum: 301 May 13 23:53:23.009024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:53:23.009038 kernel: landlock: Up and running. May 13 23:53:23.009051 kernel: SELinux: Initializing. May 13 23:53:23.009065 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:53:23.009074 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 23:53:23.009082 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) May 13 23:53:23.009091 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:53:23.009100 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:53:23.009109 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:53:23.009117 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
May 13 23:53:23.009126 kernel: signal: max sigframe size: 1776 May 13 23:53:23.009138 kernel: rcu: Hierarchical SRCU implementation. May 13 23:53:23.009147 kernel: rcu: Max phase no-delay instances is 400. May 13 23:53:23.009155 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 23:53:23.009164 kernel: smp: Bringing up secondary CPUs ... May 13 23:53:23.009172 kernel: smpboot: x86: Booting SMP configuration: May 13 23:53:23.009181 kernel: .... node #0, CPUs: #1 May 13 23:53:23.009189 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:53:23.009198 kernel: smpboot: Max logical packages: 1 May 13 23:53:23.009211 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) May 13 23:53:23.009219 kernel: devtmpfs: initialized May 13 23:53:23.009231 kernel: x86/mm: Memory block size: 128MB May 13 23:53:23.009243 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:53:23.009258 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:53:23.009272 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:53:23.009285 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:53:23.009296 kernel: audit: initializing netlink subsys (disabled) May 13 23:53:23.009309 kernel: audit: type=2000 audit(1747180401.286:1): state=initialized audit_enabled=0 res=1 May 13 23:53:23.009321 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:53:23.009338 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:53:23.009351 kernel: cpuidle: using governor menu May 13 23:53:23.009364 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:53:23.009377 kernel: dca service started, version 1.12.1 May 13 23:53:23.009391 kernel: PCI: Using configuration type 1 for base access May 13 23:53:23.009403 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 23:53:23.009429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:53:23.009449 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:53:23.009458 kernel: ACPI: Added _OSI(Module Device) May 13 23:53:23.009467 kernel: ACPI: Added _OSI(Processor Device) May 13 23:53:23.009480 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:53:23.009489 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:53:23.009498 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:53:23.009506 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:53:23.009515 kernel: ACPI: Interpreter enabled May 13 23:53:23.009523 kernel: ACPI: PM: (supports S0 S5) May 13 23:53:23.009532 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:53:23.009541 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:53:23.009549 kernel: PCI: Using E820 reservations for host bridge windows May 13 23:53:23.009562 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 23:53:23.009571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:53:23.009851 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 23:53:23.009984 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 13 23:53:23.010092 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 13 23:53:23.010104 kernel: acpiphp: Slot [3] registered May 13 23:53:23.010114 kernel: acpiphp: Slot [4] registered May 13 23:53:23.010130 kernel: acpiphp: Slot [5] registered May 13 23:53:23.010139 kernel: acpiphp: Slot [6] registered May 13 23:53:23.010147 kernel: acpiphp: Slot [7] registered May 13 23:53:23.010156 kernel: acpiphp: Slot [8] registered May 13 23:53:23.010164 kernel: acpiphp: Slot [9] registered May 13 23:53:23.010173 kernel: acpiphp: Slot [10] registered May 13 23:53:23.010182 kernel: acpiphp: Slot [11] registered May 13 23:53:23.010190 kernel: acpiphp: Slot [12] registered May 13 23:53:23.010199 kernel: acpiphp: Slot [13] registered May 13 23:53:23.010212 kernel: acpiphp: Slot [14] registered May 13 23:53:23.010220 kernel: acpiphp: Slot [15] registered May 13 23:53:23.010229 kernel: acpiphp: Slot [16] registered May 13 23:53:23.010237 kernel: acpiphp: Slot [17] registered May 13 23:53:23.010245 kernel: acpiphp: Slot [18] registered May 13 23:53:23.010254 kernel: acpiphp: Slot [19] registered May 13 23:53:23.010262 kernel: acpiphp: Slot [20] registered May 13 23:53:23.010270 kernel: acpiphp: Slot [21] registered May 13 23:53:23.010279 kernel: acpiphp: Slot [22] registered May 13 23:53:23.010288 kernel: acpiphp: Slot [23] registered May 13 23:53:23.010300 kernel: acpiphp: Slot [24] registered May 13 23:53:23.010309 kernel: acpiphp: Slot [25] registered May 13 23:53:23.010317 kernel: acpiphp: Slot [26] registered May 13 23:53:23.010326 kernel: acpiphp: Slot [27] registered May 13 23:53:23.010334 kernel: acpiphp: Slot [28] registered May 13 23:53:23.010343 kernel: acpiphp: Slot [29] registered May 13 23:53:23.010351 kernel: acpiphp: Slot [30] registered May 13 23:53:23.010360 kernel: acpiphp: Slot [31] registered May 13 23:53:23.010369 kernel: PCI host bridge to bus 0000:00 May 13 23:53:23.010564 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 23:53:23.010669 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
May 13 23:53:23.010782 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 23:53:23.010926 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 13 23:53:23.011047 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] May 13 23:53:23.011173 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:53:23.011457 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 23:53:23.011655 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 23:53:23.011807 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 23:53:23.011961 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] May 13 23:53:23.012078 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 23:53:23.012184 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 23:53:23.012285 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 23:53:23.012464 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 23:53:23.012606 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 May 13 23:53:23.012710 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] May 13 23:53:23.012833 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 23:53:23.012937 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 23:53:23.013036 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 23:53:23.013221 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 23:53:23.013368 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 23:53:23.013576 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] May 13 23:53:23.013695 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] May 13 23:53:23.013811 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] May 13 23:53:23.013968 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 23:53:23.014160 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 23:53:23.014327 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] May 13 23:53:23.014503 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] May 13 23:53:23.014660 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] May 13 23:53:23.014840 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 13 23:53:23.015003 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] May 13 23:53:23.015165 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] May 13 23:53:23.015348 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] May 13 23:53:23.015583 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 May 13 23:53:23.015748 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] May 13 23:53:23.015912 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] May 13 23:53:23.016070 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] May 13 23:53:23.016282 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 May 13 23:53:23.016618 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] May 13 23:53:23.016771 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] May 13 23:53:23.016934 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] May 13 23:53:23.017104 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 May 13 23:53:23.017272 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] May 13 23:53:23.017894 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] May 13 23:53:23.018065 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] May 13 23:53:23.018254 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 May 13 23:53:23.018432 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] May 13 23:53:23.018597 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] May 13 23:53:23.018617 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 23:53:23.018633 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 23:53:23.018648 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 23:53:23.018664 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 23:53:23.018680 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 13 23:53:23.018695 kernel: iommu: Default domain type: Translated May 13 23:53:23.018717 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:53:23.018732 kernel: PCI: Using ACPI for IRQ routing May 13 23:53:23.018748 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 23:53:23.018763 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 23:53:23.018780 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] May 13 23:53:23.018942 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 23:53:23.019090 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 23:53:23.019259 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 23:53:23.019279 kernel: vgaarb: loaded May 13 23:53:23.019303 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 23:53:23.019319 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 23:53:23.019334 kernel: clocksource: Switched to clocksource kvm-clock May 13 23:53:23.019349 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:53:23.019365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:53:23.019380 kernel: pnp: PnP ACPI init May 13 23:53:23.019395 kernel: pnp: PnP ACPI: found 4 devices May 13 23:53:23.020003 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:53:23.020028 kernel: NET: Registered PF_INET protocol family May 13 23:53:23.020051 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:53:23.020067 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 23:53:23.020082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:53:23.020098 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:53:23.020124 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 23:53:23.020140 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 23:53:23.020155 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:53:23.020170 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 23:53:23.020186 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:53:23.020205 kernel: NET: Registered PF_XDP protocol family May 13 23:53:23.020406 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 23:53:23.020630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 
23:53:23.020760 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 23:53:23.020890 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 13 23:53:23.021640 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] May 13 23:53:23.021826 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 23:53:23.021993 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 23:53:23.022025 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 23:53:23.022180 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 49291 usecs May 13 23:53:23.022201 kernel: PCI: CLS 0 bytes, default 64 May 13 23:53:23.022217 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 23:53:23.022233 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns May 13 23:53:23.022249 kernel: Initialise system trusted keyrings May 13 23:53:23.022265 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 23:53:23.022281 kernel: Key type asymmetric registered May 13 23:53:23.022302 kernel: Asymmetric key parser 'x509' registered May 13 23:53:23.022318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:53:23.022334 kernel: io scheduler mq-deadline registered May 13 23:53:23.022349 kernel: io scheduler kyber registered May 13 23:53:23.022364 kernel: io scheduler bfq registered May 13 23:53:23.022380 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:53:23.022395 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 23:53:23.024535 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 23:53:23.024581 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 23:53:23.024607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:53:23.024622 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:53:23.024638 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 23:53:23.024654 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 23:53:23.024684 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 23:53:23.024700 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 23:53:23.024992 kernel: rtc_cmos 00:03: RTC can wake from S4 May 13 23:53:23.025186 kernel: rtc_cmos 00:03: registered as rtc0 May 13 23:53:23.025330 kernel: rtc_cmos 00:03: setting system clock to 2025-05-13T23:53:22 UTC (1747180402) May 13 23:53:23.025488 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 13 23:53:23.025507 kernel: intel_pstate: CPU model not supported May 13 23:53:23.025523 kernel: NET: Registered PF_INET6 protocol family May 13 23:53:23.025538 kernel: Segment Routing with IPv6 May 13 23:53:23.025553 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:53:23.025568 kernel: NET: Registered PF_PACKET protocol family May 13 23:53:23.025583 kernel: Key type dns_resolver registered May 13 23:53:23.025599 kernel: IPI shorthand broadcast: enabled May 13 23:53:23.025620 kernel: sched_clock: Marking stable (1159004707, 175825762)->(1480724930, -145894461) May 13 23:53:23.025635 kernel: registered taskstats version 1 May 13 23:53:23.025650 kernel: Loading compiled-in X.509 certificates May 13 23:53:23.025665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:53:23.025680 kernel: Key type .fscrypt 
registered May 13 23:53:23.025694 kernel: Key type fscrypt-provisioning registered May 13 23:53:23.025709 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:53:23.025724 kernel: ima: Allocated hash algorithm: sha1 May 13 23:53:23.025739 kernel: ima: No architecture policies found May 13 23:53:23.025757 kernel: clk: Disabling unused clocks May 13 23:53:23.025772 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:53:23.025787 kernel: Write protecting the kernel read-only data: 40960k May 13 23:53:23.025803 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:53:23.025843 kernel: Run /init as init process May 13 23:53:23.025861 kernel: with arguments: May 13 23:53:23.025877 kernel: /init May 13 23:53:23.025892 kernel: with environment: May 13 23:53:23.025907 kernel: HOME=/ May 13 23:53:23.025926 kernel: TERM=linux May 13 23:53:23.025941 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:53:23.025959 systemd[1]: Successfully made /usr/ read-only. May 13 23:53:23.025981 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:53:23.025999 systemd[1]: Detected virtualization kvm. May 13 23:53:23.026015 systemd[1]: Detected architecture x86-64. May 13 23:53:23.026031 systemd[1]: Running in initrd. May 13 23:53:23.026051 systemd[1]: No hostname configured, using default hostname. May 13 23:53:23.026068 systemd[1]: Hostname set to . May 13 23:53:23.026085 systemd[1]: Initializing machine ID from VM UUID. May 13 23:53:23.026101 systemd[1]: Queued start job for default target initrd.target. May 13 23:53:23.026117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:53:23.026134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:53:23.026152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:53:23.026169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:53:23.026190 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:53:23.026209 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:53:23.026227 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:53:23.026244 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:53:23.026261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:53:23.026278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:53:23.026294 systemd[1]: Reached target paths.target - Path Units. May 13 23:53:23.026316 systemd[1]: Reached target slices.target - Slice Units. May 13 23:53:23.026332 systemd[1]: Reached target swap.target - Swaps. May 13 23:53:23.026353 systemd[1]: Reached target timers.target - Timer Units. May 13 23:53:23.026369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
May 13 23:53:23.026386 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:53:23.026403 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:53:23.027513 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:53:23.027532 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:53:23.027549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:53:23.027566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:53:23.027583 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:53:23.027598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:53:23.027611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:53:23.027626 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:53:23.027648 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:53:23.027667 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:53:23.027682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:53:23.027696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:53:23.027768 systemd-journald[183]: Collecting audit messages is disabled. May 13 23:53:23.027814 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:53:23.027831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:53:23.027850 systemd-journald[183]: Journal started May 13 23:53:23.027891 systemd-journald[183]: Runtime Journal (/run/log/journal/f35ee0b3eab2474cbc45bcb9e1713943) is 4.9M, max 39.3M, 34.3M free. May 13 23:53:23.032456 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:53:23.034898 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:53:23.106569 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:53:23.106617 kernel: Bridge firewalling registered May 13 23:53:23.055390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:53:23.056015 systemd-modules-load[185]: Inserted module 'overlay' May 13 23:53:23.096590 systemd-modules-load[185]: Inserted module 'br_netfilter' May 13 23:53:23.110551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:53:23.117828 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:53:23.126849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:23.130530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:53:23.132750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:53:23.136982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:53:23.139653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:53:23.144727 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:53:23.174375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 23:53:23.176469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:53:23.181671 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:53:23.186582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:53:23.190656 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:53:23.229285 dracut-cmdline[219]: dracut-dracut-053 May 13 23:53:23.236446 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:53:23.243902 systemd-resolved[214]: Positive Trust Anchors: May 13 23:53:23.243922 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:53:23.243957 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:53:23.248273 systemd-resolved[214]: Defaulting to hostname 'linux'. May 13 23:53:23.249895 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:53:23.250634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:53:23.359460 kernel: SCSI subsystem initialized May 13 23:53:23.372508 kernel: Loading iSCSI transport class v2.0-870. May 13 23:53:23.386472 kernel: iscsi: registered transport (tcp) May 13 23:53:23.413597 kernel: iscsi: registered transport (qla4xxx) May 13 23:53:23.413703 kernel: QLogic iSCSI HBA Driver May 13 23:53:23.480770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:53:23.484506 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:53:23.531781 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:53:23.531907 kernel: device-mapper: uevent: version 1.0.3 May 13 23:53:23.531932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:53:23.591545 kernel: raid6: avx2x4 gen() 14503 MB/s May 13 23:53:23.609505 kernel: raid6: avx2x2 gen() 13844 MB/s May 13 23:53:23.627896 kernel: raid6: avx2x1 gen() 9509 MB/s May 13 23:53:23.628034 kernel: raid6: using algorithm avx2x4 gen() 14503 MB/s May 13 23:53:23.646883 kernel: raid6: .... xor() 5739 MB/s, rmw enabled May 13 23:53:23.647032 kernel: raid6: using avx2x2 recovery algorithm May 13 23:53:23.675486 kernel: xor: automatically using best checksumming function avx May 13 23:53:23.848486 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:53:23.867001 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 13 23:53:23.871351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:53:23.911548 systemd-udevd[402]: Using default interface naming scheme 'v255'. May 13 23:53:23.918664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:53:23.924637 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:53:23.968649 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation May 13 23:53:24.016043 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:53:24.019720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:53:24.114621 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:53:24.121453 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:53:24.155081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:53:24.158307 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:53:24.161395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:53:24.162132 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:53:24.167830 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:53:24.206941 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:53:24.244454 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues May 13 23:53:24.250475 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 13 23:53:24.265305 kernel: scsi host0: Virtio SCSI HBA May 13 23:53:24.269686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:53:24.269802 kernel: GPT:9289727 != 125829119 May 13 23:53:24.269822 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:53:24.271689 kernel: GPT:9289727 != 125829119 May 13 23:53:24.271785 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:53:24.272457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:53:24.274442 kernel: libata version 3.00 loaded. May 13 23:53:24.279634 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 23:53:24.288468 kernel: scsi host1: ata_piix May 13 23:53:24.301200 kernel: scsi host2: ata_piix May 13 23:53:24.301674 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 May 13 23:53:24.301699 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 May 13 23:53:24.315483 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:53:24.320502 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues May 13 23:53:24.330049 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) May 13 23:53:24.334627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:53:24.336092 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:53:24.338994 kernel: ACPI: bus type USB registered May 13 23:53:24.339081 kernel: usbcore: registered new interface driver usbfs May 13 23:53:24.341034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
May 13 23:53:24.345398 kernel: usbcore: registered new interface driver hub May 13 23:53:24.345516 kernel: usbcore: registered new device driver usb May 13 23:53:24.345933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:53:24.346259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:24.349313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:53:24.351905 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:53:24.353354 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:53:24.436465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:24.438823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:53:24.489501 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:53:24.525524 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:53:24.538436 kernel: AES CTR mode by8 optimization enabled May 13 23:53:24.538522 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) May 13 23:53:24.556454 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (451) May 13 23:53:24.568186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:53:24.592828 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:53:24.605070 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller May 13 23:53:24.605330 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 May 13 23:53:24.605508 kernel: uhci_hcd 0000:00:01.2: detected 2 ports May 13 23:53:24.608103 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 May 13 23:53:24.610709 kernel: hub 1-0:1.0: USB hub found May 13 23:53:24.610994 kernel: hub 1-0:1.0: 2 ports detected May 13 23:53:24.618808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:53:24.627980 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:53:24.629700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:53:24.632546 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:53:24.656516 disk-uuid[547]: Primary Header is updated. May 13 23:53:24.656516 disk-uuid[547]: Secondary Entries is updated. May 13 23:53:24.656516 disk-uuid[547]: Secondary Header is updated. May 13 23:53:24.662449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:53:24.667481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:53:25.672598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:53:25.673937 disk-uuid[548]: The operation has completed successfully. May 13 23:53:25.748178 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:53:25.748392 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:53:25.785058 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
May 13 23:53:25.805080 sh[559]: Success May 13 23:53:25.823523 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 23:53:25.899919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:53:25.906629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:53:25.919374 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:53:25.937601 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 13 23:53:25.937748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:53:25.937772 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:53:25.939743 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:53:25.941327 kernel: BTRFS info (device dm-0): using free space tree May 13 23:53:25.953451 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:53:25.955517 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:53:25.957530 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:53:25.961715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:53:26.001497 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:53:26.005038 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:53:26.005164 kernel: BTRFS info (device vda6): using free space tree May 13 23:53:26.010472 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:53:26.017529 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:53:26.022276 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:53:26.026625 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:53:26.161688 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:53:26.166721 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:53:26.213534 ignition[654]: Ignition 2.20.0 May 13 23:53:26.214554 ignition[654]: Stage: fetch-offline May 13 23:53:26.214985 ignition[654]: no configs at "/usr/lib/ignition/base.d" May 13 23:53:26.215011 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:26.219005 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:53:26.215181 ignition[654]: parsed url from cmdline: "" May 13 23:53:26.215186 ignition[654]: no config URL provided May 13 23:53:26.215261 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:53:26.215289 ignition[654]: no config at "/usr/lib/ignition/user.ign" May 13 23:53:26.215300 ignition[654]: failed to fetch config: resource requires networking May 13 23:53:26.215628 ignition[654]: Ignition finished successfully May 13 23:53:26.226198 systemd-networkd[742]: lo: Link UP May 13 23:53:26.226206 systemd-networkd[742]: lo: Gained carrier May 13 23:53:26.230462 systemd-networkd[742]: Enumeration completed May 13 23:53:26.231100 systemd-networkd[742]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
May 13 23:53:26.231106 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. May 13 23:53:26.232325 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:53:26.232530 systemd-networkd[742]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:53:26.232537 systemd-networkd[742]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:53:26.233672 systemd[1]: Reached target network.target - Network. May 13 23:53:26.233701 systemd-networkd[742]: eth0: Link UP May 13 23:53:26.233706 systemd-networkd[742]: eth0: Gained carrier May 13 23:53:26.233719 systemd-networkd[742]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. May 13 23:53:26.238090 systemd-networkd[742]: eth1: Link UP May 13 23:53:26.238097 systemd-networkd[742]: eth1: Gained carrier May 13 23:53:26.238123 systemd-networkd[742]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:53:26.242753 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 13 23:53:26.257649 systemd-networkd[742]: eth0: DHCPv4 address 147.182.251.203/20, gateway 147.182.240.1 acquired from 169.254.169.253 May 13 23:53:26.265743 systemd-networkd[742]: eth1: DHCPv4 address 10.124.0.32/20 acquired from 169.254.169.253 May 13 23:53:26.294606 ignition[751]: Ignition 2.20.0 May 13 23:53:26.294625 ignition[751]: Stage: fetch May 13 23:53:26.294910 ignition[751]: no configs at "/usr/lib/ignition/base.d" May 13 23:53:26.294929 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:26.295087 ignition[751]: parsed url from cmdline: "" May 13 23:53:26.295093 ignition[751]: no config URL provided May 13 23:53:26.295103 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:53:26.295116 ignition[751]: no config at "/usr/lib/ignition/user.ign" May 13 23:53:26.295153 ignition[751]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 May 13 23:53:26.323810 ignition[751]: GET result: OK May 13 23:53:26.324016 ignition[751]: parsing config with SHA512: 23062075812830790c3de6d9b615bc45d203ef8a5c218ea5b01fce259d58baa177abe0baa8ee3ca735a6ddebea525f367977276759508d02a0c8c4cd6f81d8b7 May 13 23:53:26.339950 unknown[751]: fetched base config from "system" May 13 23:53:26.339973 unknown[751]: fetched base config from "system" May 13 23:53:26.340958 ignition[751]: fetch: fetch complete May 13 23:53:26.339985 unknown[751]: fetched user config from "digitalocean" May 13 23:53:26.340969 ignition[751]: fetch: fetch passed May 13 23:53:26.344753 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:53:26.341094 ignition[751]: Ignition finished successfully May 13 23:53:26.348665 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:53:26.387493 ignition[759]: Ignition 2.20.0 May 13 23:53:26.387511 ignition[759]: Stage: kargs May 13 23:53:26.387855 ignition[759]: no configs at "/usr/lib/ignition/base.d" May 13 23:53:26.387885 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:26.390892 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 13 23:53:26.389107 ignition[759]: kargs: kargs passed May 13 23:53:26.389169 ignition[759]: Ignition finished successfully May 13 23:53:26.394236 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:53:26.427284 ignition[766]: Ignition 2.20.0 May 13 23:53:26.427302 ignition[766]: Stage: disks May 13 23:53:26.430325 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 13 23:53:26.430353 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:26.432919 ignition[766]: disks: disks passed May 13 23:53:26.434301 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:53:26.433021 ignition[766]: Ignition finished successfully May 13 23:53:26.442668 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:53:26.443674 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:53:26.445260 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:53:26.446860 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:53:26.448222 systemd[1]: Reached target basic.target - Basic System. May 13 23:53:26.451008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:53:26.482214 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:53:26.485978 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:53:26.492610 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:53:26.622468 kernel: EXT4-fs (vda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:53:26.623856 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:53:26.625210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:53:26.628375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:53:26.633581 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:53:26.637735 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... May 13 23:53:26.647637 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:53:26.648336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:53:26.648374 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:53:26.656559 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:53:26.667458 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (783) May 13 23:53:26.667834 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:53:26.673452 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:53:26.677894 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:53:26.678020 kernel: BTRFS info (device vda6): using free space tree May 13 23:53:26.685568 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:53:26.690231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 23:53:26.768527 coreos-metadata[785]: May 13 23:53:26.767 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:53:26.775030 coreos-metadata[786]: May 13 23:53:26.774 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:53:26.779247 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:53:26.782364 coreos-metadata[785]: May 13 23:53:26.782 INFO Fetch successful May 13 23:53:26.788265 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory May 13 23:53:26.791009 coreos-metadata[786]: May 13 23:53:26.790 INFO Fetch successful May 13 23:53:26.793646 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. May 13 23:53:26.795076 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. May 13 23:53:26.800716 coreos-metadata[786]: May 13 23:53:26.800 INFO wrote hostname ci-4284.0.0-n-db25a1599d to /sysroot/etc/hostname May 13 23:53:26.803450 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:53:26.806546 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:53:26.819463 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:53:26.987325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:53:26.990108 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:53:26.992622 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:53:27.019800 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:53:27.022201 kernel: BTRFS info (device vda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:53:27.046831 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:53:27.061887 ignition[905]: INFO : Ignition 2.20.0 May 13 23:53:27.061887 ignition[905]: INFO : Stage: mount May 13 23:53:27.063793 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:53:27.063793 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:27.063793 ignition[905]: INFO : mount: mount passed May 13 23:53:27.068541 ignition[905]: INFO : Ignition finished successfully May 13 23:53:27.066729 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:53:27.071624 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:53:27.095981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:53:27.126479 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (917) May 13 23:53:27.130762 kernel: BTRFS info (device vda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:53:27.130851 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:53:27.130866 kernel: BTRFS info (device vda6): using free space tree May 13 23:53:27.137470 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:53:27.140014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
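The coreos-metadata fetches above hit the droplet metadata document at http://169.254.169.254/metadata/v1.json, after which the hostname agent writes the returned hostname into the new root at /sysroot/etc/hostname. A rough Python equivalent, assuming the metadata JSON carries a top-level "hostname" field (the real agent is the Flatcar metadata binary, not this sketch):

```python
# Illustrative sketch only -- not the coreos-metadata/afterburn agent.
# Fetches the droplet metadata JSON and writes its hostname into the mounted
# new root, mirroring "wrote hostname ... to /sysroot/etc/hostname" above.
import json
import urllib.request
from pathlib import Path

METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(sysroot: str = "/sysroot", timeout: float = 5.0) -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]  # assumed field name in the v1 metadata JSON
    Path(sysroot, "etc/hostname").write_text(hostname + "\n")
    return hostname
```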
May 13 23:53:27.177732 ignition[934]: INFO : Ignition 2.20.0 May 13 23:53:27.177732 ignition[934]: INFO : Stage: files May 13 23:53:27.179790 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:53:27.179790 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:27.179790 ignition[934]: DEBUG : files: compiled without relabeling support, skipping May 13 23:53:27.183021 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:53:27.183021 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:53:27.185447 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:53:27.186693 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:53:27.188376 unknown[934]: wrote ssh authorized keys file for user: core May 13 23:53:27.189680 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:53:27.192995 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 23:53:27.195353 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 23:53:27.246117 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:53:27.358039 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 23:53:27.358039 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:53:27.361747 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 23:53:27.646740 systemd-networkd[742]: eth1: Gained IPv6LL May 13 23:53:27.843102 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:53:27.903869 systemd-networkd[742]: eth0: Gained IPv6LL May 13 23:53:28.257076 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 23:53:28.257076 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:53:28.260292 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:53:28.260292 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:53:28.260292 ignition[934]: INFO : files: files passed May 13 23:53:28.260292 ignition[934]: INFO : Ignition finished successfully May 13 23:53:28.262624 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:53:28.266596 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:53:28.272692 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:53:28.292743 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:53:28.292944 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:53:28.305054 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:53:28.305054 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:53:28.307830 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:53:28.311158 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:53:28.313128 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:53:28.315276 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:53:28.380850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
May 13 23:53:28.381021 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:53:28.382678 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:53:28.383617 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:53:28.384844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:53:28.387644 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:53:28.420099 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:53:28.423685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:53:28.450965 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:53:28.452794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:53:28.454384 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:53:28.455751 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:53:28.456019 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:53:28.458190 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:53:28.459058 systemd[1]: Stopped target basic.target - Basic System. May 13 23:53:28.460395 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:53:28.461681 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:53:28.463296 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:53:28.465633 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:53:28.467075 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:53:28.468743 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:53:28.470076 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:53:28.471559 systemd[1]: Stopped target swap.target - Swaps. May 13 23:53:28.472800 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:53:28.473051 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:53:28.474609 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:53:28.475588 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:53:28.476835 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:53:28.477174 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:53:28.478199 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:53:28.478432 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:53:28.480508 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:53:28.480740 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:53:28.482432 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:53:28.482631 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:53:28.483798 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:53:28.483996 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
May 13 23:53:28.488755 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:53:28.489734 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:53:28.489990 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:53:28.495359 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:53:28.496556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:53:28.497627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:53:28.498380 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:53:28.500510 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:53:28.508901 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:53:28.510023 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:53:28.533106 ignition[988]: INFO : Ignition 2.20.0 May 13 23:53:28.535048 ignition[988]: INFO : Stage: umount May 13 23:53:28.535048 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:53:28.535048 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" May 13 23:53:28.578052 ignition[988]: INFO : umount: umount passed May 13 23:53:28.578052 ignition[988]: INFO : Ignition finished successfully May 13 23:53:28.555057 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:53:28.556319 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:53:28.556523 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:53:28.558225 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:53:28.558390 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:53:28.577116 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:53:28.577218 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:53:28.578809 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:53:28.578971 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 13 23:53:28.600532 systemd[1]: Stopped target network.target - Network. May 13 23:53:28.601520 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:53:28.601672 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:53:28.602944 systemd[1]: Stopped target paths.target - Path Units. May 13 23:53:28.603573 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:53:28.604376 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:53:28.632505 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:53:28.636798 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:53:28.638575 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:53:28.638666 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:53:28.639922 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:53:28.640001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:53:28.641380 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:53:28.641525 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:53:28.642791 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
May 13 23:53:28.642882 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:53:28.644562 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:53:28.645824 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:53:28.649581 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:53:28.649763 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:53:28.653226 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:53:28.654917 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:53:28.661366 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:53:28.661960 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:53:28.662167 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:53:28.665569 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:53:28.667341 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:53:28.667510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:53:28.668957 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:53:28.669063 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:53:28.672610 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:53:28.673340 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:53:28.674554 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:53:28.676645 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:53:28.676776 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:53:28.678659 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:53:28.678754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:53:28.682250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:53:28.682362 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:53:28.685492 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:53:28.691573 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:53:28.691717 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:53:28.699563 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:53:28.699820 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:53:28.701169 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:53:28.701236 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:53:28.702039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:53:28.702096 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:53:28.703161 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:53:28.703306 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:53:28.705962 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:53:28.706049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
May 13 23:53:28.707364 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:53:28.707451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:53:28.712826 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:53:28.714227 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:53:28.714335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:53:28.716694 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:53:28.716773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:28.719684 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:53:28.719794 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:53:28.720352 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:53:28.720553 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:53:28.748061 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:53:28.748256 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:53:28.749890 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:53:28.752247 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:53:28.778864 systemd[1]: Switching root. May 13 23:53:28.857644 systemd-journald[183]: Journal stopped May 13 23:53:30.497406 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). May 13 23:53:30.498559 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:53:30.498579 kernel: SELinux: policy capability open_perms=1 May 13 23:53:30.498597 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:53:30.498613 kernel: SELinux: policy capability always_check_network=0 May 13 23:53:30.498629 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:53:30.498641 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:53:30.498654 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:53:30.498666 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:53:30.498682 kernel: audit: type=1403 audit(1747180409.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:53:30.498697 systemd[1]: Successfully loaded SELinux policy in 52.561ms. May 13 23:53:30.498721 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.198ms. May 13 23:53:30.498735 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:53:30.498752 systemd[1]: Detected virtualization kvm. May 13 23:53:30.498769 systemd[1]: Detected architecture x86-64. May 13 23:53:30.498781 systemd[1]: Detected first boot. May 13 23:53:30.498795 systemd[1]: Hostname set to <ci-4284.0.0-n-db25a1599d>. May 13 23:53:30.498810 systemd[1]: Initializing machine ID from VM UUID. May 13 23:53:30.498823 zram_generator::config[1033]: No configuration found.
May 13 23:53:30.498836 kernel: Guest personality initialized and is inactive May 13 23:53:30.498849 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 23:53:30.498861 kernel: Initialized host personality May 13 23:53:30.498873 kernel: NET: Registered PF_VSOCK protocol family May 13 23:53:30.498885 systemd[1]: Populated /etc with preset unit settings. May 13 23:53:30.498899 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:53:30.498911 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:53:30.498925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:53:30.498942 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:53:30.498957 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:53:30.498974 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:53:30.498993 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:53:30.499012 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:53:30.499029 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:53:30.499048 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:53:30.499075 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:53:30.499095 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:53:30.499112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:53:30.499131 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:53:30.499153 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:53:30.499190 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:53:30.499211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:53:30.499236 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:53:30.499255 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:53:30.499273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:53:30.499291 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:53:30.499311 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:53:30.499331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:53:30.499352 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:53:30.499371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:53:30.499394 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:53:30.500487 systemd[1]: Reached target slices.target - Slice Units. May 13 23:53:30.500541 systemd[1]: Reached target swap.target - Swaps. May 13 23:53:30.500563 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:53:30.500586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
May 13 23:53:30.500608 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:53:30.500632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:53:30.500651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:53:30.500672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:53:30.500691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:53:30.500723 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:53:30.500743 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:53:30.500764 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:53:30.500784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:30.500798 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:53:30.500821 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:53:30.500841 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:53:30.500862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:53:30.500882 systemd[1]: Reached target machines.target - Containers. May 13 23:53:30.500895 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:53:30.500909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:53:30.500920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:53:30.500933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:53:30.500945 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:53:30.500958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:53:30.500970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:53:30.500994 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:53:30.501014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:53:30.501043 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:53:30.501064 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:53:30.501084 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:53:30.501105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:53:30.501124 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:53:30.501138 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:53:30.501152 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:53:30.501168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:53:30.501184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
May 13 23:53:30.501196 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:53:30.501209 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:53:30.501221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:53:30.501235 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:53:30.501247 systemd[1]: Stopped verity-setup.service. May 13 23:53:30.501263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:30.501275 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:53:30.501288 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:53:30.501303 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:53:30.501316 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:53:30.501329 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:53:30.501341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:53:30.501353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:53:30.501365 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:53:30.501380 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:53:30.501398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:53:30.503081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:53:30.503115 kernel: ACPI: bus type drm_connector registered May 13 23:53:30.503132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:53:30.503145 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:53:30.503175 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:53:30.503198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:53:30.503215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:53:30.503229 kernel: loop: module loaded May 13 23:53:30.503243 kernel: fuse: init (API version 7.39) May 13 23:53:30.503254 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:53:30.503270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:53:30.503284 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:53:30.503296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:53:30.503308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:53:30.503321 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:53:30.503333 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:53:30.503352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:53:30.503366 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:53:30.503378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:53:30.503394 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 13 23:53:30.503406 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:53:30.503505 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:53:30.503518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:53:30.503532 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:53:30.503544 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:53:30.503558 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:53:30.503616 systemd-journald[1110]: Collecting audit messages is disabled. May 13 23:53:30.503650 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:53:30.503667 systemd-journald[1110]: Journal started May 13 23:53:30.503694 systemd-journald[1110]: Runtime Journal (/run/log/journal/f35ee0b3eab2474cbc45bcb9e1713943) is 4.9M, max 39.3M, 34.3M free. May 13 23:53:29.925373 systemd[1]: Queued start job for default target multi-user.target. May 13 23:53:29.939775 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:53:29.940595 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:53:30.507495 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:53:30.514552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:53:30.521794 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:53:30.528551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:53:30.534602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:53:30.543604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:53:30.554796 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:53:30.562964 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:53:30.576592 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:53:30.594118 kernel: loop0: detected capacity change from 0 to 218376 May 13 23:53:30.595055 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:53:30.610625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:53:30.638520 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:53:30.648689 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:53:30.651981 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:53:30.659532 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:53:30.658802 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:53:30.665741 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:53:30.671835 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
May 13 23:53:30.702456 kernel: loop1: detected capacity change from 0 to 109808 May 13 23:53:30.715025 systemd-journald[1110]: Time spent on flushing to /var/log/journal/f35ee0b3eab2474cbc45bcb9e1713943 is 55.929ms for 1008 entries. May 13 23:53:30.715025 systemd-journald[1110]: System Journal (/var/log/journal/f35ee0b3eab2474cbc45bcb9e1713943) is 8M, max 195.6M, 187.6M free. May 13 23:53:30.812249 systemd-journald[1110]: Received client request to flush runtime journal. May 13 23:53:30.812865 kernel: loop2: detected capacity change from 0 to 8 May 13 23:53:30.812898 kernel: loop3: detected capacity change from 0 to 151640 May 13 23:53:30.737366 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:53:30.781674 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:53:30.819034 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:53:30.832531 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:53:30.840714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:53:30.914522 kernel: loop4: detected capacity change from 0 to 218376 May 13 23:53:30.925812 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 13 23:53:30.926441 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 13 23:53:30.943242 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:53:30.957691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:53:30.962468 kernel: loop5: detected capacity change from 0 to 109808 May 13 23:53:30.990328 kernel: loop6: detected capacity change from 0 to 8 May 13 23:53:30.996712 kernel: loop7: detected capacity change from 0 to 151640 May 13 23:53:31.027792 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. May 13 23:53:31.029600 (sd-merge)[1184]: Merged extensions into '/usr'. May 13 23:53:31.039104 systemd[1]: Reload requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:53:31.039130 systemd[1]: Reloading... May 13 23:53:31.232457 zram_generator::config[1216]: No configuration found. May 13 23:53:31.515452 ldconfig[1137]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:53:31.515951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:53:31.613077 systemd[1]: Reloading finished in 573 ms. May 13 23:53:31.636349 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:53:31.637983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:53:31.650792 systemd[1]: Starting ensure-sysext.service... May 13 23:53:31.653897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:53:31.693743 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... May 13 23:53:31.693772 systemd[1]: Reloading... May 13 23:53:31.735115 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
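The sd-merge lines above show systemd-sysext overlaying the extension images staged earlier ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') onto /usr, followed by a service-manager reload. A small sketch of how those images could be enumerated before such a merge, assuming the /etc/extensions symlink layout created by the Ignition files stage earlier in this log (systemd-sysext does the actual merging, not this):

```python
# Illustrative sketch only. Lists *.raw sysext images linked under
# /etc/extensions (e.g. kubernetes.raw -> /opt/extensions/kubernetes/...),
# the layout shown by the Ignition files stage earlier in this log.
from pathlib import Path

def list_sysext_images(root: str = "/") -> dict[str, Path]:
    ext_dir = Path(root, "etc/extensions")
    if not ext_dir.is_dir():
        return {}
    return {p.stem: p.resolve() for p in sorted(ext_dir.glob("*.raw"))}

if __name__ == "__main__":
    for name, target in list_sysext_images().items():
        print(f"{name}: {target}")
```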
May 13 23:53:31.736094 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:53:31.739820 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:53:31.740146 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 13 23:53:31.740265 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 13 23:53:31.751705 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:53:31.751725 systemd-tmpfiles[1257]: Skipping /boot May 13 23:53:31.794013 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:53:31.798948 systemd-tmpfiles[1257]: Skipping /boot May 13 23:53:31.912456 zram_generator::config[1286]: No configuration found. May 13 23:53:32.129452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:53:32.243387 systemd[1]: Reloading finished in 549 ms. May 13 23:53:32.257538 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:53:32.278886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:53:32.293231 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:53:32.299889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:53:32.304377 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:53:32.314103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:53:32.319913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:53:32.330931 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:53:32.338293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.338626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:53:32.345152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:53:32.363573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:53:32.371353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:53:32.373776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:53:32.374045 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:53:32.374230 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.385049 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:53:32.389888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 23:53:32.390180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:53:32.392650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:53:32.392906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:53:32.393063 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.397675 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:53:32.412625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.413435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:53:32.417399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:53:32.420385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:53:32.420645 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:53:32.430840 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:53:32.433601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.445524 systemd[1]: Finished ensure-sysext.service. May 13 23:53:32.446959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:53:32.447394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:53:32.466456 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:53:32.468023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:53:32.469531 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:53:32.480360 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:53:32.483537 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:53:32.486226 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:53:32.495983 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:53:32.497815 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:53:32.498556 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:53:32.499919 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:53:32.500558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 23:53:32.501982 systemd-udevd[1336]: Using default interface naming scheme 'v255'. May 13 23:53:32.503641 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:53:32.514768 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:53:32.544216 augenrules[1372]: No rules May 13 23:53:32.546564 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:53:32.546861 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:53:32.554556 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:53:32.561459 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:53:32.595672 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:53:32.726959 systemd-resolved[1334]: Positive Trust Anchors: May 13 23:53:32.727594 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:53:32.727736 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:53:32.736350 systemd-resolved[1334]: Using system hostname 'ci-4284.0.0-n-db25a1599d'. May 13 23:53:32.738561 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:53:32.747804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:53:32.764330 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. May 13 23:53:32.779245 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... May 13 23:53:32.779990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.780156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:53:32.783598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:53:32.790787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:53:32.804860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:53:32.805796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:53:32.805850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:53:32.805910 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 13 23:53:32.805938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:53:32.840467 kernel: ISO 9660 Extensions: RRIP_1991A May 13 23:53:32.841527 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. May 13 23:53:32.849031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:53:32.849336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:53:32.850373 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:53:32.851621 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:53:32.854182 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:53:32.854483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:53:32.857554 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:53:32.870693 systemd-networkd[1380]: lo: Link UP May 13 23:53:32.870709 systemd-networkd[1380]: lo: Gained carrier May 13 23:53:32.876268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:53:32.876600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:53:32.877829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:53:32.878046 systemd-networkd[1380]: Enumeration completed May 13 23:53:32.878167 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:53:32.879210 systemd[1]: Reached target network.target - Network. May 13 23:53:32.882401 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:53:32.886401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:53:32.896901 systemd-networkd[1380]: eth1: Configuring with /run/systemd/network/10-92:8c:3b:f9:8d:9a.network. May 13 23:53:32.901000 systemd-networkd[1380]: eth1: Link UP May 13 23:53:32.901015 systemd-networkd[1380]: eth1: Gained carrier May 13 23:53:32.909548 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:32.922187 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:53:32.970535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1385) May 13 23:53:32.971720 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:53:33.044697 systemd-networkd[1380]: eth0: Configuring with /run/systemd/network/10-b2:4d:88:68:21:86.network. May 13 23:53:33.048122 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:33.048533 systemd-networkd[1380]: eth0: Link UP May 13 23:53:33.048538 systemd-networkd[1380]: eth0: Gained carrier May 13 23:53:33.050973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:53:33.055610 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:33.057279 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
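After switching root, systemd-networkd reconfigures both NICs from per-MAC units in /run/systemd/network (10-92:8c:3b:f9:8d:9a.network for eth1, 10-b2:4d:88:68:21:86.network for eth0). The log does not show the unit contents; the sketch below writes a minimal DHCP unit of that shape and is an assumption about the layout, not Flatcar's actual generator:

```python
# Illustrative sketch only -- the real 10-<mac>.network files are produced by
# the image's own network unit generator; the [Match]/[Network] contents below
# are an assumed minimal DHCP configuration, not copied from Flatcar.
from pathlib import Path

def write_mac_network_unit(mac: str, rundir: str = "/run/systemd/network") -> Path:
    unit = (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )
    path = Path(rundir, f"10-{mac}.network")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(unit)
    return path

# e.g. write_mac_network_unit("92:8c:3b:f9:8d:9a")
```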
May 13 23:53:33.057468 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:33.072468 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 13 23:53:33.072574 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 23:53:33.100374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:53:33.107488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 13 23:53:33.114450 kernel: ACPI: button: Power Button [PWRF] May 13 23:53:33.137776 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 13 23:53:33.137892 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 13 23:53:33.148499 kernel: Console: switching to colour dummy device 80x25 May 13 23:53:33.149645 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 13 23:53:33.149705 kernel: [drm] features: -context_init May 13 23:53:33.153443 kernel: [drm] number of scanouts: 1 May 13 23:53:33.154449 kernel: [drm] number of cap sets: 0 May 13 23:53:33.156452 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 May 13 23:53:33.164194 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 13 23:53:33.164285 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:53:33.173480 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 13 23:53:33.241046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:53:33.269447 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:53:33.343237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:53:33.343768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:33.354915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:53:33.438890 kernel: EDAC MC: Ver: 3.0.0 May 13 23:53:33.491498 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:53:33.500317 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:53:33.517260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:53:33.534108 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:53:33.573101 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:53:33.576513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:53:33.577576 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:53:33.577803 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:53:33.577918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:53:33.578232 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:53:33.578700 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:53:33.578913 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:53:33.579126 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
May 13 23:53:33.579333 systemd[1]: Reached target paths.target - Path Units. May 13 23:53:33.583771 systemd[1]: Reached target timers.target - Timer Units. May 13 23:53:33.585751 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:53:33.589185 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:53:33.596161 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:53:33.600577 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:53:33.601203 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:53:33.612313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:53:33.613848 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:53:33.620072 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:53:33.626444 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:53:33.628617 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:53:33.629771 systemd[1]: Reached target basic.target - Basic System. May 13 23:53:33.631009 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:53:33.631332 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:53:33.637686 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:53:33.641372 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:53:33.645925 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:53:33.650837 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:53:33.658844 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:53:33.665967 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:53:33.668539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:53:33.690841 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:53:33.697355 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:53:33.702180 jq[1453]: false May 13 23:53:33.704880 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:53:33.716962 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:53:33.731731 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:53:33.738252 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:53:33.739840 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:53:33.745719 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:53:33.760845 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:53:33.766389 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 13 23:53:33.777316 dbus-daemon[1452]: [system] SELinux support is enabled May 13 23:53:33.789000 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:53:33.792499 extend-filesystems[1454]: Found loop4 May 13 23:53:33.792499 extend-filesystems[1454]: Found loop5 May 13 23:53:33.792499 extend-filesystems[1454]: Found loop6 May 13 23:53:33.792499 extend-filesystems[1454]: Found loop7 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda May 13 23:53:33.792499 extend-filesystems[1454]: Found vda1 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda2 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda3 May 13 23:53:33.792499 extend-filesystems[1454]: Found usr May 13 23:53:33.792499 extend-filesystems[1454]: Found vda4 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda6 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda7 May 13 23:53:33.792499 extend-filesystems[1454]: Found vda9 May 13 23:53:33.792499 extend-filesystems[1454]: Checking size of /dev/vda9 May 13 23:53:33.801826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:53:33.804599 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:53:33.815563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:53:33.816013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 23:53:33.845675 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:53:33.845759 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:53:33.869984 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:53:33.870164 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). May 13 23:53:33.870206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:53:33.943195 extend-filesystems[1454]: Resized partition /dev/vda9 May 13 23:53:33.956507 extend-filesystems[1483]: resize2fs 1.47.2 (1-Jan-2025) May 13 23:53:33.966463 jq[1464]: true May 13 23:53:33.974971 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks May 13 23:53:33.986477 tar[1467]: linux-amd64/LICENSE May 13 23:53:33.986477 tar[1467]: linux-amd64/helm May 13 23:53:33.993128 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:53:34.021401 coreos-metadata[1451]: May 13 23:53:33.997 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:53:34.038220 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1383) May 13 23:53:34.047713 coreos-metadata[1451]: May 13 23:53:34.045 INFO Fetch successful May 13 23:53:34.050748 update_engine[1463]: I20250513 23:53:34.048494 1463 main.cc:92] Flatcar Update Engine starting May 13 23:53:34.076130 systemd[1]: Started update-engine.service - Update Engine. 
May 13 23:53:34.080706 update_engine[1463]: I20250513 23:53:34.076329 1463 update_check_scheduler.cc:74] Next update check in 4m59s May 13 23:53:34.116860 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:53:34.126879 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:53:34.128313 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:53:34.158305 jq[1486]: true May 13 23:53:34.232392 systemd-logind[1461]: New seat seat0. May 13 23:53:34.238660 systemd-networkd[1380]: eth1: Gained IPv6LL May 13 23:53:34.239829 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:34.242912 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (Power Button) May 13 23:53:34.242943 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:53:34.247155 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:53:34.265988 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:53:34.269005 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:53:34.277233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:34.291619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:53:34.339891 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 13 23:53:34.366224 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:53:34.389016 extend-filesystems[1483]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:53:34.389016 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 8 May 13 23:53:34.389016 extend-filesystems[1483]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 13 23:53:34.372268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:53:34.410482 extend-filesystems[1454]: Resized filesystem in /dev/vda9 May 13 23:53:34.410482 extend-filesystems[1454]: Found vdb May 13 23:53:34.384318 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:53:34.384615 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:53:34.442463 bash[1516]: Updated "/home/core/.ssh/authorized_keys" May 13 23:53:34.450074 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:53:34.466254 systemd[1]: Starting sshkeys.service... May 13 23:53:34.580145 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 13 23:53:34.586090 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 13 23:53:34.592986 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
May 13 23:53:34.767506 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:53:34.769487 coreos-metadata[1535]: May 13 23:53:34.769 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 May 13 23:53:34.788565 coreos-metadata[1535]: May 13 23:53:34.786 INFO Fetch successful May 13 23:53:34.821024 unknown[1535]: wrote ssh authorized keys file for user: core May 13 23:53:34.880465 containerd[1478]: time="2025-05-13T23:53:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:53:34.882567 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys" May 13 23:53:34.885326 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 23:53:34.887694 systemd[1]: Finished sshkeys.service. May 13 23:53:34.892686 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:53:34.896693 containerd[1478]: time="2025-05-13T23:53:34.896592157Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:53:34.921630 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:53:34.928375 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:53:34.943560 containerd[1478]: time="2025-05-13T23:53:34.942058657Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.821µs" May 13 23:53:34.945596 systemd-networkd[1380]: eth0: Gained IPv6LL May 13 23:53:34.946298 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.953465336Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.953565277Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.953869187Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.953898120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.953945854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.954048834Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:53:34.955376 containerd[1478]: time="2025-05-13T23:53:34.954068508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:53:34.958032 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:53:34.958552 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.958493882Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.958581672Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.958609347Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.958625045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.958834515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.959140899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.959187134Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.959200226Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:53:34.959641 containerd[1478]: time="2025-05-13T23:53:34.959257351Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:53:34.962349 containerd[1478]: time="2025-05-13T23:53:34.961623164Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:53:34.962349 containerd[1478]: time="2025-05-13T23:53:34.961775661Z" level=info msg="metadata content store policy set" policy=shared May 13 23:53:34.966702 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 13 23:53:34.985617 containerd[1478]: time="2025-05-13T23:53:34.985500458Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:53:34.985991 containerd[1478]: time="2025-05-13T23:53:34.985890451Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:53:34.985991 containerd[1478]: time="2025-05-13T23:53:34.985933158Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:53:34.985991 containerd[1478]: time="2025-05-13T23:53:34.985955743Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.985976771Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986387775Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986452835Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986485433Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986508044Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986526844Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986544361Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:53:34.986665 containerd[1478]: time="2025-05-13T23:53:34.986565564Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986848396Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986895480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986921472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986941667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986960977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.986979826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.987000373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:53:34.987021 containerd[1478]: time="2025-05-13T23:53:34.987020205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 
13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987073453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987102026Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987144962Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987291958Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987319430Z" level=info msg="Start snapshots syncer" May 13 23:53:34.988508 containerd[1478]: time="2025-05-13T23:53:34.987376100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:53:34.988747 containerd[1478]: time="2025-05-13T23:53:34.987750040Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:53:34.988747 containerd[1478]: time="2025-05-13T23:53:34.987830891Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.987939234Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988114783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988139675Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988152500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988164330Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988181240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988192834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988205137Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988237080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988250172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988263892Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988299571Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988315837Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:53:34.989939 containerd[1478]: time="2025-05-13T23:53:34.988326613Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993821216Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993886429Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993915563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993938515Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993979603Z" level=info msg="runtime interface created" May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.993989527Z" level=info msg="created NRI interface" May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.994005731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.994062806Z" level=info msg="Connect containerd service" May 13 23:53:34.994651 containerd[1478]: time="2025-05-13T23:53:34.994137718Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 
13 23:53:35.001484 containerd[1478]: time="2025-05-13T23:53:34.995390880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:53:35.024784 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:53:35.030782 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:53:35.036882 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:53:35.038650 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:53:35.320349 containerd[1478]: time="2025-05-13T23:53:35.320297428Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:53:35.320793 containerd[1478]: time="2025-05-13T23:53:35.320594005Z" level=info msg="Start subscribing containerd event" May 13 23:53:35.320890 containerd[1478]: time="2025-05-13T23:53:35.320837006Z" level=info msg="Start recovering state" May 13 23:53:35.321909 containerd[1478]: time="2025-05-13T23:53:35.320776774Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.321092531Z" level=info msg="Start event monitor" May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322140427Z" level=info msg="Start cni network conf syncer for default" May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322155715Z" level=info msg="Start streaming server" May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322171157Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322203163Z" level=info msg="runtime interface starting up..." May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322215294Z" level=info msg="starting plugins..." May 13 23:53:35.322438 containerd[1478]: time="2025-05-13T23:53:35.322250860Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:53:35.322892 containerd[1478]: time="2025-05-13T23:53:35.322860756Z" level=info msg="containerd successfully booted in 0.444469s" May 13 23:53:35.323031 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:53:35.568502 tar[1467]: linux-amd64/README.md May 13 23:53:35.593231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 23:53:36.311785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:36.312970 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:53:36.316730 systemd[1]: Startup finished in 1.326s (kernel) + 6.259s (initrd) + 7.359s (userspace) = 14.945s. May 13 23:53:36.326277 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:53:36.772194 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:53:36.774487 systemd[1]: Started sshd@0-147.182.251.203:22-147.75.109.163:56242.service - OpenSSH per-connection server daemon (147.75.109.163:56242). 
May 13 23:53:36.877642 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 56242 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:53:36.880735 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:53:36.894905 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:53:36.899001 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:53:36.913951 systemd-logind[1461]: New session 1 of user core. May 13 23:53:36.955005 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:53:36.961737 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:53:36.980783 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:53:36.985308 systemd-logind[1461]: New session c1 of user core. May 13 23:53:37.182630 kubelet[1586]: E0513 23:53:37.181542 1586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:53:37.188642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:53:37.189096 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:53:37.189754 systemd[1]: kubelet.service: Consumed 1.614s CPU time, 252.9M memory peak. May 13 23:53:37.217604 systemd[1600]: Queued start job for default target default.target. May 13 23:53:37.226492 systemd[1600]: Created slice app.slice - User Application Slice. May 13 23:53:37.226556 systemd[1600]: Reached target paths.target - Paths. May 13 23:53:37.226634 systemd[1600]: Reached target timers.target - Timers. May 13 23:53:37.229054 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:53:37.261287 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:53:37.261921 systemd[1600]: Reached target sockets.target - Sockets. May 13 23:53:37.262194 systemd[1600]: Reached target basic.target - Basic System. May 13 23:53:37.262431 systemd[1600]: Reached target default.target - Main User Target. May 13 23:53:37.262640 systemd[1600]: Startup finished in 264ms. May 13 23:53:37.262649 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:53:37.273908 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:53:37.356748 systemd[1]: Started sshd@1-147.182.251.203:22-147.75.109.163:56248.service - OpenSSH per-connection server daemon (147.75.109.163:56248). May 13 23:53:37.432274 sshd[1614]: Accepted publickey for core from 147.75.109.163 port 56248 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:53:37.434897 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:53:37.443443 systemd-logind[1461]: New session 2 of user core. May 13 23:53:37.452850 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:53:37.525213 sshd[1616]: Connection closed by 147.75.109.163 port 56248 May 13 23:53:37.525059 sshd-session[1614]: pam_unix(sshd:session): session closed for user core May 13 23:53:37.537356 systemd[1]: sshd@1-147.182.251.203:22-147.75.109.163:56248.service: Deactivated successfully. 
May 13 23:53:37.540383 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:53:37.542669 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. May 13 23:53:37.545262 systemd[1]: Started sshd@2-147.182.251.203:22-147.75.109.163:56250.service - OpenSSH per-connection server daemon (147.75.109.163:56250). May 13 23:53:37.547740 systemd-logind[1461]: Removed session 2. May 13 23:53:37.614312 sshd[1621]: Accepted publickey for core from 147.75.109.163 port 56250 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:53:37.617009 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:53:37.626026 systemd-logind[1461]: New session 3 of user core. May 13 23:53:37.640795 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:53:37.703643 sshd[1624]: Connection closed by 147.75.109.163 port 56250 May 13 23:53:37.702310 sshd-session[1621]: pam_unix(sshd:session): session closed for user core May 13 23:53:37.719292 systemd[1]: sshd@2-147.182.251.203:22-147.75.109.163:56250.service: Deactivated successfully. May 13 23:53:37.724206 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:53:37.731878 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. May 13 23:53:37.736032 systemd[1]: Started sshd@3-147.182.251.203:22-147.75.109.163:56262.service - OpenSSH per-connection server daemon (147.75.109.163:56262). May 13 23:53:37.739380 systemd-logind[1461]: Removed session 3. May 13 23:53:37.805572 sshd[1629]: Accepted publickey for core from 147.75.109.163 port 56262 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:53:37.808703 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:53:37.817745 systemd-logind[1461]: New session 4 of user core. May 13 23:53:37.824793 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:53:37.889868 sshd[1632]: Connection closed by 147.75.109.163 port 56262 May 13 23:53:37.890866 sshd-session[1629]: pam_unix(sshd:session): session closed for user core May 13 23:53:37.906981 systemd[1]: sshd@3-147.182.251.203:22-147.75.109.163:56262.service: Deactivated successfully. May 13 23:53:37.910002 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:53:37.913249 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. May 13 23:53:37.915852 systemd[1]: Started sshd@4-147.182.251.203:22-147.75.109.163:47424.service - OpenSSH per-connection server daemon (147.75.109.163:47424). May 13 23:53:37.917992 systemd-logind[1461]: Removed session 4. May 13 23:53:37.992571 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 47424 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:53:37.994787 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:53:38.002515 systemd-logind[1461]: New session 5 of user core. May 13 23:53:38.013792 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:53:38.086452 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:53:38.086800 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:53:38.663636 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:53:38.679263 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:53:39.281985 dockerd[1658]: time="2025-05-13T23:53:39.278940241Z" level=info msg="Starting up" May 13 23:53:39.319992 dockerd[1658]: time="2025-05-13T23:53:39.315964403Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:53:42.177170 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2759548161 wd_nsec: 2759548167 May 13 23:53:42.358048 dockerd[1658]: time="2025-05-13T23:53:42.355499909Z" level=info msg="Loading containers: start." May 13 23:53:42.636442 kernel: Initializing XFRM netlink socket May 13 23:53:42.639776 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:42.658394 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:42.757545 systemd-networkd[1380]: docker0: Link UP May 13 23:53:42.757959 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. May 13 23:53:42.841757 dockerd[1658]: time="2025-05-13T23:53:42.841682583Z" level=info msg="Loading containers: done." May 13 23:53:42.874706 dockerd[1658]: time="2025-05-13T23:53:42.874519548Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:53:42.874706 dockerd[1658]: time="2025-05-13T23:53:42.874674863Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:53:42.874998 dockerd[1658]: time="2025-05-13T23:53:42.874840189Z" level=info msg="Daemon has completed initialization" May 13 23:53:42.935225 dockerd[1658]: time="2025-05-13T23:53:42.934735990Z" level=info msg="API listen on /run/docker.sock" May 13 23:53:42.934921 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 23:53:43.920153 containerd[1478]: time="2025-05-13T23:53:43.919299710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 23:53:44.583407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485612303.mount: Deactivated successfully. 
May 13 23:53:46.136046 containerd[1478]: time="2025-05-13T23:53:46.135970380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:46.137749 containerd[1478]: time="2025-05-13T23:53:46.137680592Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:46.137885 containerd[1478]: time="2025-05-13T23:53:46.137793409Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 23:53:46.141226 containerd[1478]: time="2025-05-13T23:53:46.141168795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:46.142866 containerd[1478]: time="2025-05-13T23:53:46.142814741Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.22335373s" May 13 23:53:46.143089 containerd[1478]: time="2025-05-13T23:53:46.143063510Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 23:53:46.144368 containerd[1478]: time="2025-05-13T23:53:46.144281774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 23:53:47.221844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:53:47.227320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:47.450663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:47.464088 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:53:47.558607 kubelet[1931]: E0513 23:53:47.558141 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:53:47.566314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:53:47.566563 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:53:47.566992 systemd[1]: kubelet.service: Consumed 251ms CPU time, 106M memory peak. 
May 13 23:53:47.950988 containerd[1478]: time="2025-05-13T23:53:47.950834780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:47.953488 containerd[1478]: time="2025-05-13T23:53:47.953431963Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 23:53:47.954627 containerd[1478]: time="2025-05-13T23:53:47.954581083Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:47.957679 containerd[1478]: time="2025-05-13T23:53:47.957629553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:47.958725 containerd[1478]: time="2025-05-13T23:53:47.958690452Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.814357496s" May 13 23:53:47.958850 containerd[1478]: time="2025-05-13T23:53:47.958836221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 23:53:47.959663 containerd[1478]: time="2025-05-13T23:53:47.959642650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 23:53:49.230636 containerd[1478]: time="2025-05-13T23:53:49.230545469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:49.232151 containerd[1478]: time="2025-05-13T23:53:49.232086221Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 23:53:49.233971 containerd[1478]: time="2025-05-13T23:53:49.233148857Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:49.236814 containerd[1478]: time="2025-05-13T23:53:49.236769565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:49.237843 containerd[1478]: time="2025-05-13T23:53:49.237796422Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.278054629s" May 13 23:53:49.237948 containerd[1478]: time="2025-05-13T23:53:49.237850361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 23:53:49.238574 
containerd[1478]: time="2025-05-13T23:53:49.238533625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 23:53:50.532639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823372220.mount: Deactivated successfully. May 13 23:53:51.293860 containerd[1478]: time="2025-05-13T23:53:51.292837805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:51.293860 containerd[1478]: time="2025-05-13T23:53:51.293769932Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 23:53:51.294636 containerd[1478]: time="2025-05-13T23:53:51.294604208Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:51.297035 containerd[1478]: time="2025-05-13T23:53:51.296991166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:51.297784 containerd[1478]: time="2025-05-13T23:53:51.297753797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.05906928s" May 13 23:53:51.297934 containerd[1478]: time="2025-05-13T23:53:51.297916177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 23:53:51.298476 containerd[1478]: time="2025-05-13T23:53:51.298447578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 23:53:51.300285 systemd-resolved[1334]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. May 13 23:53:51.812840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285568203.mount: Deactivated successfully. 
May 13 23:53:52.891271 containerd[1478]: time="2025-05-13T23:53:52.889829024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:52.891271 containerd[1478]: time="2025-05-13T23:53:52.891178403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 23:53:52.892150 containerd[1478]: time="2025-05-13T23:53:52.892104798Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:52.895374 containerd[1478]: time="2025-05-13T23:53:52.895323480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:52.896384 containerd[1478]: time="2025-05-13T23:53:52.896342016Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.597855712s" May 13 23:53:52.896492 containerd[1478]: time="2025-05-13T23:53:52.896389271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 23:53:52.897773 containerd[1478]: time="2025-05-13T23:53:52.897717404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 23:53:53.385113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84374148.mount: Deactivated successfully. 
May 13 23:53:53.392646 containerd[1478]: time="2025-05-13T23:53:53.392578688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:53.393678 containerd[1478]: time="2025-05-13T23:53:53.393604925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 23:53:53.394455 containerd[1478]: time="2025-05-13T23:53:53.394258716Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:53.397155 containerd[1478]: time="2025-05-13T23:53:53.397090969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:53:53.398155 containerd[1478]: time="2025-05-13T23:53:53.398024203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.254262ms" May 13 23:53:53.398155 containerd[1478]: time="2025-05-13T23:53:53.398058496Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 23:53:53.398779 containerd[1478]: time="2025-05-13T23:53:53.398722551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 23:53:53.947201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70326253.mount: Deactivated successfully. May 13 23:53:54.399666 systemd-resolved[1334]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 13 23:53:55.812834 systemd[1]: Started sshd@5-147.182.251.203:22-167.94.138.207:46790.service - OpenSSH per-connection server daemon (167.94.138.207:46790). 
May 13 23:53:56.393707 containerd[1478]: time="2025-05-13T23:53:56.393632573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:56.397535 containerd[1478]: time="2025-05-13T23:53:56.397384737Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 23:53:56.400469 containerd[1478]: time="2025-05-13T23:53:56.399514978Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:56.404198 containerd[1478]: time="2025-05-13T23:53:56.404091002Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.005338083s" May 13 23:53:56.404198 containerd[1478]: time="2025-05-13T23:53:56.404157918Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 23:53:56.404502 containerd[1478]: time="2025-05-13T23:53:56.404380681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:53:57.722522 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:53:57.726545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:57.916688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:57.927558 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:53:57.992031 kubelet[2093]: E0513 23:53:57.991627 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:53:57.996552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:53:57.996921 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:53:57.997747 systemd[1]: kubelet.service: Consumed 206ms CPU time, 105.5M memory peak. May 13 23:53:59.517913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:53:59.518394 systemd[1]: kubelet.service: Consumed 206ms CPU time, 105.5M memory peak. May 13 23:53:59.522469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:53:59.563232 systemd[1]: Reload requested from client PID 2107 ('systemctl') (unit session-5.scope)... May 13 23:53:59.563269 systemd[1]: Reloading... May 13 23:53:59.741463 zram_generator::config[2156]: No configuration found. May 13 23:53:59.888235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:54:00.059228 systemd[1]: Reloading finished in 495 ms. 
May 13 23:54:00.162071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:54:00.169068 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:54:00.169459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:54:00.169546 systemd[1]: kubelet.service: Consumed 143ms CPU time, 91.8M memory peak. May 13 23:54:00.173531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:54:00.361405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:54:00.372953 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:54:00.442575 kubelet[2209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:54:00.442575 kubelet[2209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:54:00.442575 kubelet[2209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:54:00.442575 kubelet[2209]: I0513 23:54:00.441871 2209 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:54:01.045875 kubelet[2209]: I0513 23:54:01.045803 2209 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:54:01.045875 kubelet[2209]: I0513 23:54:01.045861 2209 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:54:01.046221 kubelet[2209]: I0513 23:54:01.046180 2209 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:54:01.080020 kubelet[2209]: I0513 23:54:01.078757 2209 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:54:01.080020 kubelet[2209]: E0513 23:54:01.079937 2209 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.251.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:01.092124 kubelet[2209]: I0513 23:54:01.092069 2209 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:54:01.098760 kubelet[2209]: I0513 23:54:01.098532 2209 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:54:01.101455 kubelet[2209]: I0513 23:54:01.100892 2209 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:54:01.101455 kubelet[2209]: I0513 23:54:01.101008 2209 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-db25a1599d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:54:01.101455 kubelet[2209]: I0513 23:54:01.101278 2209 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:54:01.101455 kubelet[2209]: I0513 23:54:01.101291 2209 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:54:01.104048 kubelet[2209]: I0513 23:54:01.103990 2209 state_mem.go:36] "Initialized new in-memory state store" May 13 23:54:01.112981 kubelet[2209]: I0513 23:54:01.112869 2209 kubelet.go:446] "Attempting to sync node with API server" May 13 23:54:01.112981 kubelet[2209]: I0513 23:54:01.112936 2209 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:54:01.115221 kubelet[2209]: I0513 23:54:01.114627 2209 kubelet.go:352] "Adding apiserver pod source" May 13 23:54:01.117407 kubelet[2209]: I0513 23:54:01.116780 2209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:54:01.125840 kubelet[2209]: W0513 23:54:01.124677 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.251.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-db25a1599d&limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:01.125840 kubelet[2209]: E0513 23:54:01.124760 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.251.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-db25a1599d&limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 
23:54:01.125840 kubelet[2209]: W0513 23:54:01.125348 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.251.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:01.125840 kubelet[2209]: E0513 23:54:01.125394 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.251.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:01.126478 kubelet[2209]: I0513 23:54:01.126455 2209 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:54:01.131854 kubelet[2209]: I0513 23:54:01.131788 2209 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:54:01.133021 kubelet[2209]: W0513 23:54:01.132970 2209 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:54:01.142792 kubelet[2209]: I0513 23:54:01.142742 2209 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:54:01.143168 kubelet[2209]: I0513 23:54:01.142828 2209 server.go:1287] "Started kubelet" May 13 23:54:01.148748 kubelet[2209]: I0513 23:54:01.147881 2209 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:54:01.161439 kubelet[2209]: I0513 23:54:01.160469 2209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:54:01.161439 kubelet[2209]: I0513 23:54:01.161224 2209 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:54:01.162458 kubelet[2209]: I0513 23:54:01.162092 2209 server.go:490] "Adding debug handlers to kubelet server" May 13 23:54:01.162676 kubelet[2209]: I0513 23:54:01.160555 2209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:54:01.163803 kubelet[2209]: I0513 23:54:01.163762 2209 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:54:01.171571 kubelet[2209]: E0513 23:54:01.168084 2209 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.251.203:6443/api/v1/namespaces/default/events\": dial tcp 147.182.251.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-db25a1599d.183f3b5ab0d37692 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-db25a1599d,UID:ci-4284.0.0-n-db25a1599d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-db25a1599d,},FirstTimestamp:2025-05-13 23:54:01.142785682 +0000 UTC m=+0.762822217,LastTimestamp:2025-05-13 23:54:01.142785682 +0000 UTC m=+0.762822217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-db25a1599d,}" May 13 23:54:01.175980 kubelet[2209]: E0513 23:54:01.175933 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node 
\"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:01.177488 kubelet[2209]: W0513 23:54:01.177337 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.251.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:01.177488 kubelet[2209]: E0513 23:54:01.177451 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.251.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:01.177488 kubelet[2209]: I0513 23:54:01.176134 2209 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:54:01.177885 kubelet[2209]: I0513 23:54:01.177856 2209 factory.go:221] Registration of the systemd container factory successfully May 13 23:54:01.178025 kubelet[2209]: I0513 23:54:01.178003 2209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:54:01.180800 kubelet[2209]: E0513 23:54:01.180724 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-db25a1599d?timeout=10s\": dial tcp 147.182.251.203:6443: connect: connection refused" interval="200ms" May 13 23:54:01.180991 kubelet[2209]: I0513 23:54:01.180958 2209 factory.go:221] Registration of the containerd container factory successfully May 13 23:54:01.182463 kubelet[2209]: I0513 23:54:01.176110 2209 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:54:01.186661 kubelet[2209]: I0513 23:54:01.186620 2209 reconciler.go:26] "Reconciler: start to sync state" May 13 23:54:01.200346 kubelet[2209]: I0513 23:54:01.200065 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:54:01.203295 kubelet[2209]: I0513 23:54:01.202575 2209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:54:01.203295 kubelet[2209]: I0513 23:54:01.202646 2209 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:54:01.203295 kubelet[2209]: I0513 23:54:01.202703 2209 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 23:54:01.203295 kubelet[2209]: I0513 23:54:01.202716 2209 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:54:01.203295 kubelet[2209]: E0513 23:54:01.202820 2209 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:54:01.213253 kubelet[2209]: W0513 23:54:01.213188 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.251.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:01.213615 kubelet[2209]: E0513 23:54:01.213585 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.251.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:01.215590 kubelet[2209]: E0513 23:54:01.213181 2209 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:54:01.219074 kubelet[2209]: I0513 23:54:01.219019 2209 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:54:01.219074 kubelet[2209]: I0513 23:54:01.219054 2209 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:54:01.219074 kubelet[2209]: I0513 23:54:01.219083 2209 state_mem.go:36] "Initialized new in-memory state store" May 13 23:54:01.223037 kubelet[2209]: I0513 23:54:01.222890 2209 policy_none.go:49] "None policy: Start" May 13 23:54:01.223256 kubelet[2209]: I0513 23:54:01.223054 2209 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:54:01.223256 kubelet[2209]: I0513 23:54:01.223084 2209 state_mem.go:35] "Initializing new in-memory state store" May 13 23:54:01.234715 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:54:01.250421 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:54:01.257704 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:54:01.268274 kubelet[2209]: I0513 23:54:01.268221 2209 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:54:01.268555 kubelet[2209]: I0513 23:54:01.268532 2209 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:54:01.268646 kubelet[2209]: I0513 23:54:01.268571 2209 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:54:01.269368 kubelet[2209]: I0513 23:54:01.269341 2209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:54:01.271978 kubelet[2209]: E0513 23:54:01.271843 2209 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:54:01.271978 kubelet[2209]: E0513 23:54:01.271938 2209 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:01.320038 systemd[1]: Created slice kubepods-burstable-podc64626f4862bb3b3e59f456be2f327a4.slice - libcontainer container kubepods-burstable-podc64626f4862bb3b3e59f456be2f327a4.slice. May 13 23:54:01.332020 kubelet[2209]: E0513 23:54:01.331051 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.338246 systemd[1]: Created slice kubepods-burstable-pod946a43551ab4fff0fae9d9325700f0ed.slice - libcontainer container kubepods-burstable-pod946a43551ab4fff0fae9d9325700f0ed.slice. May 13 23:54:01.350747 kubelet[2209]: E0513 23:54:01.350229 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.354854 systemd[1]: Created slice kubepods-burstable-poda98fc19953546bf0a31f2d7535542def.slice - libcontainer container kubepods-burstable-poda98fc19953546bf0a31f2d7535542def.slice. May 13 23:54:01.361942 kubelet[2209]: E0513 23:54:01.361577 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.369873 kubelet[2209]: I0513 23:54:01.369792 2209 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.370382 kubelet[2209]: E0513 23:54:01.370332 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.182.251.203:6443/api/v1/nodes\": dial tcp 147.182.251.203:6443: connect: connection refused" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.386040 kubelet[2209]: E0513 23:54:01.385621 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-db25a1599d?timeout=10s\": dial tcp 147.182.251.203:6443: connect: connection refused" interval="400ms" May 13 23:54:01.387896 kubelet[2209]: I0513 23:54:01.387727 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.387896 kubelet[2209]: I0513 23:54:01.387783 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.387896 kubelet[2209]: I0513 23:54:01.387810 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: 
\"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.387896 kubelet[2209]: I0513 23:54:01.387827 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.387896 kubelet[2209]: I0513 23:54:01.387844 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/946a43551ab4fff0fae9d9325700f0ed-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-db25a1599d\" (UID: \"946a43551ab4fff0fae9d9325700f0ed\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.388177 kubelet[2209]: I0513 23:54:01.387909 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.388177 kubelet[2209]: I0513 23:54:01.387999 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.388177 kubelet[2209]: I0513 23:54:01.388022 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.388177 kubelet[2209]: I0513 23:54:01.388042 2209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:01.572813 kubelet[2209]: I0513 23:54:01.572397 2209 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.574074 kubelet[2209]: E0513 23:54:01.573204 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.182.251.203:6443/api/v1/nodes\": dial tcp 147.182.251.203:6443: connect: connection refused" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.645527 kubelet[2209]: E0513 23:54:01.644718 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:01.646282 containerd[1478]: time="2025-05-13T23:54:01.646225552Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-db25a1599d,Uid:c64626f4862bb3b3e59f456be2f327a4,Namespace:kube-system,Attempt:0,}" May 13 23:54:01.652861 kubelet[2209]: E0513 23:54:01.651869 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:01.658523 containerd[1478]: time="2025-05-13T23:54:01.658439918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-db25a1599d,Uid:946a43551ab4fff0fae9d9325700f0ed,Namespace:kube-system,Attempt:0,}" May 13 23:54:01.663702 kubelet[2209]: E0513 23:54:01.662660 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:01.663906 containerd[1478]: time="2025-05-13T23:54:01.663468977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-db25a1599d,Uid:a98fc19953546bf0a31f2d7535542def,Namespace:kube-system,Attempt:0,}" May 13 23:54:01.786453 kubelet[2209]: E0513 23:54:01.786310 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-db25a1599d?timeout=10s\": dial tcp 147.182.251.203:6443: connect: connection refused" interval="800ms" May 13 23:54:01.819307 containerd[1478]: time="2025-05-13T23:54:01.819030546Z" level=info msg="connecting to shim e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da" address="unix:///run/containerd/s/7318ce8a1bac093ad9b4574907d8bbe859965a3189552eceddf3930a629df934" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:01.822009 containerd[1478]: time="2025-05-13T23:54:01.821305320Z" level=info msg="connecting to shim cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac" address="unix:///run/containerd/s/ebae65cab5c0cd3eddfcb88fb25c3ea47f94c3426bd63548fac4e7a24c8d228b" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:01.829149 containerd[1478]: time="2025-05-13T23:54:01.827765018Z" level=info msg="connecting to shim f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc" address="unix:///run/containerd/s/3a3516a305690d1a7bc18f4d42d23cc1db067d8f001b40e186b2d2641db4a817" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:01.957813 systemd[1]: Started cri-containerd-e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da.scope - libcontainer container e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da. May 13 23:54:01.967692 systemd[1]: Started cri-containerd-cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac.scope - libcontainer container cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac. May 13 23:54:01.974135 systemd[1]: Started cri-containerd-f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc.scope - libcontainer container f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc. 
May 13 23:54:01.980241 kubelet[2209]: I0513 23:54:01.980202 2209 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:01.982659 kubelet[2209]: E0513 23:54:01.982588 2209 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.182.251.203:6443/api/v1/nodes\": dial tcp 147.182.251.203:6443: connect: connection refused" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:02.058108 kubelet[2209]: W0513 23:54:02.056301 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.251.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-db25a1599d&limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:02.058108 kubelet[2209]: E0513 23:54:02.057846 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.251.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-db25a1599d&limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:02.094808 containerd[1478]: time="2025-05-13T23:54:02.094479346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-db25a1599d,Uid:a98fc19953546bf0a31f2d7535542def,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da\"" May 13 23:54:02.100214 kubelet[2209]: E0513 23:54:02.100155 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:02.106041 containerd[1478]: time="2025-05-13T23:54:02.105974758Z" level=info msg="CreateContainer within sandbox \"e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:54:02.127426 containerd[1478]: time="2025-05-13T23:54:02.126781732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-db25a1599d,Uid:c64626f4862bb3b3e59f456be2f327a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc\"" May 13 23:54:02.128887 kubelet[2209]: E0513 23:54:02.128464 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:02.131001 kubelet[2209]: W0513 23:54:02.130071 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.251.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:02.131001 kubelet[2209]: E0513 23:54:02.130438 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.251.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:02.132390 containerd[1478]: time="2025-05-13T23:54:02.132284257Z" level=info msg="CreateContainer within sandbox \"f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc\" 
for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:54:02.142978 containerd[1478]: time="2025-05-13T23:54:02.142550672Z" level=info msg="Container e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:02.149894 containerd[1478]: time="2025-05-13T23:54:02.149836127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-db25a1599d,Uid:946a43551ab4fff0fae9d9325700f0ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac\"" May 13 23:54:02.151560 kubelet[2209]: E0513 23:54:02.151466 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:02.155652 containerd[1478]: time="2025-05-13T23:54:02.155010594Z" level=info msg="CreateContainer within sandbox \"cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:54:02.156676 containerd[1478]: time="2025-05-13T23:54:02.156641265Z" level=info msg="Container 7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:02.162485 containerd[1478]: time="2025-05-13T23:54:02.162262586Z" level=info msg="CreateContainer within sandbox \"e1abf0f66a9be646d363f5220221f8a6be072398e143152cbc82f6a07023e9da\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773\"" May 13 23:54:02.164023 containerd[1478]: time="2025-05-13T23:54:02.163626116Z" level=info msg="StartContainer for \"e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773\"" May 13 23:54:02.170399 containerd[1478]: time="2025-05-13T23:54:02.170313824Z" level=info msg="connecting to shim e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773" address="unix:///run/containerd/s/7318ce8a1bac093ad9b4574907d8bbe859965a3189552eceddf3930a629df934" protocol=ttrpc version=3 May 13 23:54:02.175090 containerd[1478]: time="2025-05-13T23:54:02.175028137Z" level=info msg="CreateContainer within sandbox \"f29367772496d27642200fdff1ce43c845d40dcf0edc4f564cc7424a813dc9dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39\"" May 13 23:54:02.179462 containerd[1478]: time="2025-05-13T23:54:02.178206989Z" level=info msg="StartContainer for \"7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39\"" May 13 23:54:02.180792 containerd[1478]: time="2025-05-13T23:54:02.180713504Z" level=info msg="Container 08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:02.183458 containerd[1478]: time="2025-05-13T23:54:02.183198800Z" level=info msg="connecting to shim 7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39" address="unix:///run/containerd/s/3a3516a305690d1a7bc18f4d42d23cc1db067d8f001b40e186b2d2641db4a817" protocol=ttrpc version=3 May 13 23:54:02.193804 containerd[1478]: time="2025-05-13T23:54:02.193656566Z" level=info msg="CreateContainer within sandbox \"cc0043dce267c4f2fc871da766ae9736a31232635b86c835e0b9d3cd7c9674ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067\"" May 13 23:54:02.198368 containerd[1478]: time="2025-05-13T23:54:02.196778717Z" level=info msg="StartContainer for \"08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067\"" May 13 23:54:02.198368 containerd[1478]: time="2025-05-13T23:54:02.198254147Z" level=info msg="connecting to shim 08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067" address="unix:///run/containerd/s/ebae65cab5c0cd3eddfcb88fb25c3ea47f94c3426bd63548fac4e7a24c8d228b" protocol=ttrpc version=3 May 13 23:54:02.231738 systemd[1]: Started cri-containerd-e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773.scope - libcontainer container e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773. May 13 23:54:02.236640 systemd[1]: Started cri-containerd-7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39.scope - libcontainer container 7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39. May 13 23:54:02.268783 systemd[1]: Started cri-containerd-08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067.scope - libcontainer container 08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067. May 13 23:54:02.274504 kubelet[2209]: W0513 23:54:02.274405 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.251.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:02.274504 kubelet[2209]: E0513 23:54:02.274518 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.251.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:02.361651 containerd[1478]: time="2025-05-13T23:54:02.360902529Z" level=info msg="StartContainer for \"e17135a3fb918310ec3dac1b4f98a63303f7e95acb64851ed393866134226773\" returns successfully" May 13 23:54:02.411679 containerd[1478]: time="2025-05-13T23:54:02.411331370Z" level=info msg="StartContainer for \"08af793afa85dd3eec45dbeba806e7194078d8f4ee5443619e6cd6b39a17b067\" returns successfully" May 13 23:54:02.428469 containerd[1478]: time="2025-05-13T23:54:02.423990761Z" level=info msg="StartContainer for \"7496ed21b91aee1679a9660dc4261533cfe52047e94cdb2d759e49b8dcafea39\" returns successfully" May 13 23:54:02.486507 kubelet[2209]: W0513 23:54:02.486344 2209 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.251.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.251.203:6443: connect: connection refused May 13 23:54:02.486507 kubelet[2209]: E0513 23:54:02.486472 2209 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.251.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.251.203:6443: connect: connection refused" logger="UnhandledError" May 13 23:54:02.587482 kubelet[2209]: E0513 23:54:02.587376 2209 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.251.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-db25a1599d?timeout=10s\": 
dial tcp 147.182.251.203:6443: connect: connection refused" interval="1.6s" May 13 23:54:02.784730 kubelet[2209]: I0513 23:54:02.784673 2209 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:03.264080 kubelet[2209]: E0513 23:54:03.263974 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:03.265848 kubelet[2209]: E0513 23:54:03.265731 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:03.267969 kubelet[2209]: E0513 23:54:03.267926 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:03.268156 kubelet[2209]: E0513 23:54:03.268133 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:03.275445 kubelet[2209]: E0513 23:54:03.275008 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:03.275445 kubelet[2209]: E0513 23:54:03.275294 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:04.277617 kubelet[2209]: E0513 23:54:04.277573 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:04.279056 kubelet[2209]: E0513 23:54:04.278773 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:04.279056 kubelet[2209]: E0513 23:54:04.278180 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:04.279056 kubelet[2209]: E0513 23:54:04.278774 2209 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284.0.0-n-db25a1599d\" not found" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:04.279056 kubelet[2209]: E0513 23:54:04.278974 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:04.279056 kubelet[2209]: E0513 23:54:04.278992 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:04.785493 kubelet[2209]: I0513 23:54:04.783843 2209 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:04.785493 kubelet[2209]: E0513 23:54:04.783911 2209 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4284.0.0-n-db25a1599d\": node 
\"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:04.823873 kubelet[2209]: E0513 23:54:04.821495 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:04.921864 kubelet[2209]: E0513 23:54:04.921813 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:05.023145 kubelet[2209]: E0513 23:54:05.022659 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:05.123248 kubelet[2209]: E0513 23:54:05.122861 2209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:05.181065 kubelet[2209]: I0513 23:54:05.180952 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.190563 kubelet[2209]: E0513 23:54:05.190462 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.190563 kubelet[2209]: I0513 23:54:05.190517 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.193822 kubelet[2209]: E0513 23:54:05.193752 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.193822 kubelet[2209]: I0513 23:54:05.193800 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.196703 kubelet[2209]: E0513 23:54:05.196663 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.279003 kubelet[2209]: I0513 23:54:05.278559 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.279003 kubelet[2209]: I0513 23:54:05.278596 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.280439 kubelet[2209]: I0513 23:54:05.279562 2209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.283869 kubelet[2209]: E0513 23:54:05.283476 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.283869 kubelet[2209]: E0513 23:54:05.283550 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.283869 kubelet[2209]: E0513 23:54:05.283751 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:05.283869 kubelet[2209]: E0513 23:54:05.283762 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:05.284930 kubelet[2209]: E0513 23:54:05.284879 2209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:05.285152 kubelet[2209]: E0513 23:54:05.285120 2209 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:06.127530 kubelet[2209]: I0513 23:54:06.127082 2209 apiserver.go:52] "Watching apiserver" May 13 23:54:06.177906 kubelet[2209]: I0513 23:54:06.177817 2209 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:54:06.834107 systemd[1]: Reload requested from client PID 2474 ('systemctl') (unit session-5.scope)... May 13 23:54:06.834132 systemd[1]: Reloading... May 13 23:54:07.043459 zram_generator::config[2520]: No configuration found. May 13 23:54:07.298215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:54:07.519308 systemd[1]: Reloading finished in 684 ms. May 13 23:54:07.562765 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:54:07.578078 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:54:07.578490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:54:07.578576 systemd[1]: kubelet.service: Consumed 1.365s CPU time, 120.7M memory peak. May 13 23:54:07.583582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:54:07.861531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:54:07.878643 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:54:08.005447 kubelet[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:54:08.005447 kubelet[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 23:54:08.005447 kubelet[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
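
The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors at 23:54:05 are a transient ordering issue: the static pod manifests request that priority class, but the API server only creates its built-in PriorityClasses shortly after it starts serving, so the first mirror-pod attempts are rejected and retried. The object being waited for is the well-known built-in, roughly:

    # Built-in PriorityClass created by the API server during bootstrap (sketch)
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000
    description: Used for system critical pods that must not be moved from their current node.

By 23:54:08 the class exists and the mirror pods are created, which is when the "already exists" and hostname warnings below appear instead.
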
May 13 23:54:08.009039 kubelet[2571]: I0513 23:54:08.005382 2571 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:54:08.032450 kubelet[2571]: I0513 23:54:08.030022 2571 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 23:54:08.032450 kubelet[2571]: I0513 23:54:08.030128 2571 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:54:08.032450 kubelet[2571]: I0513 23:54:08.032442 2571 server.go:954] "Client rotation is on, will bootstrap in background" May 13 23:54:08.042985 kubelet[2571]: I0513 23:54:08.041518 2571 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 23:54:08.053408 kubelet[2571]: I0513 23:54:08.053342 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:54:08.071256 kubelet[2571]: I0513 23:54:08.070656 2571 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 23:54:08.080610 kubelet[2571]: I0513 23:54:08.080249 2571 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:54:08.081862 kubelet[2571]: I0513 23:54:08.081263 2571 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:54:08.081862 kubelet[2571]: I0513 23:54:08.081334 2571 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-db25a1599d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 23:54:08.081862 kubelet[2571]: I0513 23:54:08.081794 2571 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:54:08.081862 kubelet[2571]: I0513 23:54:08.081815 2571 container_manager_linux.go:304] "Creating device plugin manager" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.081889 2571 state_mem.go:36] "Initialized new in-memory state store" May 13 23:54:08.099799 
kubelet[2571]: I0513 23:54:08.082715 2571 kubelet.go:446] "Attempting to sync node with API server" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.082739 2571 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.082776 2571 kubelet.go:352] "Adding apiserver pod source" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.082794 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.089275 2571 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:54:08.099799 kubelet[2571]: I0513 23:54:08.089999 2571 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:54:08.103464 kubelet[2571]: I0513 23:54:08.100762 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 23:54:08.103755 kubelet[2571]: I0513 23:54:08.103729 2571 server.go:1287] "Started kubelet" May 13 23:54:08.127083 kubelet[2571]: I0513 23:54:08.125915 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:54:08.138741 kubelet[2571]: I0513 23:54:08.138662 2571 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:54:08.141601 kubelet[2571]: I0513 23:54:08.141555 2571 server.go:490] "Adding debug handlers to kubelet server" May 13 23:54:08.146769 kubelet[2571]: I0513 23:54:08.146665 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:54:08.148864 kubelet[2571]: I0513 23:54:08.148823 2571 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:54:08.154816 kubelet[2571]: I0513 23:54:08.154755 2571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 23:54:08.158745 kubelet[2571]: I0513 23:54:08.158714 2571 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 23:54:08.159184 kubelet[2571]: E0513 23:54:08.159159 2571 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-db25a1599d\" not found" May 13 23:54:08.160861 kubelet[2571]: I0513 23:54:08.160818 2571 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:54:08.161928 kubelet[2571]: I0513 23:54:08.161912 2571 reconciler.go:26] "Reconciler: start to sync state" May 13 23:54:08.177161 kubelet[2571]: I0513 23:54:08.177096 2571 factory.go:221] Registration of the systemd container factory successfully May 13 23:54:08.177722 kubelet[2571]: I0513 23:54:08.177660 2571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:54:08.184541 kubelet[2571]: I0513 23:54:08.183396 2571 factory.go:221] Registration of the containerd container factory successfully May 13 23:54:08.184541 kubelet[2571]: I0513 23:54:08.183964 2571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:54:08.188952 kubelet[2571]: I0513 23:54:08.188889 2571 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:54:08.188952 kubelet[2571]: I0513 23:54:08.188939 2571 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 23:54:08.189208 kubelet[2571]: I0513 23:54:08.188985 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 23:54:08.189208 kubelet[2571]: I0513 23:54:08.188996 2571 kubelet.go:2388] "Starting kubelet main sync loop" May 13 23:54:08.189739 kubelet[2571]: E0513 23:54:08.189084 2571 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:54:08.292274 kubelet[2571]: E0513 23:54:08.291192 2571 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294281 2571 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294304 2571 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294335 2571 state_mem.go:36] "Initialized new in-memory state store" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294683 2571 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294705 2571 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294729 2571 policy_none.go:49] "None policy: Start" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294739 2571 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.294754 2571 state_mem.go:35] "Initializing new in-memory state store" May 13 23:54:08.295275 kubelet[2571]: I0513 23:54:08.295007 2571 state_mem.go:75] "Updated machine memory state" May 13 23:54:08.311562 kubelet[2571]: I0513 23:54:08.310150 2571 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:54:08.311562 kubelet[2571]: I0513 23:54:08.310445 2571 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 23:54:08.311562 kubelet[2571]: I0513 23:54:08.310468 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:54:08.319574 kubelet[2571]: I0513 23:54:08.319249 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:54:08.329291 kubelet[2571]: E0513 23:54:08.329244 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 23:54:08.428872 kubelet[2571]: I0513 23:54:08.428126 2571 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:08.443636 kubelet[2571]: I0513 23:54:08.443299 2571 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:08.445495 kubelet[2571]: I0513 23:54:08.444254 2571 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284.0.0-n-db25a1599d" May 13 23:54:08.493917 kubelet[2571]: I0513 23:54:08.493720 2571 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.495453 kubelet[2571]: I0513 23:54:08.495385 2571 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.495977 kubelet[2571]: I0513 23:54:08.495777 2571 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.518050 kubelet[2571]: W0513 23:54:08.517131 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:54:08.518050 kubelet[2571]: W0513 23:54:08.517438 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:54:08.518778 kubelet[2571]: W0513 23:54:08.518401 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:54:08.563947 kubelet[2571]: I0513 23:54:08.563825 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.563947 kubelet[2571]: I0513 23:54:08.563911 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.563947 kubelet[2571]: I0513 23:54:08.563937 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/946a43551ab4fff0fae9d9325700f0ed-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-db25a1599d\" (UID: \"946a43551ab4fff0fae9d9325700f0ed\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.563947 kubelet[2571]: I0513 23:54:08.563959 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.564331 kubelet[2571]: I0513 23:54:08.563977 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.564331 kubelet[2571]: I0513 23:54:08.563994 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.564331 kubelet[2571]: I0513 23:54:08.564013 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c64626f4862bb3b3e59f456be2f327a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-db25a1599d\" (UID: \"c64626f4862bb3b3e59f456be2f327a4\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.564331 kubelet[2571]: I0513 23:54:08.564028 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.564331 kubelet[2571]: I0513 23:54:08.564042 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a98fc19953546bf0a31f2d7535542def-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-db25a1599d\" (UID: \"a98fc19953546bf0a31f2d7535542def\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" May 13 23:54:08.819249 kubelet[2571]: E0513 23:54:08.818711 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:08.819249 kubelet[2571]: E0513 23:54:08.818754 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:08.821288 kubelet[2571]: E0513 23:54:08.821061 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:09.104055 kubelet[2571]: I0513 23:54:09.103846 2571 apiserver.go:52] "Watching apiserver" May 13 23:54:09.162683 kubelet[2571]: I0513 23:54:09.162607 2571 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:54:09.261035 kubelet[2571]: I0513 23:54:09.260735 2571 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:09.262644 kubelet[2571]: E0513 23:54:09.261760 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:09.262970 kubelet[2571]: E0513 
23:54:09.262816 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:09.274458 kubelet[2571]: W0513 23:54:09.274107 2571 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 23:54:09.274458 kubelet[2571]: E0513 23:54:09.274208 2571 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284.0.0-n-db25a1599d\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" May 13 23:54:09.274676 kubelet[2571]: E0513 23:54:09.274481 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:09.321774 kubelet[2571]: I0513 23:54:09.321604 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-db25a1599d" podStartSLOduration=1.3215382039999999 podStartE2EDuration="1.321538204s" podCreationTimestamp="2025-05-13 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:09.306626945 +0000 UTC m=+1.417087551" watchObservedRunningTime="2025-05-13 23:54:09.321538204 +0000 UTC m=+1.431998810" May 13 23:54:09.322099 kubelet[2571]: I0513 23:54:09.321850 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-db25a1599d" podStartSLOduration=1.321839754 podStartE2EDuration="1.321839754s" podCreationTimestamp="2025-05-13 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:09.317442097 +0000 UTC m=+1.427902704" watchObservedRunningTime="2025-05-13 23:54:09.321839754 +0000 UTC m=+1.432300349" May 13 23:54:09.349725 kubelet[2571]: I0513 23:54:09.349647 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-db25a1599d" podStartSLOduration=1.349615199 podStartE2EDuration="1.349615199s" podCreationTimestamp="2025-05-13 23:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:09.332238661 +0000 UTC m=+1.442699262" watchObservedRunningTime="2025-05-13 23:54:09.349615199 +0000 UTC m=+1.460075799" May 13 23:54:09.543575 sudo[1641]: pam_unix(sudo:session): session closed for user root May 13 23:54:09.549534 sshd[1640]: Connection closed by 147.75.109.163 port 47424 May 13 23:54:09.551382 sshd-session[1637]: pam_unix(sshd:session): session closed for user core May 13 23:54:09.559135 systemd[1]: sshd@4-147.182.251.203:22-147.75.109.163:47424.service: Deactivated successfully. May 13 23:54:09.562226 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:54:09.562491 systemd[1]: session-5.scope: Consumed 6.390s CPU time, 160.4M memory peak. May 13 23:54:09.563916 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. May 13 23:54:09.566091 systemd-logind[1461]: Removed session 5. 
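Note on the pod_startup_latency_tracker entries above: when no image pull happens (firstStartedPulling/lastFinishedPulling are the zero time), podStartSLOduration and podStartE2EDuration are simply observedRunningTime minus podCreationTimestamp. A minimal Go sketch reproducing the logged 1.321538204s from the timestamps as printed (the parse layout is an assumption matching Go's default time.Time string form):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout for strings like "2025-05-13 23:54:08 +0000 UTC"; time.Parse also
        // accepts the fractional seconds present in observedRunningTime even though
        // the layout omits them.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, err := time.Parse(layout, "2025-05-13 23:54:08 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-05-13 23:54:09.321538204 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 1.321538204s
    }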
May 13 23:54:10.262826 kubelet[2571]: E0513 23:54:10.262707 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:10.262826 kubelet[2571]: E0513 23:54:10.262707 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:11.293332 sshd[2061]: Connection closed by 167.94.138.207 port 46790 [preauth] May 13 23:54:11.295357 systemd[1]: sshd@5-147.182.251.203:22-167.94.138.207:46790.service: Deactivated successfully. May 13 23:54:11.578760 kubelet[2571]: I0513 23:54:11.578554 2571 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 23:54:11.579980 kubelet[2571]: I0513 23:54:11.579800 2571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 23:54:11.580073 containerd[1478]: time="2025-05-13T23:54:11.579466424Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:54:12.242166 systemd[1]: Created slice kubepods-burstable-pod01379a57_939a_4cf8_b600_0e87b47c3c24.slice - libcontainer container kubepods-burstable-pod01379a57_939a_4cf8_b600_0e87b47c3c24.slice. May 13 23:54:12.255971 systemd[1]: Created slice kubepods-besteffort-pod859f3ea3_86d9_4ef4_80f8_00326eb2fa75.slice - libcontainer container kubepods-besteffort-pod859f3ea3_86d9_4ef4_80f8_00326eb2fa75.slice. May 13 23:54:12.292112 kubelet[2571]: I0513 23:54:12.292030 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01379a57-939a-4cf8-b600-0e87b47c3c24-xtables-lock\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" May 13 23:54:12.292112 kubelet[2571]: I0513 23:54:12.292103 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/859f3ea3-86d9-4ef4-80f8-00326eb2fa75-xtables-lock\") pod \"kube-proxy-qz5n9\" (UID: \"859f3ea3-86d9-4ef4-80f8-00326eb2fa75\") " pod="kube-system/kube-proxy-qz5n9" May 13 23:54:12.292343 kubelet[2571]: I0513 23:54:12.292135 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/859f3ea3-86d9-4ef4-80f8-00326eb2fa75-lib-modules\") pod \"kube-proxy-qz5n9\" (UID: \"859f3ea3-86d9-4ef4-80f8-00326eb2fa75\") " pod="kube-system/kube-proxy-qz5n9" May 13 23:54:12.292343 kubelet[2571]: I0513 23:54:12.292155 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/01379a57-939a-4cf8-b600-0e87b47c3c24-cni\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" May 13 23:54:12.292343 kubelet[2571]: I0513 23:54:12.292178 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9n5\" (UniqueName: \"kubernetes.io/projected/01379a57-939a-4cf8-b600-0e87b47c3c24-kube-api-access-rn9n5\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" 
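The dns.go:153 "Nameserver limits exceeded" errors repeated through this log come from kubelet keeping at most three nameserver entries from the host's /etc/resolv.conf and dropping the rest; the "applied nameserver line" it prints is what survives the trim. A rough illustration of that trimming (not kubelet's actual code; the extra 1.1.1.1 entry below is hypothetical):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // trimNameservers keeps only the first `limit` nameserver lines of a
    // resolv.conf-style input, mirroring the behaviour behind the
    // "Nameserver limits exceeded" messages. Illustrative only.
    func trimNameservers(resolvConf string, limit int) (kept, dropped []string) {
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 2 || fields[0] != "nameserver" {
                continue
            }
            if len(kept) < limit {
                kept = append(kept, fields[1])
            } else {
                dropped = append(dropped, fields[1])
            }
        }
        return kept, dropped
    }

    func main() {
        conf := "nameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 1.1.1.1\n"
        kept, dropped := trimNameservers(conf, 3)
        fmt.Println("applied:", strings.Join(kept, " ")) // applied: 67.207.67.2 67.207.67.3 67.207.67.2
        fmt.Println("omitted:", dropped)                 // omitted: [1.1.1.1]
    }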
May 13 23:54:12.292343 kubelet[2571]: I0513 23:54:12.292195 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/859f3ea3-86d9-4ef4-80f8-00326eb2fa75-kube-proxy\") pod \"kube-proxy-qz5n9\" (UID: \"859f3ea3-86d9-4ef4-80f8-00326eb2fa75\") " pod="kube-system/kube-proxy-qz5n9" May 13 23:54:12.292343 kubelet[2571]: I0513 23:54:12.292213 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/01379a57-939a-4cf8-b600-0e87b47c3c24-run\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" May 13 23:54:12.292561 kubelet[2571]: I0513 23:54:12.292228 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/01379a57-939a-4cf8-b600-0e87b47c3c24-flannel-cfg\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" May 13 23:54:12.292561 kubelet[2571]: I0513 23:54:12.292249 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7rq\" (UniqueName: \"kubernetes.io/projected/859f3ea3-86d9-4ef4-80f8-00326eb2fa75-kube-api-access-8g7rq\") pod \"kube-proxy-qz5n9\" (UID: \"859f3ea3-86d9-4ef4-80f8-00326eb2fa75\") " pod="kube-system/kube-proxy-qz5n9" May 13 23:54:12.292561 kubelet[2571]: I0513 23:54:12.292266 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/01379a57-939a-4cf8-b600-0e87b47c3c24-cni-plugin\") pod \"kube-flannel-ds-fbvw8\" (UID: \"01379a57-939a-4cf8-b600-0e87b47c3c24\") " pod="kube-flannel/kube-flannel-ds-fbvw8" May 13 23:54:12.553525 kubelet[2571]: E0513 23:54:12.550063 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:12.553705 containerd[1478]: time="2025-05-13T23:54:12.552929485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fbvw8,Uid:01379a57-939a-4cf8-b600-0e87b47c3c24,Namespace:kube-flannel,Attempt:0,}" May 13 23:54:12.567601 kubelet[2571]: E0513 23:54:12.566565 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:12.569478 containerd[1478]: time="2025-05-13T23:54:12.569057422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz5n9,Uid:859f3ea3-86d9-4ef4-80f8-00326eb2fa75,Namespace:kube-system,Attempt:0,}" May 13 23:54:12.613610 containerd[1478]: time="2025-05-13T23:54:12.613539628Z" level=info msg="connecting to shim 1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27" address="unix:///run/containerd/s/1c3f597298aeb802d5464f964c930dc9bb576e93a0f4f835424bfbe4613b24d7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:12.653302 containerd[1478]: time="2025-05-13T23:54:12.652786391Z" level=info msg="connecting to shim e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639" address="unix:///run/containerd/s/590492bd6068543e87724b0595df4d8df22db9edf17a2970df92008955da2d38" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:12.693736 
systemd[1]: Started cri-containerd-1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27.scope - libcontainer container 1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27. May 13 23:54:12.719719 systemd[1]: Started cri-containerd-e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639.scope - libcontainer container e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639. May 13 23:54:12.857589 containerd[1478]: time="2025-05-13T23:54:12.857133139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fbvw8,Uid:01379a57-939a-4cf8-b600-0e87b47c3c24,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\"" May 13 23:54:12.860733 kubelet[2571]: E0513 23:54:12.859381 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:12.861159 containerd[1478]: time="2025-05-13T23:54:12.860136828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qz5n9,Uid:859f3ea3-86d9-4ef4-80f8-00326eb2fa75,Namespace:kube-system,Attempt:0,} returns sandbox id \"e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639\"" May 13 23:54:12.863028 kubelet[2571]: E0513 23:54:12.861626 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:12.863195 containerd[1478]: time="2025-05-13T23:54:12.862649703Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 13 23:54:12.865227 systemd-resolved[1334]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. May 13 23:54:12.867247 containerd[1478]: time="2025-05-13T23:54:12.867197039Z" level=info msg="CreateContainer within sandbox \"e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:54:12.886640 containerd[1478]: time="2025-05-13T23:54:12.886573635Z" level=info msg="Container 4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:12.898353 containerd[1478]: time="2025-05-13T23:54:12.898270917Z" level=info msg="CreateContainer within sandbox \"e090e19a36ac1aa58e9ab4b6ec87883cb7f3eedf82daa99fee1d4da9e1dae639\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf\"" May 13 23:54:12.900233 containerd[1478]: time="2025-05-13T23:54:12.900015326Z" level=info msg="StartContainer for \"4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf\"" May 13 23:54:12.904319 containerd[1478]: time="2025-05-13T23:54:12.904210918Z" level=info msg="connecting to shim 4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf" address="unix:///run/containerd/s/590492bd6068543e87724b0595df4d8df22db9edf17a2970df92008955da2d38" protocol=ttrpc version=3 May 13 23:54:12.930889 systemd[1]: Started cri-containerd-4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf.scope - libcontainer container 4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf. 
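The PullImage/CreateContainer messages above are the kubelet driving containerd's CRI plugin in the k8s.io namespace (the namespace shown on the "connecting to shim" lines). For reference, a standalone pull of the same image through the public containerd Go client would look roughly like the sketch below; the /run/containerd/containerd.sock path is the usual default rather than something taken from this log, and the import paths assume the pre-2.0 client module (newer releases moved it under .../v2/client):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket (assumption; not printed in the log).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images and containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "docker.io/flannel/flannel-cni-plugin:v1.1.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }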
May 13 23:54:13.010723 containerd[1478]: time="2025-05-13T23:54:13.010671342Z" level=info msg="StartContainer for \"4b903e3b5e483bb1c90843cfc33495aa090c3c086a8658049bf554371ab16dcf\" returns successfully" May 13 23:54:13.045283 systemd-timesyncd[1362]: Contacted time server 208.67.75.242:123 (2.flatcar.pool.ntp.org). May 13 23:54:13.045377 systemd-timesyncd[1362]: Initial clock synchronization to Tue 2025-05-13 23:54:13.158314 UTC. May 13 23:54:13.273588 kubelet[2571]: E0513 23:54:13.273195 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:14.969762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001161942.mount: Deactivated successfully. May 13 23:54:15.029084 containerd[1478]: time="2025-05-13T23:54:15.028990121Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:15.031005 containerd[1478]: time="2025-05-13T23:54:15.030708858Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" May 13 23:54:15.032268 containerd[1478]: time="2025-05-13T23:54:15.031866018Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:15.035795 containerd[1478]: time="2025-05-13T23:54:15.035705110Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:15.036329 containerd[1478]: time="2025-05-13T23:54:15.036280999Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.173587437s" May 13 23:54:15.036405 containerd[1478]: time="2025-05-13T23:54:15.036337798Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 13 23:54:15.043232 containerd[1478]: time="2025-05-13T23:54:15.043063861Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 13 23:54:15.055922 containerd[1478]: time="2025-05-13T23:54:15.054142824Z" level=info msg="Container 3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:15.071534 containerd[1478]: time="2025-05-13T23:54:15.071465656Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\"" May 13 23:54:15.075473 containerd[1478]: time="2025-05-13T23:54:15.074397565Z" level=info msg="StartContainer for \"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\"" May 13 23:54:15.076500 containerd[1478]: 
time="2025-05-13T23:54:15.075927788Z" level=info msg="connecting to shim 3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803" address="unix:///run/containerd/s/1c3f597298aeb802d5464f964c930dc9bb576e93a0f4f835424bfbe4613b24d7" protocol=ttrpc version=3 May 13 23:54:15.110811 systemd[1]: Started cri-containerd-3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803.scope - libcontainer container 3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803. May 13 23:54:15.161877 kubelet[2571]: E0513 23:54:15.161823 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:15.169766 systemd[1]: cri-containerd-3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803.scope: Deactivated successfully. May 13 23:54:15.178707 containerd[1478]: time="2025-05-13T23:54:15.178625524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\" id:\"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\" pid:2907 exited_at:{seconds:1747180455 nanos:177148370}" May 13 23:54:15.182376 containerd[1478]: time="2025-05-13T23:54:15.181879305Z" level=info msg="received exit event container_id:\"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\" id:\"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\" pid:2907 exited_at:{seconds:1747180455 nanos:177148370}" May 13 23:54:15.185911 containerd[1478]: time="2025-05-13T23:54:15.185747666Z" level=info msg="StartContainer for \"3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803\" returns successfully" May 13 23:54:15.201185 kubelet[2571]: I0513 23:54:15.201102 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qz5n9" podStartSLOduration=3.200781656 podStartE2EDuration="3.200781656s" podCreationTimestamp="2025-05-13 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:13.29354121 +0000 UTC m=+5.404001812" watchObservedRunningTime="2025-05-13 23:54:15.200781656 +0000 UTC m=+7.311242297" May 13 23:54:15.281869 kubelet[2571]: E0513 23:54:15.281669 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:15.282133 kubelet[2571]: E0513 23:54:15.282076 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:15.283996 containerd[1478]: time="2025-05-13T23:54:15.283937914Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 13 23:54:15.842688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bde2bc297d4fe086c2054846e58b5d745de9ef044f829b7822f800155eec803-rootfs.mount: Deactivated successfully. 
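The TaskExit events above report the container exit time as a raw Unix timestamp (exited_at:{seconds:1747180455 nanos:177148370}). A quick conversion confirms it lines up with the surrounding journal timestamps at 23:54:15 UTC:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // seconds/nanos copied from the TaskExit event for container 3bde2bc2...
        exitedAt := time.Unix(1747180455, 177148370).UTC()
        fmt.Println(exitedAt) // 2025-05-13 23:54:15.17714837 +0000 UTC
    }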
May 13 23:54:16.484299 kubelet[2571]: E0513 23:54:16.483334 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:17.290083 kubelet[2571]: E0513 23:54:17.290037 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:17.569476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684346587.mount: Deactivated successfully. May 13 23:54:17.740556 kubelet[2571]: E0513 23:54:17.740042 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:18.291387 kubelet[2571]: E0513 23:54:18.291252 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:18.689757 containerd[1478]: time="2025-05-13T23:54:18.688270171Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:18.690341 containerd[1478]: time="2025-05-13T23:54:18.689837224Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 13 23:54:18.690991 containerd[1478]: time="2025-05-13T23:54:18.690878406Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:18.696521 containerd[1478]: time="2025-05-13T23:54:18.694769447Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:54:18.696795 containerd[1478]: time="2025-05-13T23:54:18.696747197Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.412749698s" May 13 23:54:18.696917 containerd[1478]: time="2025-05-13T23:54:18.696897699Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 13 23:54:18.703960 containerd[1478]: time="2025-05-13T23:54:18.703045354Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:54:18.721989 containerd[1478]: time="2025-05-13T23:54:18.721926509Z" level=info msg="Container a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:18.728387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993869376.mount: Deactivated successfully. 
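The flannel:v0.22.0 pull above reports both the bytes fetched and the wall time, which gives a rough effective throughput for the pull from Docker Hub (numbers copied from the log):

    package main

    import "fmt"

    func main() {
        const (
            bytesRead = 26866358    // "stop pulling image ...: active requests=0, bytes read=26866358"
            seconds   = 3.412749698 // "... in 3.412749698s"
        )
        bps := bytesRead / seconds
        fmt.Printf("%.2f MB/s (%.2f MiB/s)\n", bps/1e6, bps/(1<<20))
        // ≈ 7.87 MB/s (7.51 MiB/s)
    }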
May 13 23:54:18.735494 containerd[1478]: time="2025-05-13T23:54:18.735091253Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\"" May 13 23:54:18.737350 containerd[1478]: time="2025-05-13T23:54:18.737184061Z" level=info msg="StartContainer for \"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\"" May 13 23:54:18.740717 containerd[1478]: time="2025-05-13T23:54:18.740650941Z" level=info msg="connecting to shim a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31" address="unix:///run/containerd/s/1c3f597298aeb802d5464f964c930dc9bb576e93a0f4f835424bfbe4613b24d7" protocol=ttrpc version=3 May 13 23:54:18.775816 systemd[1]: Started cri-containerd-a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31.scope - libcontainer container a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31. May 13 23:54:18.822338 systemd[1]: cri-containerd-a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31.scope: Deactivated successfully. May 13 23:54:18.827460 containerd[1478]: time="2025-05-13T23:54:18.827329191Z" level=info msg="received exit event container_id:\"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\" id:\"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\" pid:2978 exited_at:{seconds:1747180458 nanos:826898848}" May 13 23:54:18.829015 containerd[1478]: time="2025-05-13T23:54:18.828547441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\" id:\"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\" pid:2978 exited_at:{seconds:1747180458 nanos:826898848}" May 13 23:54:18.839482 containerd[1478]: time="2025-05-13T23:54:18.839385075Z" level=info msg="StartContainer for \"a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31\" returns successfully" May 13 23:54:18.866054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3b931460f1b6b4bbe618ce9fe21d9eb1410a70db999ee633d2dfdc2977d0a31-rootfs.mount: Deactivated successfully. 
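The tmpmount and rootfs mount units above (for example var-lib-containerd-tmpmounts-containerd\x2dmount1993869376.mount) are systemd-escaped mount paths: slashes become dashes, and other special bytes, including the literal dashes in the path, become \xNN escapes. A simplified re-implementation of that escaping, for illustration only (real systemd also handles edge cases such as a leading dot or an empty path):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath mimics, in simplified form, how systemd derives a .mount unit
    // name from a path: drop surrounding '/', map the remaining '/' to '-', and
    // hex-escape any byte that is not alphanumeric, ':', '_' or '.'.
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
                c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount1993869376"))
        // var-lib-containerd-tmpmounts-containerd\x2dmount1993869376.mount
    }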
May 13 23:54:18.869770 kubelet[2571]: I0513 23:54:18.869467 2571 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 23:54:18.935144 kubelet[2571]: W0513 23:54:18.934236 2571 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4284.0.0-n-db25a1599d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284.0.0-n-db25a1599d' and this object May 13 23:54:18.935144 kubelet[2571]: E0513 23:54:18.934402 2571 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4284.0.0-n-db25a1599d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-db25a1599d' and this object" logger="UnhandledError" May 13 23:54:18.936038 kubelet[2571]: I0513 23:54:18.935884 2571 status_manager.go:890] "Failed to get status for pod" podUID="34b79b62-2043-4d0e-91c3-b8684a5d94ad" pod="kube-system/coredns-668d6bf9bc-vnrbq" err="pods \"coredns-668d6bf9bc-vnrbq\" is forbidden: User \"system:node:ci-4284.0.0-n-db25a1599d\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284.0.0-n-db25a1599d' and this object" May 13 23:54:18.945563 kubelet[2571]: I0513 23:54:18.945333 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b79b62-2043-4d0e-91c3-b8684a5d94ad-config-volume\") pod \"coredns-668d6bf9bc-vnrbq\" (UID: \"34b79b62-2043-4d0e-91c3-b8684a5d94ad\") " pod="kube-system/coredns-668d6bf9bc-vnrbq" May 13 23:54:18.945563 kubelet[2571]: I0513 23:54:18.945392 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltjv\" (UniqueName: \"kubernetes.io/projected/34b79b62-2043-4d0e-91c3-b8684a5d94ad-kube-api-access-cltjv\") pod \"coredns-668d6bf9bc-vnrbq\" (UID: \"34b79b62-2043-4d0e-91c3-b8684a5d94ad\") " pod="kube-system/coredns-668d6bf9bc-vnrbq" May 13 23:54:18.945944 kubelet[2571]: I0513 23:54:18.945765 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc3bcd7-184c-4e37-97ac-dd0f950547c5-config-volume\") pod \"coredns-668d6bf9bc-4xjjs\" (UID: \"ebc3bcd7-184c-4e37-97ac-dd0f950547c5\") " pod="kube-system/coredns-668d6bf9bc-4xjjs" May 13 23:54:18.945944 kubelet[2571]: I0513 23:54:18.945839 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7klk\" (UniqueName: \"kubernetes.io/projected/ebc3bcd7-184c-4e37-97ac-dd0f950547c5-kube-api-access-w7klk\") pod \"coredns-668d6bf9bc-4xjjs\" (UID: \"ebc3bcd7-184c-4e37-97ac-dd0f950547c5\") " pod="kube-system/coredns-668d6bf9bc-4xjjs" May 13 23:54:18.964551 systemd[1]: Created slice kubepods-burstable-pod34b79b62_2043_4d0e_91c3_b8684a5d94ad.slice - libcontainer container kubepods-burstable-pod34b79b62_2043_4d0e_91c3_b8684a5d94ad.slice. May 13 23:54:18.972016 systemd[1]: Created slice kubepods-burstable-podebc3bcd7_184c_4e37_97ac_dd0f950547c5.slice - libcontainer container kubepods-burstable-podebc3bcd7_184c_4e37_97ac_dd0f950547c5.slice. 
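The "Created slice kubepods-burstable-pod34b79b62_2043_4d0e_91c3_b8684a5d94ad.slice" message above follows the systemd cgroup driver naming visible throughout this log: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID replaced by underscores. A tiny helper reproducing the names seen here (illustrative, not kubelet's own code):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName rebuilds the transient slice name used for a pod, matching
    // the "Created slice" entries in this log.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of coredns-668d6bf9bc-vnrbq, from the status_manager entry above.
        fmt.Println(podSliceName("burstable", "34b79b62-2043-4d0e-91c3-b8684a5d94ad"))
        // kubepods-burstable-pod34b79b62_2043_4d0e_91c3_b8684a5d94ad.slice
    }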
May 13 23:54:19.299526 kubelet[2571]: E0513 23:54:19.299185 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:19.301762 kubelet[2571]: E0513 23:54:19.301554 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:19.307460 containerd[1478]: time="2025-05-13T23:54:19.307233034Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 13 23:54:19.328396 containerd[1478]: time="2025-05-13T23:54:19.328201821Z" level=info msg="Container 9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:19.339756 containerd[1478]: time="2025-05-13T23:54:19.339682161Z" level=info msg="CreateContainer within sandbox \"1a06c32424941c2c246c3c13cdd9316a80813a7929ab25523f597a47cc019a27\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551\"" May 13 23:54:19.340785 containerd[1478]: time="2025-05-13T23:54:19.340717794Z" level=info msg="StartContainer for \"9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551\"" May 13 23:54:19.342990 containerd[1478]: time="2025-05-13T23:54:19.342905080Z" level=info msg="connecting to shim 9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551" address="unix:///run/containerd/s/1c3f597298aeb802d5464f964c930dc9bb576e93a0f4f835424bfbe4613b24d7" protocol=ttrpc version=3 May 13 23:54:19.372584 update_engine[1463]: I20250513 23:54:19.371920 1463 update_attempter.cc:509] Updating boot flags... May 13 23:54:19.374960 systemd[1]: Started cri-containerd-9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551.scope - libcontainer container 9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551. May 13 23:54:19.477100 containerd[1478]: time="2025-05-13T23:54:19.477040023Z" level=info msg="StartContainer for \"9ceb820f19413b566f18af233677aa11b031d836153ad99abcd2ef74c3317551\" returns successfully" May 13 23:54:19.482462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3034) May 13 23:54:19.630772 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3042) May 13 23:54:20.048331 kubelet[2571]: E0513 23:54:20.047638 2571 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 23:54:20.048331 kubelet[2571]: E0513 23:54:20.047830 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ebc3bcd7-184c-4e37-97ac-dd0f950547c5-config-volume podName:ebc3bcd7-184c-4e37-97ac-dd0f950547c5 nodeName:}" failed. No retries permitted until 2025-05-13 23:54:20.54779446 +0000 UTC m=+12.658255065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ebc3bcd7-184c-4e37-97ac-dd0f950547c5-config-volume") pod "coredns-668d6bf9bc-4xjjs" (UID: "ebc3bcd7-184c-4e37-97ac-dd0f950547c5") : failed to sync configmap cache: timed out waiting for the condition May 13 23:54:20.048331 kubelet[2571]: E0513 23:54:20.047636 2571 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 23:54:20.048331 kubelet[2571]: E0513 23:54:20.048283 2571 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b79b62-2043-4d0e-91c3-b8684a5d94ad-config-volume podName:34b79b62-2043-4d0e-91c3-b8684a5d94ad nodeName:}" failed. No retries permitted until 2025-05-13 23:54:20.548258754 +0000 UTC m=+12.658719346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34b79b62-2043-4d0e-91c3-b8684a5d94ad-config-volume") pod "coredns-668d6bf9bc-vnrbq" (UID: "34b79b62-2043-4d0e-91c3-b8684a5d94ad") : failed to sync configmap cache: timed out waiting for the condition May 13 23:54:20.305587 kubelet[2571]: E0513 23:54:20.305088 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:20.665247 systemd-networkd[1380]: flannel.1: Link UP May 13 23:54:20.665257 systemd-networkd[1380]: flannel.1: Gained carrier May 13 23:54:20.779944 kubelet[2571]: E0513 23:54:20.779876 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:20.781693 kubelet[2571]: E0513 23:54:20.780073 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:20.782619 containerd[1478]: time="2025-05-13T23:54:20.782240006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vnrbq,Uid:34b79b62-2043-4d0e-91c3-b8684a5d94ad,Namespace:kube-system,Attempt:0,}" May 13 23:54:20.782619 containerd[1478]: time="2025-05-13T23:54:20.782288369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4xjjs,Uid:ebc3bcd7-184c-4e37-97ac-dd0f950547c5,Namespace:kube-system,Attempt:0,}" May 13 23:54:20.846660 systemd-networkd[1380]: cni0: Link UP May 13 23:54:20.846674 systemd-networkd[1380]: cni0: Gained carrier May 13 23:54:20.847169 systemd-networkd[1380]: cni0: Lost carrier May 13 23:54:20.867344 kernel: cni0: port 1(veth4a0503ef) entered blocking state May 13 23:54:20.867510 kernel: cni0: port 1(veth4a0503ef) entered disabled state May 13 23:54:20.869292 kernel: veth4a0503ef: entered allmulticast mode May 13 23:54:20.869235 systemd-networkd[1380]: veth4a0503ef: Link UP May 13 23:54:20.870527 kernel: veth4a0503ef: entered promiscuous mode May 13 23:54:20.873813 kernel: cni0: port 1(veth4a0503ef) entered blocking state May 13 23:54:20.876755 kernel: cni0: port 1(veth4a0503ef) entered forwarding state May 13 23:54:20.876933 kernel: cni0: port 1(veth4a0503ef) entered disabled state May 13 23:54:20.894660 systemd-networkd[1380]: vetha2faa748: Link UP May 13 23:54:20.898702 kernel: cni0: port 2(vetha2faa748) entered blocking state May 13 23:54:20.898928 kernel: cni0: port 2(vetha2faa748) 
entered disabled state May 13 23:54:20.915521 kernel: vetha2faa748: entered allmulticast mode May 13 23:54:20.926658 kernel: vetha2faa748: entered promiscuous mode May 13 23:54:20.933829 kernel: cni0: port 2(vetha2faa748) entered blocking state May 13 23:54:20.934922 kernel: cni0: port 2(vetha2faa748) entered forwarding state May 13 23:54:20.942845 kernel: cni0: port 2(vetha2faa748) entered disabled state May 13 23:54:20.944625 systemd-networkd[1380]: cni0: Gained carrier May 13 23:54:20.973617 kernel: cni0: port 1(veth4a0503ef) entered blocking state May 13 23:54:20.974368 kernel: cni0: port 1(veth4a0503ef) entered forwarding state May 13 23:54:20.973931 systemd-networkd[1380]: veth4a0503ef: Gained carrier May 13 23:54:20.998288 containerd[1478]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} May 13 23:54:20.998288 containerd[1478]: delegateAdd: netconf sent to delegate plugin: May 13 23:54:21.020161 kernel: cni0: port 2(vetha2faa748) entered blocking state May 13 23:54:21.020338 kernel: cni0: port 2(vetha2faa748) entered forwarding state May 13 23:54:21.024609 systemd-networkd[1380]: vetha2faa748: Gained carrier May 13 23:54:21.031014 containerd[1478]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} May 13 23:54:21.031014 containerd[1478]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} May 13 23:54:21.031014 containerd[1478]: delegateAdd: netconf sent to delegate plugin: May 13 23:54:21.175627 containerd[1478]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-13T23:54:21.175540486Z" level=info msg="connecting to shim 65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359" address="unix:///run/containerd/s/a3a7e6bcd66ef02f7eccf923c5efcc797a9be1f74a676c4a9052a684d1d74632" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:21.182076 containerd[1478]: time="2025-05-13T23:54:21.181966497Z" level=info msg="connecting to shim 8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99" address="unix:///run/containerd/s/9335e9a40decb57be20da7e581b3b27a7988298ecb54087fcb6f3f5af075383d" namespace=k8s.io protocol=ttrpc version=3 May 13 23:54:21.244822 systemd[1]: Started cri-containerd-65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359.scope - libcontainer container 
65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359. May 13 23:54:21.258146 systemd[1]: Started cri-containerd-8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99.scope - libcontainer container 8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99. May 13 23:54:21.313880 kubelet[2571]: E0513 23:54:21.313665 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:21.398358 containerd[1478]: time="2025-05-13T23:54:21.398060987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vnrbq,Uid:34b79b62-2043-4d0e-91c3-b8684a5d94ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359\"" May 13 23:54:21.400691 kubelet[2571]: E0513 23:54:21.399905 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:21.405773 containerd[1478]: time="2025-05-13T23:54:21.403337544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4xjjs,Uid:ebc3bcd7-184c-4e37-97ac-dd0f950547c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99\"" May 13 23:54:21.406491 containerd[1478]: time="2025-05-13T23:54:21.406090639Z" level=info msg="CreateContainer within sandbox \"65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:54:21.411933 kubelet[2571]: E0513 23:54:21.411806 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:21.416522 containerd[1478]: time="2025-05-13T23:54:21.416451641Z" level=info msg="CreateContainer within sandbox \"8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:54:21.439178 containerd[1478]: time="2025-05-13T23:54:21.439091994Z" level=info msg="Container 311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:21.442094 containerd[1478]: time="2025-05-13T23:54:21.442031966Z" level=info msg="Container 2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559: CDI devices from CRI Config.CDIDevices: []" May 13 23:54:21.462098 containerd[1478]: time="2025-05-13T23:54:21.462025108Z" level=info msg="CreateContainer within sandbox \"65a9f2f9fa33f4fb13a6192f5061be1574cf78143023eac3a409107bc9559359\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e\"" May 13 23:54:21.465117 containerd[1478]: time="2025-05-13T23:54:21.465055101Z" level=info msg="StartContainer for \"311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e\"" May 13 23:54:21.469505 containerd[1478]: time="2025-05-13T23:54:21.469052365Z" level=info msg="connecting to shim 311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e" address="unix:///run/containerd/s/a3a7e6bcd66ef02f7eccf923c5efcc797a9be1f74a676c4a9052a684d1d74632" protocol=ttrpc version=3 May 13 23:54:21.472436 containerd[1478]: time="2025-05-13T23:54:21.472214032Z" level=info 
msg="CreateContainer within sandbox \"8854450634bba28b66a13df2cb5ed639a2090702e2f5934084d37134b5dced99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559\"" May 13 23:54:21.475041 containerd[1478]: time="2025-05-13T23:54:21.474868325Z" level=info msg="StartContainer for \"2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559\"" May 13 23:54:21.481476 containerd[1478]: time="2025-05-13T23:54:21.480962542Z" level=info msg="connecting to shim 2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559" address="unix:///run/containerd/s/9335e9a40decb57be20da7e581b3b27a7988298ecb54087fcb6f3f5af075383d" protocol=ttrpc version=3 May 13 23:54:21.510699 systemd[1]: Started cri-containerd-311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e.scope - libcontainer container 311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e. May 13 23:54:21.532961 systemd[1]: Started cri-containerd-2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559.scope - libcontainer container 2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559. May 13 23:54:21.613095 containerd[1478]: time="2025-05-13T23:54:21.612665573Z" level=info msg="StartContainer for \"311ad08e3294b7b7ca50763db1f80d4641e453c81f39d8bd1f38b992ee82e30e\" returns successfully" May 13 23:54:21.635774 containerd[1478]: time="2025-05-13T23:54:21.635611396Z" level=info msg="StartContainer for \"2438da2db7555bbf3a36da9d83fec0eb3878b12fb5f88c319072193c471f7559\" returns successfully" May 13 23:54:22.174605 systemd-networkd[1380]: cni0: Gained IPv6LL May 13 23:54:22.322759 kubelet[2571]: E0513 23:54:22.321632 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:22.335373 kubelet[2571]: E0513 23:54:22.333504 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:22.362095 kubelet[2571]: I0513 23:54:22.361394 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4xjjs" podStartSLOduration=10.361362748 podStartE2EDuration="10.361362748s" podCreationTimestamp="2025-05-13 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:22.356808426 +0000 UTC m=+14.467269041" watchObservedRunningTime="2025-05-13 23:54:22.361362748 +0000 UTC m=+14.471823356" May 13 23:54:22.362095 kubelet[2571]: I0513 23:54:22.361719 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-fbvw8" podStartSLOduration=4.524315502 podStartE2EDuration="10.36170684s" podCreationTimestamp="2025-05-13 23:54:12 +0000 UTC" firstStartedPulling="2025-05-13 23:54:12.861801025 +0000 UTC m=+4.972261601" lastFinishedPulling="2025-05-13 23:54:18.699192348 +0000 UTC m=+10.809652939" observedRunningTime="2025-05-13 23:54:20.320058893 +0000 UTC m=+12.430519497" watchObservedRunningTime="2025-05-13 23:54:22.36170684 +0000 UTC m=+14.472167453" May 13 23:54:22.366667 systemd-networkd[1380]: veth4a0503ef: Gained IPv6LL May 13 23:54:22.418929 kubelet[2571]: I0513 23:54:22.414282 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-vnrbq" podStartSLOduration=10.414249256 podStartE2EDuration="10.414249256s" podCreationTimestamp="2025-05-13 23:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:54:22.413758199 +0000 UTC m=+14.524218809" watchObservedRunningTime="2025-05-13 23:54:22.414249256 +0000 UTC m=+14.524709876" May 13 23:54:22.558826 systemd-networkd[1380]: vetha2faa748: Gained IPv6LL May 13 23:54:22.559390 systemd-networkd[1380]: flannel.1: Gained IPv6LL May 13 23:54:23.336916 kubelet[2571]: E0513 23:54:23.336402 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:23.336916 kubelet[2571]: E0513 23:54:23.336699 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:24.338954 kubelet[2571]: E0513 23:54:24.338621 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:24.340708 kubelet[2571]: E0513 23:54:24.340599 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" May 13 23:54:49.558984 systemd[1]: Started sshd@6-147.182.251.203:22-147.75.109.163:41122.service - OpenSSH per-connection server daemon (147.75.109.163:41122). May 13 23:54:49.645870 sshd[3463]: Accepted publickey for core from 147.75.109.163 port 41122 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:49.648109 sshd-session[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:49.655605 systemd-logind[1461]: New session 6 of user core. May 13 23:54:49.666773 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:54:49.894546 sshd[3465]: Connection closed by 147.75.109.163 port 41122 May 13 23:54:49.894990 sshd-session[3463]: pam_unix(sshd:session): session closed for user core May 13 23:54:49.903681 systemd[1]: sshd@6-147.182.251.203:22-147.75.109.163:41122.service: Deactivated successfully. May 13 23:54:49.908473 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:54:49.911963 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. May 13 23:54:49.914287 systemd-logind[1461]: Removed session 6. May 13 23:54:54.913638 systemd[1]: Started sshd@7-147.182.251.203:22-147.75.109.163:41130.service - OpenSSH per-connection server daemon (147.75.109.163:41130). May 13 23:54:54.981607 sshd[3503]: Accepted publickey for core from 147.75.109.163 port 41130 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:54:54.984042 sshd-session[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:54:54.992181 systemd-logind[1461]: New session 7 of user core. May 13 23:54:54.999703 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:54:55.164722 sshd[3505]: Connection closed by 147.75.109.163 port 41130 May 13 23:54:55.165915 sshd-session[3503]: pam_unix(sshd:session): session closed for user core May 13 23:54:55.170571 systemd-logind[1461]: Session 7 logged out. 
Waiting for processes to exit. May 13 23:54:55.170952 systemd[1]: sshd@7-147.182.251.203:22-147.75.109.163:41130.service: Deactivated successfully. May 13 23:54:55.174015 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:54:55.176496 systemd-logind[1461]: Removed session 7. May 13 23:55:00.191627 systemd[1]: Started sshd@8-147.182.251.203:22-147.75.109.163:41388.service - OpenSSH per-connection server daemon (147.75.109.163:41388). May 13 23:55:00.295938 sshd[3540]: Accepted publickey for core from 147.75.109.163 port 41388 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:55:00.300003 sshd-session[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:55:00.307700 systemd-logind[1461]: New session 8 of user core. May 13 23:55:00.314727 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 23:55:00.476770 sshd[3542]: Connection closed by 147.75.109.163 port 41388 May 13 23:55:00.476597 sshd-session[3540]: pam_unix(sshd:session): session closed for user core May 13 23:55:00.487567 systemd[1]: sshd@8-147.182.251.203:22-147.75.109.163:41388.service: Deactivated successfully. May 13 23:55:00.490964 systemd[1]: session-8.scope: Deactivated successfully. May 13 23:55:00.493699 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. May 13 23:55:00.496349 systemd[1]: Started sshd@9-147.182.251.203:22-147.75.109.163:41396.service - OpenSSH per-connection server daemon (147.75.109.163:41396). May 13 23:55:00.498297 systemd-logind[1461]: Removed session 8. May 13 23:55:00.562010 sshd[3553]: Accepted publickey for core from 147.75.109.163 port 41396 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:55:00.564298 sshd-session[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:55:00.574294 systemd-logind[1461]: New session 9 of user core. May 13 23:55:00.586780 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 23:55:00.792309 sshd[3556]: Connection closed by 147.75.109.163 port 41396 May 13 23:55:00.793172 sshd-session[3553]: pam_unix(sshd:session): session closed for user core May 13 23:55:00.810184 systemd[1]: sshd@9-147.182.251.203:22-147.75.109.163:41396.service: Deactivated successfully. May 13 23:55:00.812838 systemd[1]: session-9.scope: Deactivated successfully. May 13 23:55:00.816480 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. May 13 23:55:00.818940 systemd[1]: Started sshd@10-147.182.251.203:22-147.75.109.163:41400.service - OpenSSH per-connection server daemon (147.75.109.163:41400). May 13 23:55:00.825981 systemd-logind[1461]: Removed session 9. May 13 23:55:00.888176 sshd[3565]: Accepted publickey for core from 147.75.109.163 port 41400 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc May 13 23:55:00.890332 sshd-session[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:55:00.896856 systemd-logind[1461]: New session 10 of user core. May 13 23:55:00.901694 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 23:55:01.069047 sshd[3568]: Connection closed by 147.75.109.163 port 41400 May 13 23:55:01.070174 sshd-session[3565]: pam_unix(sshd:session): session closed for user core May 13 23:55:01.086688 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. May 13 23:55:01.087849 systemd[1]: sshd@10-147.182.251.203:22-147.75.109.163:41400.service: Deactivated successfully. 
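Looking back at the CNI setup around 23:54:21, the netconf that flannel handed to the bridge plugin was logged verbatim as JSON by containerd ("delegateAdd: netconf sent to delegate plugin"). Decoding it with the standard library makes the delegated settings easier to read; the struct below models only the fields present in that logged config and is an assumption about shape, not the CNI project's own types:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // netConf models just the fields of the bridge-plugin config seen in the log.
    type netConf struct {
        CNIVersion       string `json:"cniVersion"`
        Name             string `json:"name"`
        Type             string `json:"type"`
        MTU              int    `json:"mtu"`
        IsGateway        bool   `json:"isGateway"`
        IsDefaultGateway bool   `json:"isDefaultGateway"`
        HairpinMode      bool   `json:"hairpinMode"`
        IPMasq           bool   `json:"ipMasq"`
        IPAM             struct {
            Type   string                `json:"type"`
            Ranges [][]map[string]string `json:"ranges"`
            Routes []map[string]string   `json:"routes"`
        } `json:"ipam"`
    }

    func main() {
        // Copied verbatim from the delegate netconf logged by containerd.
        raw := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

        var c netConf
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("bridge %s: mtu=%d node subnet=%s cluster route=%s\n",
            c.Name, c.MTU, c.IPAM.Ranges[0][0]["subnet"], c.IPAM.Routes[0]["dst"])
        // bridge cbr0: mtu=1450 node subnet=192.168.0.0/24 cluster route=192.168.0.0/17
    }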
May 13 23:55:01.093972 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:55:01.100258 systemd-logind[1461]: Removed session 10.
May 13 23:55:06.093763 systemd[1]: Started sshd@11-147.182.251.203:22-147.75.109.163:41414.service - OpenSSH per-connection server daemon (147.75.109.163:41414).
May 13 23:55:06.174563 sshd[3607]: Accepted publickey for core from 147.75.109.163 port 41414 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:06.177229 sshd-session[3607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:06.191987 systemd-logind[1461]: New session 11 of user core.
May 13 23:55:06.198872 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:55:06.393352 sshd[3611]: Connection closed by 147.75.109.163 port 41414
May 13 23:55:06.394632 sshd-session[3607]: pam_unix(sshd:session): session closed for user core
May 13 23:55:06.403515 systemd[1]: sshd@11-147.182.251.203:22-147.75.109.163:41414.service: Deactivated successfully.
May 13 23:55:06.408531 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:55:06.410350 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit.
May 13 23:55:06.412754 systemd-logind[1461]: Removed session 11.
May 13 23:55:11.410799 systemd[1]: Started sshd@12-147.182.251.203:22-147.75.109.163:51770.service - OpenSSH per-connection server daemon (147.75.109.163:51770).
May 13 23:55:11.487774 sshd[3661]: Accepted publickey for core from 147.75.109.163 port 51770 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:11.489929 sshd-session[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:11.497904 systemd-logind[1461]: New session 12 of user core.
May 13 23:55:11.502764 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:55:11.660780 sshd[3663]: Connection closed by 147.75.109.163 port 51770
May 13 23:55:11.661530 sshd-session[3661]: pam_unix(sshd:session): session closed for user core
May 13 23:55:11.668274 systemd[1]: sshd@12-147.182.251.203:22-147.75.109.163:51770.service: Deactivated successfully.
May 13 23:55:11.671342 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:55:11.672755 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit.
May 13 23:55:11.674218 systemd-logind[1461]: Removed session 12.
May 13 23:55:16.678392 systemd[1]: Started sshd@13-147.182.251.203:22-147.75.109.163:51772.service - OpenSSH per-connection server daemon (147.75.109.163:51772).
May 13 23:55:16.772521 sshd[3699]: Accepted publickey for core from 147.75.109.163 port 51772 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:16.775499 sshd-session[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:16.787528 systemd-logind[1461]: New session 13 of user core.
May 13 23:55:16.793748 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:55:16.952888 sshd[3701]: Connection closed by 147.75.109.163 port 51772
May 13 23:55:16.953728 sshd-session[3699]: pam_unix(sshd:session): session closed for user core
May 13 23:55:16.960163 systemd[1]: sshd@13-147.182.251.203:22-147.75.109.163:51772.service: Deactivated successfully.
May 13 23:55:16.964576 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:55:16.967245 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit.
May 13 23:55:16.969242 systemd-logind[1461]: Removed session 13.
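The SSH blocks above all follow the same shape: a per-connection sshd@N-<local>:22-<remote>:<port>.service unit starts (typical of a socket-activated sshd), the key is accepted and PAM opens the session, systemd-logind allocates session N and its session-N.scope, and on disconnect both units deactivate and the session is removed. A small sketch, illustrative only and assuming journal lines in exactly the textual form shown here, that pairs the "New session" and "Removed session" entries to report how long each session lasted:

# session_durations.py - illustrative sketch; parses text in the exact form
# "May 13 23:55:06.191987 systemd-logind[1461]: New session 11 of user core."
# shown above, not a general-purpose journal parser.
import re
from datetime import datetime

NEW = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
REMOVED = re.compile(r"^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def ts(stamp: str, year: int = 2025) -> datetime:
    # Short journal timestamps omit the year; assume one so arithmetic works.
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        if m := NEW.match(line):
            opened[m.group(2)] = ts(m.group(1))
        elif (m := REMOVED.match(line)) and m.group(2) in opened:
            yield m.group(2), (ts(m.group(1)) - opened.pop(m.group(2))).total_seconds()

if __name__ == "__main__":
    sample = [
        "May 13 23:55:06.191987 systemd-logind[1461]: New session 11 of user core.",
        "May 13 23:55:06.412754 systemd-logind[1461]: Removed session 11.",
    ]
    for sid, seconds in session_durations(sample):
        print(f"session {sid}: {seconds:.3f}s")  # session 11: 0.221s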
May 13 23:55:18.200544 kubelet[2571]: E0513 23:55:18.199771 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:21.974211 systemd[1]: Started sshd@14-147.182.251.203:22-147.75.109.163:44166.service - OpenSSH per-connection server daemon (147.75.109.163:44166).
May 13 23:55:22.049067 sshd[3735]: Accepted publickey for core from 147.75.109.163 port 44166 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:22.051483 sshd-session[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:22.061199 systemd-logind[1461]: New session 14 of user core.
May 13 23:55:22.066850 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:55:22.249372 sshd[3737]: Connection closed by 147.75.109.163 port 44166
May 13 23:55:22.252047 sshd-session[3735]: pam_unix(sshd:session): session closed for user core
May 13 23:55:22.273192 systemd[1]: sshd@14-147.182.251.203:22-147.75.109.163:44166.service: Deactivated successfully.
May 13 23:55:22.277106 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:55:22.279436 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit.
May 13 23:55:22.283062 systemd[1]: Started sshd@15-147.182.251.203:22-147.75.109.163:44180.service - OpenSSH per-connection server daemon (147.75.109.163:44180).
May 13 23:55:22.289171 systemd-logind[1461]: Removed session 14.
May 13 23:55:22.361348 sshd[3748]: Accepted publickey for core from 147.75.109.163 port 44180 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:22.363773 sshd-session[3748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:22.372797 systemd-logind[1461]: New session 15 of user core.
May 13 23:55:22.384976 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 23:55:22.787912 sshd[3751]: Connection closed by 147.75.109.163 port 44180
May 13 23:55:22.789182 sshd-session[3748]: pam_unix(sshd:session): session closed for user core
May 13 23:55:22.811065 systemd[1]: sshd@15-147.182.251.203:22-147.75.109.163:44180.service: Deactivated successfully.
May 13 23:55:22.816918 systemd[1]: session-15.scope: Deactivated successfully.
May 13 23:55:22.819353 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit.
May 13 23:55:22.823261 systemd[1]: Started sshd@16-147.182.251.203:22-147.75.109.163:44196.service - OpenSSH per-connection server daemon (147.75.109.163:44196).
May 13 23:55:22.826263 systemd-logind[1461]: Removed session 15.
May 13 23:55:22.938826 sshd[3760]: Accepted publickey for core from 147.75.109.163 port 44196 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:22.941344 sshd-session[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:22.950513 systemd-logind[1461]: New session 16 of user core.
May 13 23:55:22.959872 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 23:55:24.054315 sshd[3763]: Connection closed by 147.75.109.163 port 44196
May 13 23:55:24.056063 sshd-session[3760]: pam_unix(sshd:session): session closed for user core
May 13 23:55:24.068544 systemd[1]: sshd@16-147.182.251.203:22-147.75.109.163:44196.service: Deactivated successfully.
May 13 23:55:24.077656 systemd[1]: session-16.scope: Deactivated successfully.
May 13 23:55:24.080667 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit.
May 13 23:55:24.088209 systemd[1]: Started sshd@17-147.182.251.203:22-147.75.109.163:44208.service - OpenSSH per-connection server daemon (147.75.109.163:44208).
May 13 23:55:24.089556 systemd-logind[1461]: Removed session 16.
May 13 23:55:24.171639 sshd[3779]: Accepted publickey for core from 147.75.109.163 port 44208 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:24.174601 sshd-session[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:24.184863 systemd-logind[1461]: New session 17 of user core.
May 13 23:55:24.190750 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 23:55:24.514821 sshd[3782]: Connection closed by 147.75.109.163 port 44208
May 13 23:55:24.517000 sshd-session[3779]: pam_unix(sshd:session): session closed for user core
May 13 23:55:24.532167 systemd[1]: sshd@17-147.182.251.203:22-147.75.109.163:44208.service: Deactivated successfully.
May 13 23:55:24.535074 systemd[1]: session-17.scope: Deactivated successfully.
May 13 23:55:24.536341 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit.
May 13 23:55:24.538901 systemd-logind[1461]: Removed session 17.
May 13 23:55:24.541482 systemd[1]: Started sshd@18-147.182.251.203:22-147.75.109.163:44210.service - OpenSSH per-connection server daemon (147.75.109.163:44210).
May 13 23:55:24.605156 sshd[3791]: Accepted publickey for core from 147.75.109.163 port 44210 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:24.606580 sshd-session[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:24.614515 systemd-logind[1461]: New session 18 of user core.
May 13 23:55:24.621397 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 23:55:24.766146 sshd[3794]: Connection closed by 147.75.109.163 port 44210
May 13 23:55:24.767036 sshd-session[3791]: pam_unix(sshd:session): session closed for user core
May 13 23:55:24.772660 systemd[1]: sshd@18-147.182.251.203:22-147.75.109.163:44210.service: Deactivated successfully.
May 13 23:55:24.775885 systemd[1]: session-18.scope: Deactivated successfully.
May 13 23:55:24.778636 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
May 13 23:55:24.780258 systemd-logind[1461]: Removed session 18.
May 13 23:55:26.190633 kubelet[2571]: E0513 23:55:26.190042 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:29.190404 kubelet[2571]: E0513 23:55:29.190332 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:29.790921 systemd[1]: Started sshd@19-147.182.251.203:22-147.75.109.163:50008.service - OpenSSH per-connection server daemon (147.75.109.163:50008).
May 13 23:55:29.893599 sshd[3827]: Accepted publickey for core from 147.75.109.163 port 50008 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:29.896040 sshd-session[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:29.904156 systemd-logind[1461]: New session 19 of user core.
May 13 23:55:29.912840 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 23:55:30.101399 sshd[3829]: Connection closed by 147.75.109.163 port 50008
May 13 23:55:30.102681 sshd-session[3827]: pam_unix(sshd:session): session closed for user core
May 13 23:55:30.113994 systemd[1]: sshd@19-147.182.251.203:22-147.75.109.163:50008.service: Deactivated successfully.
May 13 23:55:30.119489 systemd[1]: session-19.scope: Deactivated successfully.
May 13 23:55:30.121372 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
May 13 23:55:30.123387 systemd-logind[1461]: Removed session 19.
May 13 23:55:35.122041 systemd[1]: Started sshd@20-147.182.251.203:22-147.75.109.163:50012.service - OpenSSH per-connection server daemon (147.75.109.163:50012).
May 13 23:55:35.190754 kubelet[2571]: E0513 23:55:35.190658 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:35.194020 sshd[3863]: Accepted publickey for core from 147.75.109.163 port 50012 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:35.195341 sshd-session[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:35.205462 systemd-logind[1461]: New session 20 of user core.
May 13 23:55:35.208919 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 23:55:35.418396 sshd[3865]: Connection closed by 147.75.109.163 port 50012
May 13 23:55:35.419599 sshd-session[3863]: pam_unix(sshd:session): session closed for user core
May 13 23:55:35.424515 systemd[1]: sshd@20-147.182.251.203:22-147.75.109.163:50012.service: Deactivated successfully.
May 13 23:55:35.429009 systemd[1]: session-20.scope: Deactivated successfully.
May 13 23:55:35.433159 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit.
May 13 23:55:35.434919 systemd-logind[1461]: Removed session 20.
May 13 23:55:40.437784 systemd[1]: Started sshd@21-147.182.251.203:22-147.75.109.163:39052.service - OpenSSH per-connection server daemon (147.75.109.163:39052).
May 13 23:55:40.500724 sshd[3897]: Accepted publickey for core from 147.75.109.163 port 39052 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:40.502485 sshd-session[3897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:40.508502 systemd-logind[1461]: New session 21 of user core.
May 13 23:55:40.519748 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 23:55:40.659260 sshd[3899]: Connection closed by 147.75.109.163 port 39052
May 13 23:55:40.660081 sshd-session[3897]: pam_unix(sshd:session): session closed for user core
May 13 23:55:40.665133 systemd[1]: sshd@21-147.182.251.203:22-147.75.109.163:39052.service: Deactivated successfully.
May 13 23:55:40.668515 systemd[1]: session-21.scope: Deactivated successfully.
May 13 23:55:40.670509 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit.
May 13 23:55:40.672061 systemd-logind[1461]: Removed session 21.
May 13 23:55:43.190055 kubelet[2571]: E0513 23:55:43.189911 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:44.191360 kubelet[2571]: E0513 23:55:44.191046 2571 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
May 13 23:55:45.678027 systemd[1]: Started sshd@22-147.182.251.203:22-147.75.109.163:39060.service - OpenSSH per-connection server daemon (147.75.109.163:39060).
May 13 23:55:45.748742 sshd[3935]: Accepted publickey for core from 147.75.109.163 port 39060 ssh2: RSA SHA256:bC78CM2YHyER82uuK7NAX7heS0tcdIEHhEXL2ubzJPc
May 13 23:55:45.751297 sshd-session[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:55:45.761631 systemd-logind[1461]: New session 22 of user core.
May 13 23:55:45.769766 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 23:55:45.928067 sshd[3937]: Connection closed by 147.75.109.163 port 39060
May 13 23:55:45.928828 sshd-session[3935]: pam_unix(sshd:session): session closed for user core
May 13 23:55:45.936198 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit.
May 13 23:55:45.937016 systemd[1]: sshd@22-147.182.251.203:22-147.75.109.163:39060.service: Deactivated successfully.
May 13 23:55:45.942163 systemd[1]: session-22.scope: Deactivated successfully.
May 13 23:55:45.945340 systemd-logind[1461]: Removed session 22.
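The coredns entry at the top of this span reports podStartSLOduration=10.414249256 for kube-system/coredns-668d6bf9bc-vnrbq. That figure is consistent with the gap between the watchObservedRunningTime and podCreationTimestamp fields printed in the same entry (an interpretation of the fields as logged, not a claim about kubelet's exact accounting), which a short check reproduces:

# slo_check.py - sanity-checks the coredns podStartSLOduration above by
# subtracting podCreationTimestamp from watchObservedRunningTime. Illustrative
# arithmetic only; datetime truncates the nanosecond part to microseconds.
from datetime import datetime, timezone

created = datetime(2025, 5, 13, 23, 54, 12, tzinfo=timezone.utc)           # podCreationTimestamp
watched = datetime(2025, 5, 13, 23, 54, 22, 414249, tzinfo=timezone.utc)   # watchObservedRunningTime

delta = (watched - created).total_seconds()
print(f"{delta:.6f}s")  # 10.414249s, matching podStartSLOduration=10.414249256 to microsecond precision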