Jun 20 19:07:51.041091 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 19:07:51.041150 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:07:51.041174 kernel: BIOS-provided physical RAM map: Jun 20 19:07:51.041186 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 20 19:07:51.041199 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 20 19:07:51.041211 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 20 19:07:51.041227 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jun 20 19:07:51.041240 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jun 20 19:07:51.041254 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 20 19:07:51.041267 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 20 19:07:51.041283 kernel: NX (Execute Disable) protection: active Jun 20 19:07:51.041292 kernel: APIC: Static calls initialized Jun 20 19:07:51.041308 kernel: SMBIOS 2.8 present. Jun 20 19:07:51.041322 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 20 19:07:51.041339 kernel: Hypervisor detected: KVM Jun 20 19:07:51.041353 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 19:07:51.041376 kernel: kvm-clock: using sched offset of 3281770599 cycles Jun 20 19:07:51.041392 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 19:07:51.041408 kernel: tsc: Detected 2494.138 MHz processor Jun 20 19:07:51.041423 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:07:51.041438 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:07:51.041453 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jun 20 19:07:51.041469 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 20 19:07:51.041484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:07:51.041503 kernel: ACPI: Early table checksum verification disabled Jun 20 19:07:51.041517 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jun 20 19:07:51.041533 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041547 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041563 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041578 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 20 19:07:51.041593 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041608 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041623 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041641 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 19:07:51.041656 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 20 19:07:51.041671 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 20 19:07:51.041686 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 20 19:07:51.041701 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 20 19:07:51.041715 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 20 19:07:51.041730 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 20 19:07:51.041752 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 20 19:07:51.041771 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 20 19:07:51.041787 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 20 19:07:51.041803 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 20 19:07:51.041818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 20 19:07:51.041840 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jun 20 19:07:51.041856 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jun 20 19:07:51.041876 kernel: Zone ranges: Jun 20 19:07:51.041892 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:07:51.041924 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jun 20 19:07:51.043959 kernel: Normal empty Jun 20 19:07:51.043977 kernel: Movable zone start for each node Jun 20 19:07:51.043993 kernel: Early memory node ranges Jun 20 19:07:51.044010 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 20 19:07:51.044026 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jun 20 19:07:51.044041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jun 20 19:07:51.044057 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:07:51.044083 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 20 19:07:51.044128 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jun 20 19:07:51.044143 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 20 19:07:51.044164 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 19:07:51.044194 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:07:51.044238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 20 19:07:51.044270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 19:07:51.044286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 19:07:51.044302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 19:07:51.044323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 19:07:51.044338 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:07:51.044355 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 19:07:51.044371 kernel: TSC deadline timer available Jun 20 19:07:51.044386 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 19:07:51.044402 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 19:07:51.044418 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 20 19:07:51.044438 kernel: Booting paravirtualized kernel on KVM Jun 20 19:07:51.044454 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:07:51.044477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 19:07:51.044493 kernel: percpu: Embedded 58 pages/cpu 
s197096 r8192 d32280 u1048576 Jun 20 19:07:51.044509 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 19:07:51.044524 kernel: pcpu-alloc: [0] 0 1 Jun 20 19:07:51.044545 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 20 19:07:51.044569 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:07:51.044587 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:07:51.044602 kernel: random: crng init done Jun 20 19:07:51.044622 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:07:51.044638 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:07:51.044653 kernel: Fallback order for Node 0: 0 Jun 20 19:07:51.044670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jun 20 19:07:51.044686 kernel: Policy zone: DMA32 Jun 20 19:07:51.044702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:07:51.044718 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 127196K reserved, 0K cma-reserved) Jun 20 19:07:51.044735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 19:07:51.044751 kernel: Kernel/User page tables isolation: enabled Jun 20 19:07:51.044772 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 19:07:51.044788 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 19:07:51.044804 kernel: Dynamic Preempt: voluntary Jun 20 19:07:51.044820 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:07:51.044837 kernel: rcu: RCU event tracing is enabled. Jun 20 19:07:51.044853 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 19:07:51.044870 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:07:51.044887 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:07:51.044930 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:07:51.044951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:07:51.044968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 19:07:51.044984 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 19:07:51.045000 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 20 19:07:51.045019 kernel: Console: colour VGA+ 80x25 Jun 20 19:07:51.045035 kernel: printk: console [tty0] enabled Jun 20 19:07:51.045051 kernel: printk: console [ttyS0] enabled Jun 20 19:07:51.045067 kernel: ACPI: Core revision 20230628 Jun 20 19:07:51.045083 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 20 19:07:51.045105 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:07:51.045121 kernel: x2apic enabled Jun 20 19:07:51.045137 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:07:51.045153 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:07:51.045169 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jun 20 19:07:51.045185 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jun 20 19:07:51.045204 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 20 19:07:51.045221 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 20 19:07:51.045254 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:07:51.045271 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 19:07:51.045289 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 19:07:51.045311 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 20 19:07:51.045328 kernel: Spectre V2 : User space: Vulnerable Jun 20 19:07:51.045356 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:07:51.045372 kernel: MDS: Mitigation: Clear CPU buffers Jun 20 19:07:51.045396 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:07:51.045413 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:07:51.045447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:07:51.045464 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:07:51.045477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:07:51.045491 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:07:51.045504 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 20 19:07:51.045517 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:07:51.045533 kernel: pid_max: default: 32768 minimum: 301 Jun 20 19:07:51.045550 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 19:07:51.045571 kernel: landlock: Up and running. Jun 20 19:07:51.045588 kernel: SELinux: Initializing. Jun 20 19:07:51.045604 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:07:51.045621 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:07:51.045638 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 20 19:07:51.045655 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:07:51.045671 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:07:51.045688 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 19:07:51.045721 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jun 20 19:07:51.045744 kernel: signal: max sigframe size: 1776 Jun 20 19:07:51.045761 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:07:51.045778 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:07:51.045795 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:07:51.045811 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:07:51.045828 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:07:51.045844 kernel: .... node #0, CPUs: #1 Jun 20 19:07:51.045860 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:07:51.045880 kernel: smpboot: Max logical packages: 1 Jun 20 19:07:51.045916 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jun 20 19:07:51.045932 kernel: devtmpfs: initialized Jun 20 19:07:51.045949 kernel: x86/mm: Memory block size: 128MB Jun 20 19:07:51.045966 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:07:51.045982 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 19:07:51.045999 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:07:51.046015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:07:51.046031 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:07:51.046048 kernel: audit: type=2000 audit(1750446470.386:1): state=initialized audit_enabled=0 res=1 Jun 20 19:07:51.046069 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:07:51.046085 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:07:51.046102 kernel: cpuidle: using governor menu Jun 20 19:07:51.046118 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:07:51.046135 kernel: dca service started, version 1.12.1 Jun 20 19:07:51.046151 kernel: PCI: Using configuration type 1 for base access Jun 20 19:07:51.046168 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:07:51.046185 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:07:51.046201 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:07:51.046222 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:07:51.046239 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:07:51.046256 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:07:51.046272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:07:51.046288 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 19:07:51.046305 kernel: ACPI: Interpreter enabled Jun 20 19:07:51.046321 kernel: ACPI: PM: (supports S0 S5) Jun 20 19:07:51.046361 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:07:51.046378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:07:51.046400 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:07:51.046417 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 20 19:07:51.046433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 19:07:51.046768 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:07:51.049066 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 20 19:07:51.049255 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 20 19:07:51.049277 kernel: acpiphp: Slot [3] registered Jun 20 19:07:51.049303 kernel: acpiphp: Slot [4] registered Jun 20 19:07:51.049320 kernel: acpiphp: Slot [5] registered Jun 20 19:07:51.049336 kernel: acpiphp: Slot [6] registered Jun 20 19:07:51.049353 kernel: acpiphp: Slot [7] registered Jun 20 19:07:51.049370 kernel: acpiphp: Slot [8] registered Jun 20 19:07:51.049387 kernel: acpiphp: Slot [9] registered Jun 20 19:07:51.049404 kernel: acpiphp: Slot [10] registered Jun 20 19:07:51.049422 kernel: acpiphp: Slot [11] registered Jun 20 19:07:51.049438 kernel: acpiphp: Slot [12] registered Jun 20 19:07:51.049455 kernel: acpiphp: Slot [13] registered Jun 20 19:07:51.049477 kernel: acpiphp: Slot [14] registered Jun 20 19:07:51.049494 kernel: acpiphp: Slot [15] registered Jun 20 19:07:51.049511 kernel: acpiphp: Slot [16] registered Jun 20 19:07:51.049527 kernel: acpiphp: Slot [17] registered Jun 20 19:07:51.049544 kernel: acpiphp: Slot [18] registered Jun 20 19:07:51.049561 kernel: acpiphp: Slot [19] registered Jun 20 19:07:51.049578 kernel: acpiphp: Slot [20] registered Jun 20 19:07:51.049594 kernel: acpiphp: Slot [21] registered Jun 20 19:07:51.049611 kernel: acpiphp: Slot [22] registered Jun 20 19:07:51.049633 kernel: acpiphp: Slot [23] registered Jun 20 19:07:51.049650 kernel: acpiphp: Slot [24] registered Jun 20 19:07:51.049667 kernel: acpiphp: Slot [25] registered Jun 20 19:07:51.049684 kernel: acpiphp: Slot [26] registered Jun 20 19:07:51.049701 kernel: acpiphp: Slot [27] registered Jun 20 19:07:51.049718 kernel: acpiphp: Slot [28] registered Jun 20 19:07:51.049735 kernel: acpiphp: Slot [29] registered Jun 20 19:07:51.049752 kernel: acpiphp: Slot [30] registered Jun 20 19:07:51.049770 kernel: acpiphp: Slot [31] registered Jun 20 19:07:51.049787 kernel: PCI host bridge to bus 0000:00 Jun 20 19:07:51.050021 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:07:51.050170 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 19:07:51.050314 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Jun 20 19:07:51.050458 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 20 19:07:51.050599 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 20 19:07:51.050699 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 19:07:51.050861 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 20 19:07:51.053172 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 20 19:07:51.053380 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 20 19:07:51.053543 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jun 20 19:07:51.053701 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 20 19:07:51.053865 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 20 19:07:51.056134 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 20 19:07:51.056351 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 20 19:07:51.056559 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jun 20 19:07:51.056773 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jun 20 19:07:51.056991 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 20 19:07:51.057127 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 20 19:07:51.057294 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 20 19:07:51.057519 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 20 19:07:51.057669 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 20 19:07:51.057778 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 20 19:07:51.057891 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jun 20 19:07:51.058032 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 20 19:07:51.058169 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:07:51.058336 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 20 19:07:51.058502 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jun 20 19:07:51.058632 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jun 20 19:07:51.058754 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 20 19:07:51.058894 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 20 19:07:51.061286 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jun 20 19:07:51.061478 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jun 20 19:07:51.061601 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 20 19:07:51.061753 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jun 20 19:07:51.061867 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jun 20 19:07:51.064179 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jun 20 19:07:51.064403 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 20 19:07:51.064598 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jun 20 19:07:51.064770 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jun 20 19:07:51.064976 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jun 20 19:07:51.065160 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 20 19:07:51.065338 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jun 20 19:07:51.065500 kernel: pci 0000:00:07.0: reg 0x10: [io 
0xc080-0xc0ff] Jun 20 19:07:51.065660 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jun 20 19:07:51.065818 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jun 20 19:07:51.066052 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jun 20 19:07:51.066221 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jun 20 19:07:51.066401 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 20 19:07:51.066420 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 19:07:51.066435 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 19:07:51.066448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:07:51.066461 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 19:07:51.066474 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 20 19:07:51.066488 kernel: iommu: Default domain type: Translated Jun 20 19:07:51.066510 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:07:51.066525 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:07:51.066538 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:07:51.066552 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 20 19:07:51.066565 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jun 20 19:07:51.066743 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 20 19:07:51.068973 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 20 19:07:51.069291 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:07:51.069339 kernel: vgaarb: loaded Jun 20 19:07:51.069355 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 20 19:07:51.069369 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 20 19:07:51.069382 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 19:07:51.069396 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:07:51.069411 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:07:51.069424 kernel: pnp: PnP ACPI init Jun 20 19:07:51.069438 kernel: pnp: PnP ACPI: found 4 devices Jun 20 19:07:51.069451 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:07:51.069469 kernel: NET: Registered PF_INET protocol family Jun 20 19:07:51.069483 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:07:51.069496 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 19:07:51.069510 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:07:51.069524 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:07:51.069539 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 19:07:51.069556 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 19:07:51.069569 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:07:51.069583 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:07:51.069601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:07:51.069614 kernel: NET: Registered PF_XDP protocol family Jun 20 19:07:51.069830 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 19:07:51.072104 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 19:07:51.072276 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff 
window] Jun 20 19:07:51.072381 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 20 19:07:51.072514 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 20 19:07:51.072695 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 20 19:07:51.072886 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:07:51.072943 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 20 19:07:51.073121 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38077 usecs Jun 20 19:07:51.073147 kernel: PCI: CLS 0 bytes, default 64 Jun 20 19:07:51.073161 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:07:51.073176 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jun 20 19:07:51.073189 kernel: Initialise system trusted keyrings Jun 20 19:07:51.073203 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:07:51.073228 kernel: Key type asymmetric registered Jun 20 19:07:51.073241 kernel: Asymmetric key parser 'x509' registered Jun 20 19:07:51.073254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 20 19:07:51.073267 kernel: io scheduler mq-deadline registered Jun 20 19:07:51.073281 kernel: io scheduler kyber registered Jun 20 19:07:51.073294 kernel: io scheduler bfq registered Jun 20 19:07:51.073307 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:07:51.073323 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 20 19:07:51.073337 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 20 19:07:51.073351 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 20 19:07:51.073373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:07:51.073387 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:07:51.073401 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 19:07:51.073416 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:07:51.073429 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:07:51.073694 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 20 19:07:51.073888 kernel: rtc_cmos 00:03: registered as rtc0 Jun 20 19:07:51.074097 kernel: rtc_cmos 00:03: setting system clock to 2025-06-20T19:07:50 UTC (1750446470) Jun 20 19:07:51.074132 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jun 20 19:07:51.074263 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 20 19:07:51.074283 kernel: intel_pstate: CPU model not supported Jun 20 19:07:51.074294 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:07:51.074304 kernel: Segment Routing with IPv6 Jun 20 19:07:51.074358 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:07:51.074370 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:07:51.074384 kernel: Key type dns_resolver registered Jun 20 19:07:51.074408 kernel: IPI shorthand broadcast: enabled Jun 20 19:07:51.074423 kernel: sched_clock: Marking stable (1177018271, 107070367)->(1418577927, -134489289) Jun 20 19:07:51.074440 kernel: registered taskstats version 1 Jun 20 19:07:51.074450 kernel: Loading compiled-in X.509 certificates Jun 20 19:07:51.074460 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 19:07:51.074472 kernel: Key type .fscrypt registered Jun 20 19:07:51.074486 kernel: Key type fscrypt-provisioning registered Jun 
20 19:07:51.074502 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:07:51.074518 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:07:51.074537 kernel: ima: No architecture policies found Jun 20 19:07:51.074552 kernel: clk: Disabling unused clocks Jun 20 19:07:51.074565 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 19:07:51.074578 kernel: Write protecting the kernel read-only data: 38912k Jun 20 19:07:51.074588 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 19:07:51.074627 kernel: Run /init as init process Jun 20 19:07:51.074647 kernel: with arguments: Jun 20 19:07:51.074663 kernel: /init Jun 20 19:07:51.074678 kernel: with environment: Jun 20 19:07:51.074710 kernel: HOME=/ Jun 20 19:07:51.074722 kernel: TERM=linux Jun 20 19:07:51.074732 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:07:51.074750 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:07:51.074766 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:07:51.074787 systemd[1]: Detected virtualization kvm. Jun 20 19:07:51.074801 systemd[1]: Detected architecture x86-64. Jun 20 19:07:51.074817 systemd[1]: Running in initrd. Jun 20 19:07:51.074831 systemd[1]: No hostname configured, using default hostname. Jun 20 19:07:51.074842 systemd[1]: Hostname set to . Jun 20 19:07:51.074854 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:07:51.074870 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:07:51.074886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:07:51.076970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:07:51.076997 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:07:51.077009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:07:51.077031 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:07:51.077042 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:07:51.077055 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:07:51.077067 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:07:51.077085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:07:51.077101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:07:51.077123 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:07:51.077141 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:07:51.077159 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:07:51.077180 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:07:51.077198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:07:51.077212 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jun 20 19:07:51.077231 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:07:51.077245 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:07:51.077260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:07:51.077274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:07:51.077290 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:07:51.077305 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:07:51.077320 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:07:51.077335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:07:51.077356 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:07:51.077375 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:07:51.077390 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:07:51.077405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:07:51.077419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:07:51.077436 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:07:51.077447 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:07:51.077464 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:07:51.077559 systemd-journald[182]: Collecting audit messages is disabled. Jun 20 19:07:51.077592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:07:51.077604 systemd-journald[182]: Journal started Jun 20 19:07:51.077639 systemd-journald[182]: Runtime Journal (/run/log/journal/949c6562318f4fc8b7524d68cb7a9223) is 4.9M, max 39.3M, 34.4M free. Jun 20 19:07:51.044619 systemd-modules-load[183]: Inserted module 'overlay' Jun 20 19:07:51.093698 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:07:51.093739 kernel: Bridge firewalling registered Jun 20 19:07:51.090826 systemd-modules-load[183]: Inserted module 'br_netfilter' Jun 20 19:07:51.100916 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:07:51.101564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:07:51.103275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:07:51.108144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:07:51.116194 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:07:51.125320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:07:51.131270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:07:51.145026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:07:51.150972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:07:51.160329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:07:51.167724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 20 19:07:51.175230 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:07:51.177091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:07:51.187189 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:07:51.198231 dracut-cmdline[217]: dracut-dracut-053 Jun 20 19:07:51.202855 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 19:07:51.237619 systemd-resolved[220]: Positive Trust Anchors: Jun 20 19:07:51.238471 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:07:51.238538 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:07:51.245859 systemd-resolved[220]: Defaulting to hostname 'linux'. Jun 20 19:07:51.247455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:07:51.248126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:07:51.312949 kernel: SCSI subsystem initialized Jun 20 19:07:51.325937 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:07:51.339979 kernel: iscsi: registered transport (tcp) Jun 20 19:07:51.369062 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:07:51.369177 kernel: QLogic iSCSI HBA Driver Jun 20 19:07:51.437929 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:07:51.447183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:07:51.480372 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:07:51.481988 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:07:51.482018 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 19:07:51.530947 kernel: raid6: avx2x4 gen() 15569 MB/s Jun 20 19:07:51.547947 kernel: raid6: avx2x2 gen() 15870 MB/s Jun 20 19:07:51.565020 kernel: raid6: avx2x1 gen() 11982 MB/s Jun 20 19:07:51.565121 kernel: raid6: using algorithm avx2x2 gen() 15870 MB/s Jun 20 19:07:51.583100 kernel: raid6: .... xor() 16194 MB/s, rmw enabled Jun 20 19:07:51.583204 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:07:51.608952 kernel: xor: automatically using best checksumming function avx Jun 20 19:07:51.789989 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:07:51.806878 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:07:51.813304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:07:51.840861 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jun 20 19:07:51.850523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:07:51.859165 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:07:51.887988 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jun 20 19:07:51.935197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:07:51.942293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:07:52.032595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:07:52.040160 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:07:52.080036 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:07:52.081574 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:07:52.083442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:07:52.084700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:07:52.095514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:07:52.126984 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:07:52.179375 kernel: scsi host0: Virtio SCSI HBA Jun 20 19:07:52.181926 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 20 19:07:52.201152 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 20 19:07:52.201467 kernel: libata version 3.00 loaded. Jun 20 19:07:52.207953 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:07:52.215151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:07:52.215286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:07:52.216714 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:07:52.217188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:07:52.217396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:07:52.219830 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:07:52.229456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:07:52.230431 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:07:52.234088 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 20 19:07:52.242941 kernel: AVX2 version of gcm_enc/dec engaged. Jun 20 19:07:52.243022 kernel: AES CTR mode by8 optimization enabled Jun 20 19:07:52.251486 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:07:52.251591 kernel: GPT:9289727 != 125829119 Jun 20 19:07:52.251613 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:07:52.251632 kernel: GPT:9289727 != 125829119 Jun 20 19:07:52.251650 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jun 20 19:07:52.251682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:07:52.257947 kernel: scsi host1: ata_piix Jun 20 19:07:52.271929 kernel: scsi host2: ata_piix Jun 20 19:07:52.272321 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jun 20 19:07:52.272344 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jun 20 19:07:52.279270 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 20 19:07:52.279764 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jun 20 19:07:52.291933 kernel: ACPI: bus type USB registered Jun 20 19:07:52.294017 kernel: usbcore: registered new interface driver usbfs Jun 20 19:07:52.294088 kernel: usbcore: registered new interface driver hub Jun 20 19:07:52.294110 kernel: usbcore: registered new device driver usb Jun 20 19:07:52.321054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:07:52.327215 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:07:52.347161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:07:52.473971 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (449) Jun 20 19:07:52.474057 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (452) Jun 20 19:07:52.492709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 20 19:07:52.509975 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 20 19:07:52.510230 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 20 19:07:52.510365 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 20 19:07:52.510500 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 20 19:07:52.512916 kernel: hub 1-0:1.0: USB hub found Jun 20 19:07:52.513212 kernel: hub 1-0:1.0: 2 ports detected Jun 20 19:07:52.516572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 20 19:07:52.525742 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 20 19:07:52.526699 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 20 19:07:52.536791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:07:52.542208 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:07:52.550539 disk-uuid[551]: Primary Header is updated. Jun 20 19:07:52.550539 disk-uuid[551]: Secondary Entries is updated. Jun 20 19:07:52.550539 disk-uuid[551]: Secondary Header is updated. Jun 20 19:07:52.568948 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:07:53.583036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:07:53.583112 disk-uuid[552]: The operation has completed successfully. Jun 20 19:07:53.637685 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:07:53.637911 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:07:53.692226 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jun 20 19:07:53.697251 sh[563]: Success Jun 20 19:07:53.715018 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 20 19:07:53.799257 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:07:53.800583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:07:53.807184 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:07:53.828152 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91 Jun 20 19:07:53.828225 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:07:53.829003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 19:07:53.829933 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 19:07:53.830969 kernel: BTRFS info (device dm-0): using free space tree Jun 20 19:07:53.840402 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:07:53.841674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:07:53.847218 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:07:53.850163 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:07:53.872228 kernel: BTRFS info (device vda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:07:53.872303 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:07:53.873356 kernel: BTRFS info (device vda6): using free space tree Jun 20 19:07:53.878985 kernel: BTRFS info (device vda6): auto enabling async discard Jun 20 19:07:53.884964 kernel: BTRFS info (device vda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:07:53.886619 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:07:53.895239 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:07:54.022531 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:07:54.033270 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:07:54.075381 ignition[643]: Ignition 2.20.0 Jun 20 19:07:54.075401 ignition[643]: Stage: fetch-offline Jun 20 19:07:54.075511 ignition[643]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.075528 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.075747 ignition[643]: parsed url from cmdline: "" Jun 20 19:07:54.079229 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:07:54.075753 ignition[643]: no config URL provided Jun 20 19:07:54.075761 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:07:54.075777 ignition[643]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:07:54.075787 ignition[643]: failed to fetch config: resource requires networking Jun 20 19:07:54.076151 ignition[643]: Ignition finished successfully Jun 20 19:07:54.084159 systemd-networkd[746]: lo: Link UP Jun 20 19:07:54.084171 systemd-networkd[746]: lo: Gained carrier Jun 20 19:07:54.086890 systemd-networkd[746]: Enumeration completed Jun 20 19:07:54.087186 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 20 19:07:54.087442 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 20 19:07:54.087450 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 20 19:07:54.087777 systemd[1]: Reached target network.target - Network. Jun 20 19:07:54.088585 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:07:54.088591 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:07:54.089491 systemd-networkd[746]: eth0: Link UP Jun 20 19:07:54.089497 systemd-networkd[746]: eth0: Gained carrier Jun 20 19:07:54.089509 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 20 19:07:54.093498 systemd-networkd[746]: eth1: Link UP Jun 20 19:07:54.093503 systemd-networkd[746]: eth1: Gained carrier Jun 20 19:07:54.093519 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:07:54.097191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 19:07:54.107063 systemd-networkd[746]: eth0: DHCPv4 address 146.190.167.30/20, gateway 146.190.160.1 acquired from 169.254.169.253 Jun 20 19:07:54.111067 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253 Jun 20 19:07:54.121557 ignition[755]: Ignition 2.20.0 Jun 20 19:07:54.121571 ignition[755]: Stage: fetch Jun 20 19:07:54.121837 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.121850 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.121991 ignition[755]: parsed url from cmdline: "" Jun 20 19:07:54.121995 ignition[755]: no config URL provided Jun 20 19:07:54.122001 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:07:54.122014 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:07:54.122041 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 20 19:07:54.138879 ignition[755]: GET result: OK Jun 20 19:07:54.139074 ignition[755]: parsing config with SHA512: ebbee3d203083eb9f76c681e473bc6faf608ecf85b375b7a2e538748ac7b3c027d8c1ab2c1e508d7851a7da17c13406500f238caa07b4984a2eb1652ce66ba25 Jun 20 19:07:54.144518 unknown[755]: fetched base config from "system" Jun 20 19:07:54.144530 unknown[755]: fetched base config from "system" Jun 20 19:07:54.144932 ignition[755]: fetch: fetch complete Jun 20 19:07:54.144538 unknown[755]: fetched user config from "digitalocean" Jun 20 19:07:54.144938 ignition[755]: fetch: fetch passed Jun 20 19:07:54.145000 ignition[755]: Ignition finished successfully Jun 20 19:07:54.147585 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 19:07:54.154206 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:07:54.182508 ignition[763]: Ignition 2.20.0 Jun 20 19:07:54.182527 ignition[763]: Stage: kargs Jun 20 19:07:54.182769 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.182782 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.183973 ignition[763]: kargs: kargs passed Jun 20 19:07:54.185750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jun 20 19:07:54.184048 ignition[763]: Ignition finished successfully Jun 20 19:07:54.200007 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:07:54.222342 ignition[769]: Ignition 2.20.0 Jun 20 19:07:54.222360 ignition[769]: Stage: disks Jun 20 19:07:54.222680 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.222697 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.224255 ignition[769]: disks: disks passed Jun 20 19:07:54.225864 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:07:54.224342 ignition[769]: Ignition finished successfully Jun 20 19:07:54.231835 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:07:54.232747 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:07:54.233697 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:07:54.234624 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:07:54.235387 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:07:54.241242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:07:54.275621 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 20 19:07:54.279165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:07:54.286225 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:07:54.406931 kernel: EXT4-fs (vda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none. Jun 20 19:07:54.406617 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:07:54.408377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:07:54.415116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:07:54.431243 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:07:54.440224 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jun 20 19:07:54.444288 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 19:07:54.445123 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786) Jun 20 19:07:54.447564 kernel: BTRFS info (device vda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:07:54.447032 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:07:54.447084 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:07:54.451940 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:07:54.452007 kernel: BTRFS info (device vda6): using free space tree Jun 20 19:07:54.456409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:07:54.462256 kernel: BTRFS info (device vda6): auto enabling async discard Jun 20 19:07:54.462192 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:07:54.468525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:07:54.552457 coreos-metadata[789]: Jun 20 19:07:54.552 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 20 19:07:54.561591 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:07:54.568855 coreos-metadata[789]: Jun 20 19:07:54.568 INFO Fetch successful Jun 20 19:07:54.574649 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:07:54.579695 coreos-metadata[788]: Jun 20 19:07:54.575 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 20 19:07:54.579033 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 19:07:54.581990 coreos-metadata[789]: Jun 20 19:07:54.575 INFO wrote hostname ci-4230.2.0-6-80f26ce993 to /sysroot/etc/hostname Jun 20 19:07:54.584820 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:07:54.587414 coreos-metadata[788]: Jun 20 19:07:54.587 INFO Fetch successful Jun 20 19:07:54.597867 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jun 20 19:07:54.598496 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jun 20 19:07:54.601202 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:07:54.727678 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:07:54.732110 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:07:54.734186 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:07:54.750957 kernel: BTRFS info (device vda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:07:54.778323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:07:54.795259 ignition[907]: INFO : Ignition 2.20.0 Jun 20 19:07:54.795259 ignition[907]: INFO : Stage: mount Jun 20 19:07:54.796543 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.796543 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.798251 ignition[907]: INFO : mount: mount passed Jun 20 19:07:54.798251 ignition[907]: INFO : Ignition finished successfully Jun 20 19:07:54.799473 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:07:54.814204 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:07:54.828104 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:07:54.838276 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:07:54.850966 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918) Jun 20 19:07:54.854385 kernel: BTRFS info (device vda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 19:07:54.854460 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:07:54.854474 kernel: BTRFS info (device vda6): using free space tree Jun 20 19:07:54.860962 kernel: BTRFS info (device vda6): auto enabling async discard Jun 20 19:07:54.863662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:07:54.898231 ignition[935]: INFO : Ignition 2.20.0 Jun 20 19:07:54.898231 ignition[935]: INFO : Stage: files Jun 20 19:07:54.899300 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:54.899300 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:54.900208 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:07:54.900781 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:07:54.900781 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:07:54.904464 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:07:54.905433 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:07:54.906276 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:07:54.906101 unknown[935]: wrote ssh authorized keys file for user: core Jun 20 19:07:54.908100 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:07:54.908977 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 20 19:07:54.947654 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:07:55.113689 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:07:55.113689 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:07:55.115603 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:07:55.120220 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 20 19:07:55.246131 systemd-networkd[746]: eth1: Gained IPv6LL Jun 20 19:07:55.886359 systemd-networkd[746]: eth0: Gained IPv6LL Jun 20 19:07:55.904281 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:07:58.524654 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:07:58.524654 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 19:07:58.526712 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:07:58.527390 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:07:58.527390 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 19:07:58.527390 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:07:58.527390 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:07:58.529440 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:07:58.529440 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:07:58.529440 ignition[935]: INFO : files: files passed Jun 20 19:07:58.529440 ignition[935]: INFO : Ignition finished successfully Jun 20 19:07:58.529641 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:07:58.552765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:07:58.555116 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:07:58.559448 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:07:58.560395 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:07:58.575571 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:07:58.575571 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:07:58.578647 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:07:58.581366 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:07:58.582028 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:07:58.586116 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:07:58.638006 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jun 20 19:07:58.638151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:07:58.639252 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:07:58.639711 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:07:58.640598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:07:58.646147 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:07:58.664752 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:07:58.670182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:07:58.690982 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:07:58.692141 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:07:58.692616 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:07:58.693001 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:07:58.693185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:07:58.693810 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:07:58.694284 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:07:58.695259 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:07:58.696079 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:07:58.696962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:07:58.697759 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:07:58.698496 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:07:58.699173 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:07:58.699829 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:07:58.700509 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:07:58.701226 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:07:58.701470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:07:58.702582 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:07:58.703263 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:07:58.703888 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:07:58.704026 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:07:58.704637 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:07:58.704840 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:07:58.705917 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:07:58.706129 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:07:58.706831 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:07:58.706965 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:07:58.707664 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 19:07:58.707863 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 20 19:07:58.722281 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:07:58.727240 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:07:58.728212 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:07:58.728460 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:07:58.731362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:07:58.731570 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:07:58.749034 ignition[987]: INFO : Ignition 2.20.0 Jun 20 19:07:58.749034 ignition[987]: INFO : Stage: umount Jun 20 19:07:58.749034 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:07:58.749034 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 20 19:07:58.749034 ignition[987]: INFO : umount: umount passed Jun 20 19:07:58.749034 ignition[987]: INFO : Ignition finished successfully Jun 20 19:07:58.748207 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:07:58.748361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:07:58.749396 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:07:58.749529 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:07:58.755223 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:07:58.755394 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:07:58.756650 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:07:58.756728 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:07:58.757929 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 19:07:58.757988 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 19:07:58.759649 systemd[1]: Stopped target network.target - Network. Jun 20 19:07:58.767467 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:07:58.767576 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:07:58.768367 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:07:58.769152 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:07:58.775028 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:07:58.775632 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:07:58.776716 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:07:58.778620 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:07:58.778698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:07:58.779532 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:07:58.780768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:07:58.788963 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:07:58.793494 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:07:58.796737 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:07:58.796862 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:07:58.797630 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:07:58.798540 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jun 20 19:07:58.802601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:07:58.803377 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:07:58.804446 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:07:58.811141 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:07:58.812207 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:07:58.812363 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:07:58.823743 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:07:58.824858 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:07:58.825056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:07:58.828004 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:07:58.828467 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:07:58.828767 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:07:58.831039 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:07:58.831102 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:07:58.832257 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:07:58.832345 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:07:58.843755 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:07:58.844378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:07:58.844524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:07:58.845168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:07:58.845245 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:07:58.846115 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:07:58.846202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:07:58.846935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:07:58.852479 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:07:58.865314 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:07:58.865574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:07:58.867363 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:07:58.867607 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:07:58.869337 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:07:58.869475 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:07:58.870722 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:07:58.870795 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:07:58.871480 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:07:58.871559 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:07:58.872962 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:07:58.873058 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 20 19:07:58.874389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:07:58.874481 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:07:58.887581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:07:58.889264 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:07:58.889379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:07:58.892133 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 19:07:58.892248 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:07:58.893630 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:07:58.893720 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:07:58.895136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:07:58.895211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:07:58.897637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:07:58.897744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:07:58.898991 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:07:58.906281 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:07:58.919282 systemd[1]: Switching root. Jun 20 19:07:58.958749 systemd-journald[182]: Journal stopped Jun 20 19:08:00.557262 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jun 20 19:08:00.557362 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:08:00.557394 kernel: SELinux: policy capability open_perms=1 Jun 20 19:08:00.557417 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:08:00.557452 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:08:00.557472 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:08:00.557495 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:08:00.557524 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:08:00.557604 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:08:00.557629 kernel: audit: type=1403 audit(1750446479.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:08:00.557658 systemd[1]: Successfully loaded SELinux policy in 51.576ms. Jun 20 19:08:00.557698 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.311ms. Jun 20 19:08:00.557724 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:08:00.557750 systemd[1]: Detected virtualization kvm. Jun 20 19:08:00.557774 systemd[1]: Detected architecture x86-64. Jun 20 19:08:00.557797 systemd[1]: Detected first boot. Jun 20 19:08:00.557822 systemd[1]: Hostname set to . Jun 20 19:08:00.557846 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:08:00.557869 zram_generator::config[1031]: No configuration found. 
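After the switch to the real root, the journal is handed over, the SELinux policy is loaded, and systemd notes that this is the first boot and that the machine ID is being initialized from the VM UUID. A rough approximation of that derivation, assuming (as seems likely on this KVM guest) the UUID is exposed through the DMI product_uuid attribute; the real logic lives inside systemd and covers many more sources and edge cases:

```python
#!/usr/bin/env python3
"""Approximate "Initializing machine ID from VM UUID" from the log above.

Assumption: on this KVM guest the UUID is readable from
/sys/class/dmi/id/product_uuid (root may be required). The sketch strips
dashes and lowercases it into the 32-hex-digit machine-id form; systemd's
real implementation is more involved.
"""
from pathlib import Path

PRODUCT_UUID = Path("/sys/class/dmi/id/product_uuid")

def machine_id_from_vm_uuid(path: Path = PRODUCT_UUID) -> str:
    uuid = path.read_text().strip()
    candidate = uuid.replace("-", "").lower()
    if len(candidate) != 32 or any(c not in "0123456789abcdef" for c in candidate):
        raise ValueError(f"unexpected UUID format: {uuid!r}")
    return candidate

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())
```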
Jun 20 19:08:00.558075 kernel: Guest personality initialized and is inactive Jun 20 19:08:00.558109 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:08:00.558133 kernel: Initialized host personality Jun 20 19:08:00.558155 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:08:00.558177 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:08:00.558202 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:08:00.558226 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:08:00.558249 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:08:00.558286 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:08:00.558309 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:08:00.558333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:08:00.558355 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:08:00.558377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:08:00.558401 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:08:00.558424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:08:00.558446 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:08:00.558480 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:08:00.558503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:08:00.558525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:08:00.558550 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:08:00.558573 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:08:00.558597 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:08:00.558626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:08:00.558650 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:08:00.558673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:08:00.558697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:08:00.558720 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:08:00.558747 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:08:00.558780 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:08:00.558803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:08:00.558826 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:08:00.558850 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:08:00.558876 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:08:00.558935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:08:00.558960 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jun 20 19:08:00.558984 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:08:00.559003 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:08:00.559023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:08:00.559042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:08:00.559061 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:08:00.559081 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:08:00.559107 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:08:00.559127 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:08:00.559148 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:00.559167 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:08:00.559187 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:08:00.559207 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:08:00.559228 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:08:00.559249 systemd[1]: Reached target machines.target - Containers. Jun 20 19:08:00.559274 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:08:00.559295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:00.559314 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:08:00.559333 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:08:00.559352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:00.559373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:00.559412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:00.559433 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:08:00.559454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:00.559482 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:08:00.559502 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:08:00.559522 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:08:00.559542 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:08:00.559567 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:08:00.559589 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:00.559612 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:08:00.559632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jun 20 19:08:00.559658 kernel: fuse: init (API version 7.39) Jun 20 19:08:00.559678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:08:00.559698 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:08:00.559718 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:08:00.559738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:08:00.559758 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:08:00.559786 systemd[1]: Stopped verity-setup.service. Jun 20 19:08:00.559808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:00.559829 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:08:00.559847 kernel: loop: module loaded Jun 20 19:08:00.559872 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:08:00.559893 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:08:00.559936 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:08:00.559957 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:08:00.559976 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:08:00.559996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:08:00.560015 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:08:00.560042 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:08:00.560062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:00.560090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:00.560110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:00.560130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:00.560151 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:08:00.560170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:08:00.560191 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:00.560210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:00.560231 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:08:00.560252 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:08:00.560278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:00.560299 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:08:00.560319 kernel: ACPI: bus type drm_connector registered Jun 20 19:08:00.560339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:00.560360 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:00.560389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:08:00.560409 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:08:00.560431 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
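The modprobe@*.service units above pull in configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel confirms fuse and loop initialising. A quick way to verify which of those ended up loaded is to read /proc/modules; a minimal sketch (built-in modules do not appear there, so a missing name is not necessarily an error):

```python
#!/usr/bin/env python3
"""Check which of the modules requested by the modprobe@ units above are
listed in /proc/modules. Built-ins won't show up there, so "not listed"
does not always mean "not available".
"""

REQUESTED = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

def loaded_modules(path: str = "/proc/modules") -> set[str]:
    with open(path) as fh:
        # Each line starts with the module name, followed by size, refcount, ...
        return {line.split()[0] for line in fh if line.strip()}

if __name__ == "__main__":
    present = loaded_modules()
    for name in REQUESTED:
        status = "loaded" if name in present else "not listed (possibly built-in)"
        print(f"{name:>10}: {status}")
```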
Jun 20 19:08:00.560451 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:08:00.560477 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:08:00.560501 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:08:00.560522 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:08:00.560544 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:08:00.560565 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:08:00.560586 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:08:00.560609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:00.560629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:08:00.560656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:00.560745 systemd-journald[1105]: Collecting audit messages is disabled. Jun 20 19:08:00.560792 systemd-journald[1105]: Journal started Jun 20 19:08:00.560833 systemd-journald[1105]: Runtime Journal (/run/log/journal/949c6562318f4fc8b7524d68cb7a9223) is 4.9M, max 39.3M, 34.4M free. Jun 20 19:08:00.043623 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:08:00.059662 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 20 19:08:00.060411 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:08:00.569985 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:08:00.578864 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:08:00.588066 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:08:00.595941 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:08:00.601241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:08:00.613573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:08:00.615971 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:08:00.667173 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:08:00.701251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:08:00.702800 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:08:00.711938 kernel: loop0: detected capacity change from 0 to 8 Jun 20 19:08:00.713205 systemd-tmpfiles[1123]: ACLs are not supported, ignoring. Jun 20 19:08:00.713230 systemd-tmpfiles[1123]: ACLs are not supported, ignoring. Jun 20 19:08:00.717203 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:08:00.720677 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:08:00.721605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:08:00.753508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
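systemd-journald reports its runtime journal under /run/log/journal with a 39.3M cap, and the flush service started above later moves entries to the persistent journal under /var/log/journal (whose sizes appear in the next block of entries). A small sketch that walks a journal directory and totals the space its files use, which is what those usage figures summarise:

```python
#!/usr/bin/env python3
"""Sum the on-disk size of journal files under a journal directory,
mirroring the usage numbers journald prints above. Reading
/var/log/journal typically needs root or systemd-journal membership.
"""
import os

def journal_usage(root: str) -> int:
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".journal") or name.endswith(".journal~"):
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # a file may rotate away while we walk
    return total

if __name__ == "__main__":
    for root in ("/run/log/journal", "/var/log/journal"):
        if os.path.isdir(root):
            print(f"{root}: {journal_usage(root) / 1024 / 1024:.1f} MiB")
```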
Jun 20 19:08:00.759993 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:08:00.770708 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:08:00.779632 systemd-journald[1105]: Time spent on flushing to /var/log/journal/949c6562318f4fc8b7524d68cb7a9223 is 60.994ms for 1008 entries. Jun 20 19:08:00.779632 systemd-journald[1105]: System Journal (/var/log/journal/949c6562318f4fc8b7524d68cb7a9223) is 8M, max 195.6M, 187.6M free. Jun 20 19:08:00.864747 systemd-journald[1105]: Received client request to flush runtime journal. Jun 20 19:08:00.864905 kernel: loop1: detected capacity change from 0 to 138176 Jun 20 19:08:00.806858 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:08:00.872144 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:08:00.887956 kernel: loop2: detected capacity change from 0 to 147912 Jun 20 19:08:00.935960 kernel: loop3: detected capacity change from 0 to 224512 Jun 20 19:08:00.955964 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:08:00.958493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:08:00.973256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:08:00.981224 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 19:08:01.020496 kernel: loop4: detected capacity change from 0 to 8 Jun 20 19:08:01.032936 kernel: loop5: detected capacity change from 0 to 138176 Jun 20 19:08:01.066251 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:08:01.075788 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 20 19:08:01.079035 kernel: loop6: detected capacity change from 0 to 147912 Jun 20 19:08:01.119932 kernel: loop7: detected capacity change from 0 to 224512 Jun 20 19:08:01.113767 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jun 20 19:08:01.113801 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jun 20 19:08:01.148548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:08:01.189693 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 20 19:08:01.190706 (sd-merge)[1184]: Merged extensions into '/usr'. Jun 20 19:08:01.207841 systemd[1]: Reload requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:08:01.208108 systemd[1]: Reloading... Jun 20 19:08:01.460955 zram_generator::config[1215]: No configuration found. Jun 20 19:08:01.498062 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:08:01.713124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:01.814294 systemd[1]: Reloading finished in 605 ms. Jun 20 19:08:01.830312 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:08:01.836618 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:08:01.850247 systemd[1]: Starting ensure-sysext.service... 
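The (sd-merge) lines show systemd-sysext activating the containerd-flatcar, docker-flatcar, kubernetes and oem-digitalocean extension images and merging them into /usr; the loopN capacity changes above likely correspond to those images being attached. A sketch that lists the extension images referenced from /etc/extensions, such as the kubernetes.raw symlink written by the files stage earlier in this log:

```python
#!/usr/bin/env python3
"""List sysext images referenced from /etc/extensions, e.g. the
kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
link created by the Ignition files stage earlier in this log.
This only inspects the directory; `systemd-sysext status` is the real tool.
"""
import os

EXT_DIR = "/etc/extensions"

if __name__ == "__main__":
    if not os.path.isdir(EXT_DIR):
        print(f"{EXT_DIR} does not exist on this system")
    else:
        for name in sorted(os.listdir(EXT_DIR)):
            path = os.path.join(EXT_DIR, name)
            target = os.path.realpath(path) if os.path.islink(path) else path
            print(f"{name} -> {target}")
```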
Jun 20 19:08:01.855251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:08:01.919590 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:08:01.919611 systemd[1]: Reloading... Jun 20 19:08:01.962275 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:08:01.962601 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:08:01.964097 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:08:01.964610 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jun 20 19:08:01.964714 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jun 20 19:08:01.976303 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:01.976322 systemd-tmpfiles[1258]: Skipping /boot Jun 20 19:08:02.015537 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:08:02.015556 systemd-tmpfiles[1258]: Skipping /boot Jun 20 19:08:02.143940 zram_generator::config[1290]: No configuration found. Jun 20 19:08:02.310835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:02.407192 systemd[1]: Reloading finished in 486 ms. Jun 20 19:08:02.426421 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:08:02.440619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:08:02.455298 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:08:02.459361 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:08:02.464009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:08:02.468930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:08:02.474458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:08:02.479211 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:08:02.484678 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.485016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:02.494051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:08:02.499352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:02.505005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:02.506655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:02.506844 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
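The systemd-tmpfiles warnings above flag duplicate entries for /root, /var/log/journal and /var/lib/systemd across provision.conf, systemd-flatcar.conf and systemd.conf; they are harmless, since the later duplicates are simply ignored. A small sketch that reproduces the gist of that check by scanning tmpfiles.d fragments for paths declared more than once (the path is the second whitespace-separated field of each non-comment line); the real tool also honours /run/tmpfiles.d and /etc overrides, which this sketch does not:

```python
#!/usr/bin/env python3
"""Report paths declared more than once across tmpfiles.d fragments,
the situation behind the "Duplicate line for path ..." warnings above.
Only /etc/tmpfiles.d and /usr/lib/tmpfiles.d are scanned; override
semantics and quoted paths are ignored for brevity.
"""
import glob
import os
from collections import defaultdict

DIRS = ["/etc/tmpfiles.d", "/usr/lib/tmpfiles.d"]

def collect_paths() -> dict[str, list[str]]:
    seen: dict[str, list[str]] = defaultdict(list)
    for directory in DIRS:
        for conf in sorted(glob.glob(os.path.join(directory, "*.conf"))):
            with open(conf, errors="replace") as fh:
                for line in fh:
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append(conf)
    return seen

if __name__ == "__main__":
    for path, sources in sorted(collect_paths().items()):
        if len(sources) > 1:
            print(f"{path}: declared in {', '.join(sources)}")
```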
Jun 20 19:08:02.507107 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.511856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.513291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:02.513508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:02.513612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:02.513719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.527266 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:08:02.532064 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:08:02.536224 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.537439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:02.546939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:08:02.547677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:02.547836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:02.553220 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:08:02.554397 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.563663 systemd[1]: Finished ensure-sysext.service. Jun 20 19:08:02.578313 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:08:02.589307 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:08:02.589614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:08:02.593672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:02.594030 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:02.595414 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:08:02.605391 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Jun 20 19:08:02.631575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:02.631819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:02.632610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:02.634460 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 20 19:08:02.634662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:02.635541 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:02.640693 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:08:02.675131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:08:02.691597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:08:02.693677 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:08:02.698392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:02.710392 augenrules[1386]: No rules Jun 20 19:08:02.715195 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:08:02.715571 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:08:02.731756 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:08:02.885575 systemd-resolved[1336]: Positive Trust Anchors: Jun 20 19:08:02.885592 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:08:02.885641 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:08:02.906033 systemd-resolved[1336]: Using system hostname 'ci-4230.2.0-6-80f26ce993'. Jun 20 19:08:02.916150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:08:02.916739 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:08:02.928705 systemd-networkd[1375]: lo: Link UP Jun 20 19:08:02.928718 systemd-networkd[1375]: lo: Gained carrier Jun 20 19:08:02.930408 systemd-networkd[1375]: Enumeration completed Jun 20 19:08:02.930524 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:08:02.935076 systemd[1]: Reached target network.target - Network. Jun 20 19:08:02.941350 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:08:02.954238 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:08:02.970655 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Jun 20 19:08:02.978174 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 20 19:08:02.978668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:02.978883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:08:02.981246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 20 19:08:02.989214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:08:02.992209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:08:02.994130 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:08:02.994189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:08:02.994235 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:08:02.994258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:08:03.005494 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:08:03.006351 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:08:03.023286 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:08:03.032996 kernel: ISO 9660 Extensions: RRIP_1991A Jun 20 19:08:03.036140 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 20 19:08:03.051574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:08:03.053011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:08:03.053818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:08:03.054108 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:08:03.054792 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:08:03.055103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:08:03.057090 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:08:03.058012 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:08:03.058079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:08:03.076067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1372) Jun 20 19:08:03.120813 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-26:15:d0:b3:b5:41.network. Jun 20 19:08:03.124417 systemd-networkd[1375]: eth1: Link UP Jun 20 19:08:03.125182 systemd-networkd[1375]: eth1: Gained carrier Jun 20 19:08:03.132145 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Jun 20 19:08:03.165448 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-b6:40:0b:46:d9:53.network. Jun 20 19:08:03.166341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:08:03.170358 systemd-networkd[1375]: eth0: Link UP Jun 20 19:08:03.170683 systemd-networkd[1375]: eth0: Gained carrier Jun 20 19:08:03.179795 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
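systemd-networkd brings up eth1 and eth0 from the generated /run/systemd/network/10-<mac>.network files and both links gain carrier (their IPv6LL events were already logged back in the initrd). A quick way to confirm link state from userspace is sysfs; a minimal sketch:

```python
#!/usr/bin/env python3
"""Show MAC address, operational state and carrier for the interfaces
configured above, read straight from sysfs. Reading 'carrier' fails with
EINVAL while an interface is administratively down, hence the fallback.
"""
from pathlib import Path

def link_state(iface: str) -> dict[str, str]:
    base = Path("/sys/class/net") / iface
    info = {}
    for attr in ("address", "operstate", "carrier"):
        try:
            info[attr] = (base / attr).read_text().strip()
        except OSError:
            info[attr] = "n/a"
    return info

if __name__ == "__main__":
    for iface in ("eth0", "eth1", "lo"):
        if (Path("/sys/class/net") / iface).exists():
            print(iface, link_state(iface))
```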
Jun 20 19:08:03.196948 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:08:03.203145 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:08:03.206818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:08:03.208364 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 20 19:08:03.261931 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jun 20 19:08:03.340068 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 20 19:08:03.341509 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 20 19:08:03.345032 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:08:03.346428 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:08:03.349944 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 19:08:03.350044 kernel: [drm] features: -context_init Jun 20 19:08:03.352957 kernel: [drm] number of scanouts: 1 Jun 20 19:08:03.353099 kernel: [drm] number of cap sets: 0 Jun 20 19:08:03.353128 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 20 19:08:03.360081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:03.362952 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 20 19:08:03.363066 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 19:08:03.375734 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 19:08:03.401259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:03.401926 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:03.417527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:03.438880 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:08:03.440698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:03.508335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:08:03.573078 kernel: EDAC MC: Ver: 3.0.0 Jun 20 19:08:03.600056 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 19:08:03.611272 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 19:08:03.615507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:08:03.633870 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:03.665553 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 19:08:03.666636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:08:03.666877 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:08:03.668795 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:08:03.668980 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:08:03.669301 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:08:03.669482 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:08:03.669565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 20 19:08:03.669651 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:08:03.669684 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:08:03.669744 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:08:03.671715 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:08:03.674433 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:08:03.679675 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:08:03.682858 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:08:03.684571 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:08:03.695648 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:08:03.697945 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:08:03.711249 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 19:08:03.714134 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:08:03.716148 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:08:03.717756 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:08:03.719190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:03.719241 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:08:03.719895 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 19:08:03.727197 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:08:03.737843 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 19:08:03.745110 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:08:03.753062 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:08:03.756366 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:08:03.757195 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:08:03.763320 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:08:03.774154 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:08:03.780175 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:08:03.796272 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:08:03.802249 dbus-daemon[1455]: [system] SELinux support is enabled Jun 20 19:08:03.812331 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:08:03.816777 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:08:03.817715 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:08:03.828412 systemd[1]: Starting update-engine.service - Update Engine... 
Jun 20 19:08:03.840077 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:08:03.845203 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:08:03.859095 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 19:08:03.876649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:08:03.876740 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:08:03.879033 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:08:03.883940 jq[1468]: true Jun 20 19:08:03.902990 jq[1456]: false Jun 20 19:08:03.904468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:08:03.906282 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:08:03.906884 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:08:03.907128 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:08:03.922296 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 20 19:08:03.922344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:08:03.940159 jq[1478]: true Jun 20 19:08:03.949457 coreos-metadata[1454]: Jun 20 19:08:03.948 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 20 19:08:03.955489 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:08:03.958080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:08:03.967422 extend-filesystems[1457]: Found loop4 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found loop5 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found loop6 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found loop7 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda1 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda2 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda3 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found usr Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda4 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda6 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda7 Jun 20 19:08:03.967422 extend-filesystems[1457]: Found vda9 Jun 20 19:08:03.967422 extend-filesystems[1457]: Checking size of /dev/vda9 Jun 20 19:08:04.047575 coreos-metadata[1454]: Jun 20 19:08:03.976 INFO Fetch successful Jun 20 19:08:04.047739 update_engine[1467]: I20250620 19:08:03.983997 1467 main.cc:92] Flatcar Update Engine starting Jun 20 19:08:04.047739 update_engine[1467]: I20250620 19:08:03.995155 1467 update_check_scheduler.cc:74] Next update check in 9m51s Jun 20 19:08:04.056327 tar[1479]: linux-amd64/LICENSE Jun 20 19:08:04.056327 tar[1479]: linux-amd64/helm Jun 20 19:08:03.990776 systemd[1]: Started update-engine.service - Update Engine. 
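
The coreos-metadata unit above fetches DigitalOcean's link-local endpoint http://169.254.169.254/metadata/v1.json, which only answers from inside the droplet. A minimal Go sketch of the same request, decoding into a generic map because the exact response schema is not shown in the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the coreos-metadata log line above; it is only
        // reachable over the droplet's link-local network.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://169.254.169.254/metadata/v1.json")
        if err != nil {
            log.Fatalf("metadata fetch failed: %v", err)
        }
        defer resp.Body.Close()

        // Decode into a generic map rather than a fixed struct, since the
        // log does not show the full field set.
        var meta map[string]any
        if err := json.NewDecoder(resp.Body).Decode(&meta); err != nil {
            log.Fatalf("decode failed: %v", err)
        }
        for key := range meta {
            fmt.Println(key)
        }
    }
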
Jun 20 19:08:03.998580 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:08:04.010205 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:08:04.103967 extend-filesystems[1457]: Resized partition /dev/vda9 Jun 20 19:08:04.112940 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Jun 20 19:08:04.119780 systemd-logind[1465]: New seat seat0. Jun 20 19:08:04.122435 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button) Jun 20 19:08:04.122596 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:08:04.128147 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:08:04.134976 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 20 19:08:04.138143 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 19:08:04.158812 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:08:04.243191 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1383) Jun 20 19:08:04.199879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:08:04.282317 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:08:04.283625 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:08:04.299534 systemd[1]: Starting sshkeys.service... Jun 20 19:08:04.409761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 19:08:04.418853 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 20 19:08:04.423782 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 19:08:04.442479 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 20 19:08:04.442479 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 20 19:08:04.442479 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 20 19:08:04.454672 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Jun 20 19:08:04.454672 extend-filesystems[1457]: Found vdb Jun 20 19:08:04.458364 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:08:04.458598 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:08:04.541295 coreos-metadata[1525]: Jun 20 19:08:04.540 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 20 19:08:04.555165 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:08:04.562756 coreos-metadata[1525]: Jun 20 19:08:04.562 INFO Fetch successful Jun 20 19:08:04.578602 unknown[1525]: wrote ssh authorized keys file for user: core Jun 20 19:08:04.624753 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:08:04.628646 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 19:08:04.636395 systemd[1]: Finished sshkeys.service. 
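
The resize pass above grows the root filesystem from 553472 to 15121403 blocks of 4 KiB each. A quick sketch of that block-count arithmetic, using only the figures reported by resize2fs and the ext4 driver:

    package main

    import "fmt"

    func main() {
        const blockSize = 4096 // "(4k) blocks" per the EXT4-fs / resize2fs messages above
        const oldBlocks, newBlocks = 553472, 15121403

        toGiB := func(blocks int64) float64 {
            return float64(blocks) * blockSize / (1 << 30)
        }
        fmt.Printf("before: %.2f GiB\n", toGiB(oldBlocks)) // ~2.11 GiB
        fmt.Printf("after:  %.2f GiB\n", toGiB(newBlocks)) // ~57.68 GiB
    }
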
Jun 20 19:08:04.718495 containerd[1485]: time="2025-06-20T19:08:04.718369269Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 19:08:04.782189 systemd-networkd[1375]: eth0: Gained IPv6LL Jun 20 19:08:04.790146 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:08:04.793812 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:08:04.801373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:04.815638 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:08:04.821106 containerd[1485]: time="2025-06-20T19:08:04.821039354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.826249 containerd[1485]: time="2025-06-20T19:08:04.826186760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:04.826407 containerd[1485]: time="2025-06-20T19:08:04.826390231Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 19:08:04.826489 containerd[1485]: time="2025-06-20T19:08:04.826475098Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 19:08:04.826707 containerd[1485]: time="2025-06-20T19:08:04.826690369Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 19:08:04.826797 containerd[1485]: time="2025-06-20T19:08:04.826780506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827018 containerd[1485]: time="2025-06-20T19:08:04.826995201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827102 containerd[1485]: time="2025-06-20T19:08:04.827083598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827933 containerd[1485]: time="2025-06-20T19:08:04.827580817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827933 containerd[1485]: time="2025-06-20T19:08:04.827603040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827933 containerd[1485]: time="2025-06-20T19:08:04.827618196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827933 containerd[1485]: time="2025-06-20T19:08:04.827629225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.827933 containerd[1485]: time="2025-06-20T19:08:04.827719611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jun 20 19:08:04.828414 containerd[1485]: time="2025-06-20T19:08:04.828385058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 19:08:04.829076 containerd[1485]: time="2025-06-20T19:08:04.828709434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 19:08:04.829076 containerd[1485]: time="2025-06-20T19:08:04.828732305Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 19:08:04.829076 containerd[1485]: time="2025-06-20T19:08:04.828883703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 19:08:04.829076 containerd[1485]: time="2025-06-20T19:08:04.829021264Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835107550Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835203316Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835311601Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835361891Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835389004Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.835639221Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836055329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836260323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836287979Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836312992Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836337294Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836358092Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836402855Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jun 20 19:08:04.837988 containerd[1485]: time="2025-06-20T19:08:04.836428950Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836451622Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836470596Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836494167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836511439Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836544134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836564255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836584500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836604925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.838739 containerd[1485]: time="2025-06-20T19:08:04.836624894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840586141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840657302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840677571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840699198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840730196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840756649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840777324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840832332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840866536Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840930579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840950539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.840961951Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.841018830Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 19:08:04.842939 containerd[1485]: time="2025-06-20T19:08:04.841048456Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841067142Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841087558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841102231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841120697Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841131497Z" level=info msg="NRI interface is disabled by configuration." Jun 20 19:08:04.843585 containerd[1485]: time="2025-06-20T19:08:04.841144156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 19:08:04.843825 containerd[1485]: time="2025-06-20T19:08:04.841524708Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 19:08:04.843825 containerd[1485]: time="2025-06-20T19:08:04.841605777Z" level=info msg="Connect containerd service" Jun 20 19:08:04.843825 containerd[1485]: time="2025-06-20T19:08:04.841691416Z" level=info msg="using legacy CRI server" Jun 20 19:08:04.843825 containerd[1485]: time="2025-06-20T19:08:04.841703953Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:08:04.843825 containerd[1485]: time="2025-06-20T19:08:04.841827592Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 19:08:04.853095 containerd[1485]: time="2025-06-20T19:08:04.851933033Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:08:04.853095 
containerd[1485]: time="2025-06-20T19:08:04.853012932Z" level=info msg="Start subscribing containerd event" Jun 20 19:08:04.853095 containerd[1485]: time="2025-06-20T19:08:04.853107054Z" level=info msg="Start recovering state" Jun 20 19:08:04.853331 containerd[1485]: time="2025-06-20T19:08:04.853228526Z" level=info msg="Start event monitor" Jun 20 19:08:04.853331 containerd[1485]: time="2025-06-20T19:08:04.853277895Z" level=info msg="Start snapshots syncer" Jun 20 19:08:04.853331 containerd[1485]: time="2025-06-20T19:08:04.853294954Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:08:04.853331 containerd[1485]: time="2025-06-20T19:08:04.853308360Z" level=info msg="Start streaming server" Jun 20 19:08:04.856662 containerd[1485]: time="2025-06-20T19:08:04.854300559Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:08:04.856662 containerd[1485]: time="2025-06-20T19:08:04.854407473Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:08:04.856662 containerd[1485]: time="2025-06-20T19:08:04.854495644Z" level=info msg="containerd successfully booted in 0.139198s" Jun 20 19:08:04.854698 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:08:04.911428 systemd-networkd[1375]: eth1: Gained IPv6LL Jun 20 19:08:04.923056 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:08:05.250579 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:08:05.321709 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:08:05.333562 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:08:05.344418 systemd[1]: Started sshd@0-146.190.167.30:22-139.178.68.195:35398.service - OpenSSH per-connection server daemon (139.178.68.195:35398). Jun 20 19:08:05.388841 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:08:05.389168 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:08:05.404509 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:08:05.475628 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:08:05.482066 tar[1479]: linux-amd64/README.md Jun 20 19:08:05.506447 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:08:05.521426 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:08:05.524033 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:08:05.527133 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:08:05.537516 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 35398 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:05.540482 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:05.557429 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:08:05.566516 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:08:05.573317 systemd-logind[1465]: New session 1 of user core. Jun 20 19:08:05.603092 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:08:05.615465 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:08:05.628377 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:08:05.633451 systemd-logind[1465]: New session c1 of user core. 
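
containerd reports above that it is serving on /run/containerd/containerd.sock (version v1.7.23, revision 9b2ad77…). A minimal sketch of querying that socket with the containerd Go client, assuming the github.com/containerd/containerd module is available on the build host:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path taken from the "serving..." log lines above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "default")
        version, err := client.Version(ctx)
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        // Expect something like v1.7.23 / 9b2ad776... per the startup message.
        fmt.Println(version.Version, version.Revision)
    }
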
Jun 20 19:08:05.844099 systemd[1579]: Queued start job for default target default.target. Jun 20 19:08:05.854472 systemd[1579]: Created slice app.slice - User Application Slice. Jun 20 19:08:05.854806 systemd[1579]: Reached target paths.target - Paths. Jun 20 19:08:05.855149 systemd[1579]: Reached target timers.target - Timers. Jun 20 19:08:05.860135 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:08:05.887065 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:08:05.887418 systemd[1579]: Reached target sockets.target - Sockets. Jun 20 19:08:05.887560 systemd[1579]: Reached target basic.target - Basic System. Jun 20 19:08:05.887606 systemd[1579]: Reached target default.target - Main User Target. Jun 20 19:08:05.887638 systemd[1579]: Startup finished in 244ms. Jun 20 19:08:05.887890 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:08:05.900206 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:08:05.987505 systemd[1]: Started sshd@1-146.190.167.30:22-139.178.68.195:35410.service - OpenSSH per-connection server daemon (139.178.68.195:35410). Jun 20 19:08:06.088520 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 35410 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:06.090865 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:06.098999 systemd-logind[1465]: New session 2 of user core. Jun 20 19:08:06.102196 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:08:06.170322 sshd[1592]: Connection closed by 139.178.68.195 port 35410 Jun 20 19:08:06.171430 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Jun 20 19:08:06.185558 systemd[1]: sshd@1-146.190.167.30:22-139.178.68.195:35410.service: Deactivated successfully. Jun 20 19:08:06.189190 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 19:08:06.192668 systemd-logind[1465]: Session 2 logged out. Waiting for processes to exit. Jun 20 19:08:06.201474 systemd[1]: Started sshd@2-146.190.167.30:22-139.178.68.195:35420.service - OpenSSH per-connection server daemon (139.178.68.195:35420). Jun 20 19:08:06.209821 systemd-logind[1465]: Removed session 2. Jun 20 19:08:06.257416 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 35420 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:06.260047 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:06.269045 systemd-logind[1465]: New session 3 of user core. Jun 20 19:08:06.275283 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:08:06.352005 sshd[1600]: Connection closed by 139.178.68.195 port 35420 Jun 20 19:08:06.353715 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Jun 20 19:08:06.358636 systemd[1]: sshd@2-146.190.167.30:22-139.178.68.195:35420.service: Deactivated successfully. Jun 20 19:08:06.361538 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 19:08:06.364094 systemd-logind[1465]: Session 3 logged out. Waiting for processes to exit. Jun 20 19:08:06.365421 systemd-logind[1465]: Removed session 3. Jun 20 19:08:06.426149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:06.427622 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jun 20 19:08:06.434090 systemd[1]: Startup finished in 1.361s (kernel) + 8.331s (initrd) + 7.385s (userspace) = 17.078s. Jun 20 19:08:06.440018 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:07.140151 kubelet[1610]: E0620 19:08:07.140073 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:07.143114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:07.143294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:07.144104 systemd[1]: kubelet.service: Consumed 1.362s CPU time, 269M memory peak. Jun 20 19:08:10.525253 systemd-timesyncd[1352]: Contacted time server 45.79.35.159:123 (1.flatcar.pool.ntp.org). Jun 20 19:08:10.525346 systemd-timesyncd[1352]: Initial clock synchronization to Fri 2025-06-20 19:08:10.524992 UTC. Jun 20 19:08:10.525458 systemd-resolved[1336]: Clock change detected. Flushing caches. Jun 20 19:08:17.536433 systemd[1]: Started sshd@3-146.190.167.30:22-139.178.68.195:47648.service - OpenSSH per-connection server daemon (139.178.68.195:47648). Jun 20 19:08:17.584430 sshd[1622]: Accepted publickey for core from 139.178.68.195 port 47648 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:17.586587 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:17.592977 systemd-logind[1465]: New session 4 of user core. Jun 20 19:08:17.599220 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:08:17.661812 sshd[1624]: Connection closed by 139.178.68.195 port 47648 Jun 20 19:08:17.662620 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jun 20 19:08:17.680956 systemd[1]: sshd@3-146.190.167.30:22-139.178.68.195:47648.service: Deactivated successfully. Jun 20 19:08:17.683758 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:08:17.686295 systemd-logind[1465]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:08:17.692805 systemd[1]: Started sshd@4-146.190.167.30:22-139.178.68.195:47658.service - OpenSSH per-connection server daemon (139.178.68.195:47658). Jun 20 19:08:17.694625 systemd-logind[1465]: Removed session 4. Jun 20 19:08:17.745361 sshd[1629]: Accepted publickey for core from 139.178.68.195 port 47658 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:17.747332 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:17.757149 systemd-logind[1465]: New session 5 of user core. Jun 20 19:08:17.763240 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:08:17.824724 sshd[1632]: Connection closed by 139.178.68.195 port 47658 Jun 20 19:08:17.826109 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Jun 20 19:08:17.837885 systemd[1]: sshd@4-146.190.167.30:22-139.178.68.195:47658.service: Deactivated successfully. Jun 20 19:08:17.840682 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:08:17.842774 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. 
Jun 20 19:08:17.851355 systemd[1]: Started sshd@5-146.190.167.30:22-139.178.68.195:47674.service - OpenSSH per-connection server daemon (139.178.68.195:47674). Jun 20 19:08:17.853634 systemd-logind[1465]: Removed session 5. Jun 20 19:08:17.902795 sshd[1637]: Accepted publickey for core from 139.178.68.195 port 47674 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:17.904964 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:17.912743 systemd-logind[1465]: New session 6 of user core. Jun 20 19:08:17.922323 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:08:17.991886 sshd[1640]: Connection closed by 139.178.68.195 port 47674 Jun 20 19:08:17.990418 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jun 20 19:08:18.009855 systemd[1]: sshd@5-146.190.167.30:22-139.178.68.195:47674.service: Deactivated successfully. Jun 20 19:08:18.012724 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:08:18.015294 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:08:18.022393 systemd[1]: Started sshd@6-146.190.167.30:22-139.178.68.195:47688.service - OpenSSH per-connection server daemon (139.178.68.195:47688). Jun 20 19:08:18.024244 systemd-logind[1465]: Removed session 6. Jun 20 19:08:18.081476 sshd[1645]: Accepted publickey for core from 139.178.68.195 port 47688 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:08:18.083450 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:08:18.090143 systemd-logind[1465]: New session 7 of user core. Jun 20 19:08:18.102327 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:08:18.179687 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:08:18.180202 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:08:18.558239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:08:18.567547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:18.736294 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:08:18.745921 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:08:18.753279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:18.763395 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:18.827861 kubelet[1674]: E0620 19:08:18.826132 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:18.829853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:18.830061 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:18.830889 systemd[1]: kubelet.service: Consumed 196ms CPU time, 108.5M memory peak. 
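
Both kubelet starts so far fail the same way: /var/lib/kubelet/config.yaml does not exist yet (it is typically generated later, for example by kubeadm), so systemd keeps counting restarts. A minimal sketch of the pre-flight check implied by that error, using only the path from the log:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // Path taken from the kubelet error messages above.
        const path = "/var/lib/kubelet/config.yaml"

        _, err := os.Stat(path)
        switch {
        case err == nil:
            fmt.Println("kubelet config present:", path)
        case errors.Is(err, fs.ErrNotExist):
            // This is the state the log shows: the unit fails until the file appears.
            fmt.Println("kubelet config missing:", path)
        default:
            fmt.Println("stat failed:", err)
        }
    }
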
Jun 20 19:08:19.246337 dockerd[1671]: time="2025-06-20T19:08:19.246123539Z" level=info msg="Starting up" Jun 20 19:08:19.372442 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4200650439-merged.mount: Deactivated successfully. Jun 20 19:08:19.386242 systemd[1]: var-lib-docker-metacopy\x2dcheck145310880-merged.mount: Deactivated successfully. Jun 20 19:08:19.422347 dockerd[1671]: time="2025-06-20T19:08:19.422201365Z" level=info msg="Loading containers: start." Jun 20 19:08:19.640036 kernel: Initializing XFRM netlink socket Jun 20 19:08:19.765726 systemd-networkd[1375]: docker0: Link UP Jun 20 19:08:19.801318 dockerd[1671]: time="2025-06-20T19:08:19.801222294Z" level=info msg="Loading containers: done." Jun 20 19:08:19.819052 dockerd[1671]: time="2025-06-20T19:08:19.818513033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:08:19.819052 dockerd[1671]: time="2025-06-20T19:08:19.818662777Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 19:08:19.819052 dockerd[1671]: time="2025-06-20T19:08:19.818808056Z" level=info msg="Daemon has completed initialization" Jun 20 19:08:19.856281 dockerd[1671]: time="2025-06-20T19:08:19.855794410Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:08:19.856423 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:08:20.366084 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1207673100-merged.mount: Deactivated successfully. Jun 20 19:08:20.807693 containerd[1485]: time="2025-06-20T19:08:20.806909477Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:08:20.813129 systemd-resolved[1336]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jun 20 19:08:21.365334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482782818.mount: Deactivated successfully. 
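
dockerd reports "API listen on /run/docker.sock" above. A small standard-library sketch of querying that Engine API socket from Go, hitting the /version endpoint and decoding into a generic map rather than assuming the full response schema:

    package main

    import (
        "context"
        "encoding/json"
        "fmt"
        "log"
        "net"
        "net/http"
        "time"
    )

    func main() {
        // Dial the unix socket from the dockerd log line above; the HTTP host
        // name is a placeholder, since routing happens over the socket.
        transport := &http.Transport{
            DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "unix", "/run/docker.sock")
            },
        }
        client := &http.Client{Transport: transport, Timeout: 5 * time.Second}

        resp, err := client.Get("http://docker/version")
        if err != nil {
            log.Fatalf("request failed: %v", err)
        }
        defer resp.Body.Close()

        var info map[string]any
        if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
            log.Fatalf("decode failed: %v", err)
        }
        // Expect the 27.3.1 version string the daemon logged above.
        fmt.Println(info["Version"])
    }
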
Jun 20 19:08:22.636926 containerd[1485]: time="2025-06-20T19:08:22.636053313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:22.637954 containerd[1485]: time="2025-06-20T19:08:22.637894054Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 20 19:08:22.638842 containerd[1485]: time="2025-06-20T19:08:22.638790999Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:22.641633 containerd[1485]: time="2025-06-20T19:08:22.641562693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:22.643139 containerd[1485]: time="2025-06-20T19:08:22.642913040Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.835946109s" Jun 20 19:08:22.643139 containerd[1485]: time="2025-06-20T19:08:22.642961281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:08:22.643578 containerd[1485]: time="2025-06-20T19:08:22.643543647Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:08:23.865365 systemd-resolved[1336]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
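
The kube-apiserver pull above reports 28795845 bytes in 1.835946109s; the later pulls log similar byte/duration pairs. A quick sketch of turning such a pair into effective throughput:

    package main

    import "fmt"

    func main() {
        // Figures from the kube-apiserver PullImage lines above.
        const bytesPulled = 28795845
        const seconds = 1.835946109

        bytesPerSec := float64(bytesPulled) / seconds
        fmt.Printf("%.1f MiB/s\n", bytesPerSec/(1<<20)) // roughly 15 MiB/s for this pull
    }
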
Jun 20 19:08:24.099064 containerd[1485]: time="2025-06-20T19:08:24.098849907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:24.100757 containerd[1485]: time="2025-06-20T19:08:24.100681285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 20 19:08:24.101554 containerd[1485]: time="2025-06-20T19:08:24.100846533Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:24.105171 containerd[1485]: time="2025-06-20T19:08:24.105070115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:24.107828 containerd[1485]: time="2025-06-20T19:08:24.106997267Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.463413788s" Jun 20 19:08:24.107828 containerd[1485]: time="2025-06-20T19:08:24.107064983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:08:24.108363 containerd[1485]: time="2025-06-20T19:08:24.108296836Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:08:25.284143 containerd[1485]: time="2025-06-20T19:08:25.284061306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:25.285452 containerd[1485]: time="2025-06-20T19:08:25.285396447Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 20 19:08:25.286369 containerd[1485]: time="2025-06-20T19:08:25.285830743Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:25.288991 containerd[1485]: time="2025-06-20T19:08:25.288951955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:25.290528 containerd[1485]: time="2025-06-20T19:08:25.290474473Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.182121295s" Jun 20 19:08:25.290629 containerd[1485]: time="2025-06-20T19:08:25.290530476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:08:25.291827 
containerd[1485]: time="2025-06-20T19:08:25.291798570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:08:26.427498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699303517.mount: Deactivated successfully. Jun 20 19:08:26.988985 containerd[1485]: time="2025-06-20T19:08:26.987913908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:26.990673 containerd[1485]: time="2025-06-20T19:08:26.990608497Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 20 19:08:26.991846 containerd[1485]: time="2025-06-20T19:08:26.991792869Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:26.995951 containerd[1485]: time="2025-06-20T19:08:26.995212077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:26.996354 containerd[1485]: time="2025-06-20T19:08:26.996176185Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.704204915s" Jun 20 19:08:26.996354 containerd[1485]: time="2025-06-20T19:08:26.996223106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:08:26.997035 containerd[1485]: time="2025-06-20T19:08:26.996987312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:08:26.998956 systemd-resolved[1336]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jun 20 19:08:27.515441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081553020.mount: Deactivated successfully. 
Jun 20 19:08:28.457723 containerd[1485]: time="2025-06-20T19:08:28.457136596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.459018 containerd[1485]: time="2025-06-20T19:08:28.458954652Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:08:28.460174 containerd[1485]: time="2025-06-20T19:08:28.460118428Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.464595 containerd[1485]: time="2025-06-20T19:08:28.464533658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.467237 containerd[1485]: time="2025-06-20T19:08:28.467171610Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.470135648s" Jun 20 19:08:28.467237 containerd[1485]: time="2025-06-20T19:08:28.467230031Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:08:28.469391 containerd[1485]: time="2025-06-20T19:08:28.469245698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:08:28.900186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:08:28.910297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:28.914846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657002300.mount: Deactivated successfully. 
Jun 20 19:08:28.920928 containerd[1485]: time="2025-06-20T19:08:28.919279357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.921898 containerd[1485]: time="2025-06-20T19:08:28.921671943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:08:28.923697 containerd[1485]: time="2025-06-20T19:08:28.923643652Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.928166 containerd[1485]: time="2025-06-20T19:08:28.928107507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:28.929062 containerd[1485]: time="2025-06-20T19:08:28.928945628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 459.642625ms" Jun 20 19:08:28.929301 containerd[1485]: time="2025-06-20T19:08:28.929276757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:08:28.930106 containerd[1485]: time="2025-06-20T19:08:28.930074074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:08:29.075253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:29.086664 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:08:29.150125 kubelet[2010]: E0620 19:08:29.150067 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:08:29.153131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:08:29.153293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:08:29.153909 systemd[1]: kubelet.service: Consumed 197ms CPU time, 108.8M memory peak. Jun 20 19:08:29.415945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840615235.mount: Deactivated successfully. 
Jun 20 19:08:31.264752 containerd[1485]: time="2025-06-20T19:08:31.264522342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:31.266224 containerd[1485]: time="2025-06-20T19:08:31.266136148Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 20 19:08:31.266824 containerd[1485]: time="2025-06-20T19:08:31.266768140Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:31.270132 containerd[1485]: time="2025-06-20T19:08:31.270053936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:31.271489 containerd[1485]: time="2025-06-20T19:08:31.271354436Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.341119351s" Jun 20 19:08:31.271489 containerd[1485]: time="2025-06-20T19:08:31.271394113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:08:34.107909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:34.108892 systemd[1]: kubelet.service: Consumed 197ms CPU time, 108.8M memory peak. Jun 20 19:08:34.119300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:34.161910 systemd[1]: Reload requested from client PID 2098 ('systemctl') (unit session-7.scope)... Jun 20 19:08:34.161931 systemd[1]: Reloading... Jun 20 19:08:34.329901 zram_generator::config[2142]: No configuration found. Jun 20 19:08:34.462768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:08:34.599971 systemd[1]: Reloading finished in 437 ms. Jun 20 19:08:34.659742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:34.668472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:34.669237 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:08:34.670001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:34.670064 systemd[1]: kubelet.service: Consumed 127ms CPU time, 98.2M memory peak. Jun 20 19:08:34.676349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:34.861072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:34.874625 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:08:34.929911 kubelet[2198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:08:34.929911 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:08:34.929911 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:08:34.929911 kubelet[2198]: I0620 19:08:34.929454 2198 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:08:35.357398 kubelet[2198]: I0620 19:08:35.356221 2198 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:08:35.357398 kubelet[2198]: I0620 19:08:35.356270 2198 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:08:35.357398 kubelet[2198]: I0620 19:08:35.356749 2198 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:08:35.386635 kubelet[2198]: E0620 19:08:35.386222 2198 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.167.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:35.388452 kubelet[2198]: I0620 19:08:35.388411 2198 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:08:35.395763 kubelet[2198]: E0620 19:08:35.395722 2198 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:08:35.395978 kubelet[2198]: I0620 19:08:35.395964 2198 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:08:35.400381 kubelet[2198]: I0620 19:08:35.400340 2198 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:08:35.400831 kubelet[2198]: I0620 19:08:35.400790 2198 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:08:35.401528 kubelet[2198]: I0620 19:08:35.400890 2198 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-6-80f26ce993","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:08:35.401528 kubelet[2198]: I0620 19:08:35.401097 2198 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:08:35.401528 kubelet[2198]: I0620 19:08:35.401108 2198 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:08:35.401528 kubelet[2198]: I0620 19:08:35.401241 2198 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:08:35.406622 kubelet[2198]: I0620 19:08:35.406568 2198 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:08:35.406895 kubelet[2198]: I0620 19:08:35.406856 2198 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:08:35.406995 kubelet[2198]: I0620 19:08:35.406983 2198 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:08:35.407073 kubelet[2198]: I0620 19:08:35.407065 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:08:35.412387 kubelet[2198]: W0620 19:08:35.412317 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.167.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-6-80f26ce993&limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:35.412548 kubelet[2198]: E0620 19:08:35.412403 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.167.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-6-80f26ce993&limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:35.415284 
kubelet[2198]: W0620 19:08:35.415201 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.167.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:35.415284 kubelet[2198]: E0620 19:08:35.415287 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.167.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:35.417280 kubelet[2198]: I0620 19:08:35.417091 2198 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:08:35.421919 kubelet[2198]: I0620 19:08:35.421706 2198 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:08:35.422474 kubelet[2198]: W0620 19:08:35.422426 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:08:35.428218 kubelet[2198]: I0620 19:08:35.427897 2198 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:08:35.428218 kubelet[2198]: I0620 19:08:35.427956 2198 server.go:1287] "Started kubelet" Jun 20 19:08:35.429177 kubelet[2198]: I0620 19:08:35.429113 2198 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:08:35.430416 kubelet[2198]: I0620 19:08:35.430263 2198 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:08:35.433414 kubelet[2198]: I0620 19:08:35.433331 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:08:35.436942 kubelet[2198]: I0620 19:08:35.435898 2198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:08:35.436942 kubelet[2198]: I0620 19:08:35.436252 2198 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:08:35.445733 kubelet[2198]: E0620 19:08:35.441567 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.167.30:6443/api/v1/namespaces/default/events\": dial tcp 146.190.167.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-6-80f26ce993.184ad5d4de66e291 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-6-80f26ce993,UID:ci-4230.2.0-6-80f26ce993,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-6-80f26ce993,},FirstTimestamp:2025-06-20 19:08:35.427926673 +0000 UTC m=+0.547926446,LastTimestamp:2025-06-20 19:08:35.427926673 +0000 UTC m=+0.547926446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-6-80f26ce993,}" Jun 20 19:08:35.445733 kubelet[2198]: I0620 19:08:35.445206 2198 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:08:35.446836 kubelet[2198]: I0620 19:08:35.446807 2198 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:08:35.447348 kubelet[2198]: E0620 
19:08:35.447314 2198 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-6-80f26ce993\" not found" Jun 20 19:08:35.448310 kubelet[2198]: I0620 19:08:35.447782 2198 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:08:35.448435 kubelet[2198]: I0620 19:08:35.448378 2198 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:08:35.451954 kubelet[2198]: E0620 19:08:35.451918 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.167.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-6-80f26ce993?timeout=10s\": dial tcp 146.190.167.30:6443: connect: connection refused" interval="200ms" Jun 20 19:08:35.452350 kubelet[2198]: I0620 19:08:35.452327 2198 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:08:35.452528 kubelet[2198]: I0620 19:08:35.452512 2198 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:08:35.454188 kubelet[2198]: W0620 19:08:35.454144 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.167.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:35.454535 kubelet[2198]: E0620 19:08:35.454512 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.167.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:35.454998 kubelet[2198]: E0620 19:08:35.454982 2198 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:08:35.455191 kubelet[2198]: I0620 19:08:35.455179 2198 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:08:35.466661 kubelet[2198]: I0620 19:08:35.466590 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:08:35.468506 kubelet[2198]: I0620 19:08:35.468460 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:08:35.468506 kubelet[2198]: I0620 19:08:35.468498 2198 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:08:35.468657 kubelet[2198]: I0620 19:08:35.468525 2198 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
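The "Failed to ensure lease exists, will retry" error above reports interval="200ms"; the same error later in this log returns with 400ms, 800ms and then 1.6s, i.e. the retry interval doubles while 146.190.167.30:6443 keeps refusing connections. A rough Go sketch of that doubling backoff follows; it is the pattern the logged intervals suggest, not the kubelet's actual controller code, and ensureLease is a stand-in that always fails:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // ensureLease stands in for the kubelet's lease request; here it always fails,
    // the way the dials to 146.190.167.30:6443 do while the API server is down.
    func ensureLease() error {
    	return errors.New("connect: connection refused")
    }

    func main() {
    	interval := 200 * time.Millisecond    // first interval reported in the log
    	const maxInterval = 7 * time.Second   // cap chosen for the sketch; the real controller's cap is not visible in this log

    	for attempt := 1; attempt <= 4; attempt++ {
    		if err := ensureLease(); err != nil {
    			fmt.Printf("attempt %d failed, retrying in %s: %v\n", attempt, interval, err)
    			time.Sleep(interval)
    			interval *= 2
    			if interval > maxInterval {
    				interval = maxInterval
    			}
    		}
    	}
    }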
Jun 20 19:08:35.468657 kubelet[2198]: I0620 19:08:35.468533 2198 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:08:35.468657 kubelet[2198]: E0620 19:08:35.468587 2198 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:08:35.481055 kubelet[2198]: W0620 19:08:35.480982 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.167.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:35.481055 kubelet[2198]: E0620 19:08:35.481065 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.167.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:35.486772 kubelet[2198]: I0620 19:08:35.486741 2198 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:08:35.487057 kubelet[2198]: I0620 19:08:35.487042 2198 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:08:35.487180 kubelet[2198]: I0620 19:08:35.487171 2198 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:08:35.488732 kubelet[2198]: I0620 19:08:35.488708 2198 policy_none.go:49] "None policy: Start" Jun 20 19:08:35.489105 kubelet[2198]: I0620 19:08:35.488850 2198 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:08:35.489105 kubelet[2198]: I0620 19:08:35.488888 2198 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:08:35.496218 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:08:35.512048 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:08:35.517457 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:08:35.530848 kubelet[2198]: I0620 19:08:35.530245 2198 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:08:35.530848 kubelet[2198]: I0620 19:08:35.530527 2198 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:08:35.530848 kubelet[2198]: I0620 19:08:35.530549 2198 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:08:35.530848 kubelet[2198]: I0620 19:08:35.530829 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:08:35.532402 kubelet[2198]: E0620 19:08:35.532373 2198 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:08:35.532520 kubelet[2198]: E0620 19:08:35.532434 2198 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.0-6-80f26ce993\" not found" Jun 20 19:08:35.579253 systemd[1]: Created slice kubepods-burstable-pode9494dd39718a183cc1eb6c3ab425613.slice - libcontainer container kubepods-burstable-pode9494dd39718a183cc1eb6c3ab425613.slice. 
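The kubepods-*.slice units systemd creates here mirror the kubelet's cgroup layout under the systemd driver (CgroupDriver "systemd" in the node config logged earlier): kubepods.slice at the top, one child slice per QoS class, and a pod<UID> slice for each pod, ending with kubepods-burstable-pode9494dd... for the kube-apiserver static pod. A short Go sketch of that naming, assuming the usual dash-to-underscore escaping for UIDs that contain dashes (these static-pod hashes contain none); the helper is illustrative, not kubelet code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName assembles the systemd slice name used for a pod:
    // kubepods[-<qos>]-pod<uid>.slice. Dashes inside the UID are escaped to
    // underscores because "-" marks hierarchy in systemd slice names.
    func podSliceName(qosClass, podUID string) string {
    	escaped := strings.ReplaceAll(podUID, "-", "_")
    	if qosClass == "guaranteed" {
    		// Guaranteed pods hang directly off kubepods.slice.
    		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
    	}
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
    	// Reproduces the slice created in the log for the kube-apiserver static pod.
    	fmt.Println(podSliceName("burstable", "e9494dd39718a183cc1eb6c3ab425613"))
    }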
Jun 20 19:08:35.588063 kubelet[2198]: E0620 19:08:35.588022 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.592827 systemd[1]: Created slice kubepods-burstable-pod3bbf96074476ad067cee261accdcdd53.slice - libcontainer container kubepods-burstable-pod3bbf96074476ad067cee261accdcdd53.slice. Jun 20 19:08:35.595420 kubelet[2198]: E0620 19:08:35.595383 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.609605 systemd[1]: Created slice kubepods-burstable-pod07ab1b52d8543a952c0bd4dd49a57fbb.slice - libcontainer container kubepods-burstable-pod07ab1b52d8543a952c0bd4dd49a57fbb.slice. Jun 20 19:08:35.615402 kubelet[2198]: E0620 19:08:35.615111 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.632289 kubelet[2198]: I0620 19:08:35.631909 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.632532 kubelet[2198]: E0620 19:08:35.632502 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.167.30:6443/api/v1/nodes\": dial tcp 146.190.167.30:6443: connect: connection refused" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.653456 kubelet[2198]: E0620 19:08:35.653395 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.167.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-6-80f26ce993?timeout=10s\": dial tcp 146.190.167.30:6443: connect: connection refused" interval="400ms" Jun 20 19:08:35.749048 kubelet[2198]: I0620 19:08:35.748997 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749048 kubelet[2198]: I0620 19:08:35.749048 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749296 kubelet[2198]: I0620 19:08:35.749074 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749296 kubelet[2198]: I0620 19:08:35.749091 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: 
\"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749296 kubelet[2198]: I0620 19:08:35.749110 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749296 kubelet[2198]: I0620 19:08:35.749127 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749296 kubelet[2198]: I0620 19:08:35.749142 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ab1b52d8543a952c0bd4dd49a57fbb-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-6-80f26ce993\" (UID: \"07ab1b52d8543a952c0bd4dd49a57fbb\") " pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749508 kubelet[2198]: I0620 19:08:35.749156 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.749508 kubelet[2198]: I0620 19:08:35.749173 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.834532 kubelet[2198]: I0620 19:08:35.834469 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.835000 kubelet[2198]: E0620 19:08:35.834958 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.167.30:6443/api/v1/nodes\": dial tcp 146.190.167.30:6443: connect: connection refused" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:35.889459 kubelet[2198]: E0620 19:08:35.889319 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:35.890385 containerd[1485]: time="2025-06-20T19:08:35.890342240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-6-80f26ce993,Uid:e9494dd39718a183cc1eb6c3ab425613,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:35.897176 kubelet[2198]: E0620 19:08:35.896708 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:35.897909 containerd[1485]: time="2025-06-20T19:08:35.897652679Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-6-80f26ce993,Uid:3bbf96074476ad067cee261accdcdd53,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:35.916118 kubelet[2198]: E0620 19:08:35.916060 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:35.916704 containerd[1485]: time="2025-06-20T19:08:35.916658716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-6-80f26ce993,Uid:07ab1b52d8543a952c0bd4dd49a57fbb,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:36.054179 kubelet[2198]: E0620 19:08:36.054122 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.167.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-6-80f26ce993?timeout=10s\": dial tcp 146.190.167.30:6443: connect: connection refused" interval="800ms" Jun 20 19:08:36.237090 kubelet[2198]: I0620 19:08:36.236807 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:36.237889 kubelet[2198]: E0620 19:08:36.237817 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.167.30:6443/api/v1/nodes\": dial tcp 146.190.167.30:6443: connect: connection refused" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:36.339643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183313900.mount: Deactivated successfully. Jun 20 19:08:36.344677 containerd[1485]: time="2025-06-20T19:08:36.344596036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:08:36.347066 containerd[1485]: time="2025-06-20T19:08:36.346998312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:08:36.348438 containerd[1485]: time="2025-06-20T19:08:36.348356395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 20 19:08:36.348779 containerd[1485]: time="2025-06-20T19:08:36.348740161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:08:36.350429 containerd[1485]: time="2025-06-20T19:08:36.350338635Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:08:36.351943 containerd[1485]: time="2025-06-20T19:08:36.351054112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 19:08:36.354418 containerd[1485]: time="2025-06-20T19:08:36.354340940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:08:36.358944 containerd[1485]: time="2025-06-20T19:08:36.357133158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:08:36.361035 
containerd[1485]: time="2025-06-20T19:08:36.360971660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.518342ms" Jun 20 19:08:36.365693 containerd[1485]: time="2025-06-20T19:08:36.365508085Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 448.748134ms" Jun 20 19:08:36.370685 containerd[1485]: time="2025-06-20T19:08:36.370333601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.567226ms" Jun 20 19:08:36.417424 kubelet[2198]: W0620 19:08:36.417299 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.167.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-6-80f26ce993&limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:36.417424 kubelet[2198]: E0620 19:08:36.417379 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.167.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-6-80f26ce993&limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:36.550638 containerd[1485]: time="2025-06-20T19:08:36.549295990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:08:36.552729 containerd[1485]: time="2025-06-20T19:08:36.551449363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:08:36.552729 containerd[1485]: time="2025-06-20T19:08:36.551478659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.552729 containerd[1485]: time="2025-06-20T19:08:36.551619211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.553192 containerd[1485]: time="2025-06-20T19:08:36.551128644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:08:36.553192 containerd[1485]: time="2025-06-20T19:08:36.551209689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:08:36.553192 containerd[1485]: time="2025-06-20T19:08:36.551228863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.553192 containerd[1485]: time="2025-06-20T19:08:36.551344824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.567040 containerd[1485]: time="2025-06-20T19:08:36.560557169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:08:36.569037 containerd[1485]: time="2025-06-20T19:08:36.568944409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:08:36.572913 containerd[1485]: time="2025-06-20T19:08:36.569128591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.572913 containerd[1485]: time="2025-06-20T19:08:36.572670350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:36.600821 systemd[1]: Started cri-containerd-5adfd21d9e6fc195b12920cc3578b96a9cb518986660dcff461cadf5d116e3f1.scope - libcontainer container 5adfd21d9e6fc195b12920cc3578b96a9cb518986660dcff461cadf5d116e3f1. Jun 20 19:08:36.610586 systemd[1]: Started cri-containerd-604b20c699737a5b28ad670c55ee693bdd68788d506ead1f118ed2f6c7ae200d.scope - libcontainer container 604b20c699737a5b28ad670c55ee693bdd68788d506ead1f118ed2f6c7ae200d. Jun 20 19:08:36.618886 systemd[1]: Started cri-containerd-0972a6504684d8763527c19e88f20eb1726bc55d521085b4128c8cc4b200f6e4.scope - libcontainer container 0972a6504684d8763527c19e88f20eb1726bc55d521085b4128c8cc4b200f6e4. Jun 20 19:08:36.694192 containerd[1485]: time="2025-06-20T19:08:36.693569946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-6-80f26ce993,Uid:07ab1b52d8543a952c0bd4dd49a57fbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0972a6504684d8763527c19e88f20eb1726bc55d521085b4128c8cc4b200f6e4\"" Jun 20 19:08:36.696897 kubelet[2198]: E0620 19:08:36.696381 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:36.704997 containerd[1485]: time="2025-06-20T19:08:36.704946017Z" level=info msg="CreateContainer within sandbox \"0972a6504684d8763527c19e88f20eb1726bc55d521085b4128c8cc4b200f6e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:08:36.721146 containerd[1485]: time="2025-06-20T19:08:36.720967027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-6-80f26ce993,Uid:e9494dd39718a183cc1eb6c3ab425613,Namespace:kube-system,Attempt:0,} returns sandbox id \"604b20c699737a5b28ad670c55ee693bdd68788d506ead1f118ed2f6c7ae200d\"" Jun 20 19:08:36.722327 kubelet[2198]: E0620 19:08:36.721996 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:36.725912 containerd[1485]: time="2025-06-20T19:08:36.725779060Z" level=info msg="CreateContainer within sandbox \"604b20c699737a5b28ad670c55ee693bdd68788d506ead1f118ed2f6c7ae200d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:08:36.728763 containerd[1485]: time="2025-06-20T19:08:36.728412433Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-6-80f26ce993,Uid:3bbf96074476ad067cee261accdcdd53,Namespace:kube-system,Attempt:0,} returns sandbox id \"5adfd21d9e6fc195b12920cc3578b96a9cb518986660dcff461cadf5d116e3f1\"" Jun 20 19:08:36.729658 kubelet[2198]: E0620 19:08:36.729627 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:36.733600 containerd[1485]: time="2025-06-20T19:08:36.733457122Z" level=info msg="CreateContainer within sandbox \"5adfd21d9e6fc195b12920cc3578b96a9cb518986660dcff461cadf5d116e3f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:08:36.736590 containerd[1485]: time="2025-06-20T19:08:36.736535486Z" level=info msg="CreateContainer within sandbox \"0972a6504684d8763527c19e88f20eb1726bc55d521085b4128c8cc4b200f6e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"291632b6e1e4ae617961f7a78b4f6b22031e12dbce14db9d6865b2181c42964a\"" Jun 20 19:08:36.738723 containerd[1485]: time="2025-06-20T19:08:36.738318862Z" level=info msg="StartContainer for \"291632b6e1e4ae617961f7a78b4f6b22031e12dbce14db9d6865b2181c42964a\"" Jun 20 19:08:36.739531 kubelet[2198]: W0620 19:08:36.738991 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.167.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:36.739531 kubelet[2198]: E0620 19:08:36.739079 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.167.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:36.746195 containerd[1485]: time="2025-06-20T19:08:36.746061450Z" level=info msg="CreateContainer within sandbox \"604b20c699737a5b28ad670c55ee693bdd68788d506ead1f118ed2f6c7ae200d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e483256ffe1a777ef67c2dcb622a9f06f16301921f4ae7a4341b819ea17f9f5\"" Jun 20 19:08:36.747901 containerd[1485]: time="2025-06-20T19:08:36.747179916Z" level=info msg="StartContainer for \"1e483256ffe1a777ef67c2dcb622a9f06f16301921f4ae7a4341b819ea17f9f5\"" Jun 20 19:08:36.786689 systemd[1]: Started cri-containerd-291632b6e1e4ae617961f7a78b4f6b22031e12dbce14db9d6865b2181c42964a.scope - libcontainer container 291632b6e1e4ae617961f7a78b4f6b22031e12dbce14db9d6865b2181c42964a. Jun 20 19:08:36.798323 systemd[1]: Started cri-containerd-1e483256ffe1a777ef67c2dcb622a9f06f16301921f4ae7a4341b819ea17f9f5.scope - libcontainer container 1e483256ffe1a777ef67c2dcb622a9f06f16301921f4ae7a4341b819ea17f9f5. 
Jun 20 19:08:36.802227 containerd[1485]: time="2025-06-20T19:08:36.801100131Z" level=info msg="CreateContainer within sandbox \"5adfd21d9e6fc195b12920cc3578b96a9cb518986660dcff461cadf5d116e3f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be3269a2fcae77cb397b0b592ce2275ff0bf037a1c1ba03e46e02497e241505f\"" Jun 20 19:08:36.803738 containerd[1485]: time="2025-06-20T19:08:36.803237338Z" level=info msg="StartContainer for \"be3269a2fcae77cb397b0b592ce2275ff0bf037a1c1ba03e46e02497e241505f\"" Jun 20 19:08:36.856989 kubelet[2198]: E0620 19:08:36.855512 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.167.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-6-80f26ce993?timeout=10s\": dial tcp 146.190.167.30:6443: connect: connection refused" interval="1.6s" Jun 20 19:08:36.869582 systemd[1]: Started cri-containerd-be3269a2fcae77cb397b0b592ce2275ff0bf037a1c1ba03e46e02497e241505f.scope - libcontainer container be3269a2fcae77cb397b0b592ce2275ff0bf037a1c1ba03e46e02497e241505f. Jun 20 19:08:36.907406 kubelet[2198]: W0620 19:08:36.907226 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.167.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:36.907406 kubelet[2198]: E0620 19:08:36.907327 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.167.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:36.921271 containerd[1485]: time="2025-06-20T19:08:36.921032255Z" level=info msg="StartContainer for \"291632b6e1e4ae617961f7a78b4f6b22031e12dbce14db9d6865b2181c42964a\" returns successfully" Jun 20 19:08:36.928823 containerd[1485]: time="2025-06-20T19:08:36.928475446Z" level=info msg="StartContainer for \"1e483256ffe1a777ef67c2dcb622a9f06f16301921f4ae7a4341b819ea17f9f5\" returns successfully" Jun 20 19:08:36.970616 containerd[1485]: time="2025-06-20T19:08:36.970443123Z" level=info msg="StartContainer for \"be3269a2fcae77cb397b0b592ce2275ff0bf037a1c1ba03e46e02497e241505f\" returns successfully" Jun 20 19:08:37.017092 kubelet[2198]: W0620 19:08:37.016856 2198 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.167.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.167.30:6443: connect: connection refused Jun 20 19:08:37.017092 kubelet[2198]: E0620 19:08:37.017030 2198 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.167.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.167.30:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:08:37.041659 kubelet[2198]: I0620 19:08:37.041186 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:37.043151 kubelet[2198]: E0620 19:08:37.043096 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.167.30:6443/api/v1/nodes\": dial tcp 146.190.167.30:6443: connect: 
connection refused" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:37.490918 kubelet[2198]: E0620 19:08:37.490230 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:37.491846 kubelet[2198]: E0620 19:08:37.491670 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:37.496694 kubelet[2198]: E0620 19:08:37.496142 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:37.498628 kubelet[2198]: E0620 19:08:37.498599 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:37.501977 kubelet[2198]: E0620 19:08:37.500680 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:37.501977 kubelet[2198]: E0620 19:08:37.500858 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:38.505940 kubelet[2198]: E0620 19:08:38.505029 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:38.505940 kubelet[2198]: E0620 19:08:38.505233 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:38.505940 kubelet[2198]: E0620 19:08:38.505649 2198 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:38.505940 kubelet[2198]: E0620 19:08:38.505771 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:38.646437 kubelet[2198]: I0620 19:08:38.645404 2198 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.769788 kubelet[2198]: E0620 19:08:39.769733 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.0-6-80f26ce993\" not found" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.833924 kubelet[2198]: I0620 19:08:39.833840 2198 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.834284 kubelet[2198]: E0620 19:08:39.833931 2198 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.0-6-80f26ce993\": node \"ci-4230.2.0-6-80f26ce993\" not found" Jun 20 19:08:39.907305 kubelet[2198]: E0620 19:08:39.906960 2198 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230.2.0-6-80f26ce993.184ad5d4de66e291 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-6-80f26ce993,UID:ci-4230.2.0-6-80f26ce993,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-6-80f26ce993,},FirstTimestamp:2025-06-20 19:08:35.427926673 +0000 UTC m=+0.547926446,LastTimestamp:2025-06-20 19:08:35.427926673 +0000 UTC m=+0.547926446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-6-80f26ce993,}" Jun 20 19:08:39.948142 kubelet[2198]: I0620 19:08:39.948072 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.963252 kubelet[2198]: E0620 19:08:39.963179 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.963252 kubelet[2198]: I0620 19:08:39.963247 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.966907 kubelet[2198]: E0620 19:08:39.966838 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-6-80f26ce993\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.966907 kubelet[2198]: I0620 19:08:39.966900 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:39.971096 kubelet[2198]: E0620 19:08:39.971032 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:40.420463 kubelet[2198]: I0620 19:08:40.420040 2198 apiserver.go:52] "Watching apiserver" Jun 20 19:08:40.448567 kubelet[2198]: I0620 19:08:40.448499 2198 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:08:40.487922 kubelet[2198]: I0620 19:08:40.487735 2198 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:40.491359 kubelet[2198]: E0620 19:08:40.491287 2198 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:40.492026 kubelet[2198]: E0620 19:08:40.491785 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:41.821761 systemd[1]: Reload requested from client PID 2472 ('systemctl') (unit session-7.scope)... Jun 20 19:08:41.821784 systemd[1]: Reloading... Jun 20 19:08:41.970929 zram_generator::config[2528]: No configuration found. Jun 20 19:08:42.115605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
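The docker.socket warning just above (already seen during the earlier reload) is systemd asking for the unit to reference /run/docker.sock instead of the legacy /var/run path. On Flatcar the vendor unit sits on the read-only /usr, so the usual way to adjust it is an override drop-in; the sketch below writes one. Whether this particular warning keys off the vendor file or the merged configuration is not something the log shows, so treat this as a possible cleanup rather than a guaranteed way to silence the message:

    package main

    import (
    	"log"
    	"os"
    )

    // dropIn clears the ListenStream list inherited from the vendor unit and
    // re-adds the socket under /run, which is the path the journal message points to.
    const dropIn = `[Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    `

    func main() {
    	dir := "/etc/systemd/system/docker.socket.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		log.Fatal(err)
    	}
    	path := dir + "/10-run-path.conf"
    	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("wrote", path, "- reload systemd for it to take effect")
    }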
Jun 20 19:08:42.276397 systemd[1]: Reloading finished in 453 ms. Jun 20 19:08:42.308276 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:42.322779 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:08:42.323207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:42.323299 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 127.3M memory peak. Jun 20 19:08:42.331393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:08:42.511185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:08:42.521073 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:08:42.625486 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:08:42.625486 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:08:42.625486 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:08:42.626127 kubelet[2567]: I0620 19:08:42.625547 2567 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:08:42.637653 kubelet[2567]: I0620 19:08:42.637589 2567 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:08:42.638970 kubelet[2567]: I0620 19:08:42.637937 2567 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:08:42.638970 kubelet[2567]: I0620 19:08:42.638343 2567 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:08:42.642254 kubelet[2567]: I0620 19:08:42.642214 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:08:42.648698 kubelet[2567]: I0620 19:08:42.647908 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:08:42.657202 kubelet[2567]: E0620 19:08:42.657163 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 19:08:42.657499 kubelet[2567]: I0620 19:08:42.657483 2567 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 19:08:42.661470 kubelet[2567]: I0620 19:08:42.661428 2567 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:08:42.661801 kubelet[2567]: I0620 19:08:42.661752 2567 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:08:42.662179 kubelet[2567]: I0620 19:08:42.661803 2567 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-6-80f26ce993","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:08:42.662365 kubelet[2567]: I0620 19:08:42.662189 2567 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:08:42.662365 kubelet[2567]: I0620 19:08:42.662206 2567 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:08:42.662365 kubelet[2567]: I0620 19:08:42.662280 2567 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:08:42.662545 kubelet[2567]: I0620 19:08:42.662516 2567 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:08:42.662615 kubelet[2567]: I0620 19:08:42.662555 2567 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:08:42.662615 kubelet[2567]: I0620 19:08:42.662587 2567 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:08:42.662615 kubelet[2567]: I0620 19:08:42.662603 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:08:42.666124 kubelet[2567]: I0620 19:08:42.666081 2567 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 19:08:42.669367 kubelet[2567]: I0620 19:08:42.669329 2567 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:08:42.670277 kubelet[2567]: I0620 19:08:42.670248 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:08:42.670435 kubelet[2567]: I0620 19:08:42.670424 2567 server.go:1287] "Started kubelet" Jun 20 19:08:42.673740 kubelet[2567]: I0620 19:08:42.673711 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:08:42.683959 kubelet[2567]: I0620 19:08:42.683910 2567 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:08:42.686031 kubelet[2567]: I0620 19:08:42.685999 2567 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:08:42.689022 kubelet[2567]: I0620 19:08:42.688821 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:08:42.689408 kubelet[2567]: I0620 19:08:42.689365 2567 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:08:42.689969 kubelet[2567]: I0620 19:08:42.689840 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:08:42.693988 kubelet[2567]: I0620 19:08:42.693952 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:08:42.695358 kubelet[2567]: E0620 19:08:42.694500 2567 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-6-80f26ce993\" not found" Jun 20 19:08:42.697399 kubelet[2567]: I0620 19:08:42.697362 2567 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:08:42.697840 kubelet[2567]: I0620 19:08:42.697488 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:08:42.699406 kubelet[2567]: I0620 19:08:42.699355 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:08:42.699696 kubelet[2567]: I0620 19:08:42.699684 2567 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:08:42.708319 kubelet[2567]: I0620 19:08:42.707815 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:08:42.715081 kubelet[2567]: I0620 19:08:42.714383 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:08:42.715081 kubelet[2567]: I0620 19:08:42.714434 2567 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:08:42.715081 kubelet[2567]: I0620 19:08:42.714468 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:08:42.715081 kubelet[2567]: I0620 19:08:42.714479 2567 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:08:42.715081 kubelet[2567]: E0620 19:08:42.714562 2567 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:08:42.720389 kubelet[2567]: I0620 19:08:42.720178 2567 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:08:42.741449 kubelet[2567]: E0620 19:08:42.738981 2567 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.786982 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787001 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787026 2567 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787225 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787236 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787254 2567 policy_none.go:49] "None policy: Start" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787265 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787275 2567 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:08:42.787563 kubelet[2567]: I0620 19:08:42.787380 2567 state_mem.go:75] "Updated machine memory state" Jun 20 19:08:42.794932 kubelet[2567]: I0620 19:08:42.794891 2567 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:08:42.795219 kubelet[2567]: I0620 19:08:42.795169 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:08:42.795292 kubelet[2567]: I0620 19:08:42.795193 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:08:42.795823 kubelet[2567]: I0620 19:08:42.795637 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:08:42.803850 kubelet[2567]: E0620 19:08:42.803700 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:08:42.819921 kubelet[2567]: I0620 19:08:42.818063 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.819921 kubelet[2567]: I0620 19:08:42.818312 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.820909 kubelet[2567]: I0620 19:08:42.820149 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.850599 kubelet[2567]: W0620 19:08:42.850492 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:08:42.851157 kubelet[2567]: W0620 19:08:42.851131 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:08:42.851618 kubelet[2567]: W0620 19:08:42.851541 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:08:42.903603 kubelet[2567]: I0620 19:08:42.903306 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903603 kubelet[2567]: I0620 19:08:42.903366 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903603 kubelet[2567]: I0620 19:08:42.903394 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903603 kubelet[2567]: I0620 19:08:42.903413 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903603 kubelet[2567]: I0620 19:08:42.903440 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ab1b52d8543a952c0bd4dd49a57fbb-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-6-80f26ce993\" (UID: \"07ab1b52d8543a952c0bd4dd49a57fbb\") " pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903918 kubelet[2567]: I0620 19:08:42.903466 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903918 kubelet[2567]: I0620 19:08:42.903493 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9494dd39718a183cc1eb6c3ab425613-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" (UID: \"e9494dd39718a183cc1eb6c3ab425613\") " pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903918 kubelet[2567]: I0620 19:08:42.903511 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.903918 kubelet[2567]: I0620 19:08:42.903530 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bbf96074476ad067cee261accdcdd53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-6-80f26ce993\" (UID: \"3bbf96074476ad067cee261accdcdd53\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.910462 kubelet[2567]: I0620 19:08:42.910177 2567 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.924680 kubelet[2567]: I0620 19:08:42.924012 2567 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:42.924680 kubelet[2567]: I0620 19:08:42.924147 2567 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-6-80f26ce993" Jun 20 19:08:43.153842 kubelet[2567]: E0620 19:08:43.151946 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.153842 kubelet[2567]: E0620 19:08:43.152325 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.153842 kubelet[2567]: E0620 19:08:43.152457 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.663749 kubelet[2567]: I0620 19:08:43.663666 2567 apiserver.go:52] "Watching apiserver" Jun 20 19:08:43.700321 kubelet[2567]: I0620 19:08:43.700268 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:08:43.762699 kubelet[2567]: E0620 19:08:43.760623 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.762699 kubelet[2567]: I0620 19:08:43.760920 2567 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:43.762699 kubelet[2567]: I0620 19:08:43.761187 2567 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:43.793775 kubelet[2567]: W0620 19:08:43.793277 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:08:43.793775 kubelet[2567]: E0620 19:08:43.793384 2567 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-6-80f26ce993\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:43.793775 kubelet[2567]: E0620 19:08:43.793655 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.800344 kubelet[2567]: W0620 19:08:43.798991 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:08:43.800344 kubelet[2567]: E0620 19:08:43.799134 2567 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-6-80f26ce993\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" Jun 20 19:08:43.800344 kubelet[2567]: E0620 19:08:43.799532 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:43.825841 kubelet[2567]: I0620 19:08:43.825745 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.0-6-80f26ce993" podStartSLOduration=1.8256784000000001 podStartE2EDuration="1.8256784s" podCreationTimestamp="2025-06-20 19:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:08:43.825313212 +0000 UTC m=+1.294091416" watchObservedRunningTime="2025-06-20 19:08:43.8256784 +0000 UTC m=+1.294456576" Jun 20 19:08:43.843213 kubelet[2567]: I0620 19:08:43.843139 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.0-6-80f26ce993" podStartSLOduration=1.843115611 podStartE2EDuration="1.843115611s" podCreationTimestamp="2025-06-20 19:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:08:43.841257118 +0000 UTC m=+1.310035295" watchObservedRunningTime="2025-06-20 19:08:43.843115611 +0000 UTC m=+1.311893779" Jun 20 19:08:43.859714 kubelet[2567]: I0620 19:08:43.859501 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.0-6-80f26ce993" podStartSLOduration=1.859479786 podStartE2EDuration="1.859479786s" podCreationTimestamp="2025-06-20 19:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:08:43.859393742 +0000 UTC m=+1.328171920" watchObservedRunningTime="2025-06-20 19:08:43.859479786 +0000 UTC m=+1.328257964" Jun 20 19:08:44.116344 sudo[1649]: pam_unix(sudo:session): session closed for user root Jun 20 19:08:44.123114 sshd[1648]: Connection closed by 139.178.68.195 port 47688 Jun 20 19:08:44.124194 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Jun 20 
19:08:44.133813 systemd[1]: sshd@6-146.190.167.30:22-139.178.68.195:47688.service: Deactivated successfully. Jun 20 19:08:44.137199 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:08:44.137572 systemd[1]: session-7.scope: Consumed 4.427s CPU time, 166.4M memory peak. Jun 20 19:08:44.140189 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:08:44.141814 systemd-logind[1465]: Removed session 7. Jun 20 19:08:44.762571 kubelet[2567]: E0620 19:08:44.762536 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:44.763302 kubelet[2567]: E0620 19:08:44.762650 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:45.456001 kubelet[2567]: E0620 19:08:45.455942 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:45.764768 kubelet[2567]: E0620 19:08:45.764624 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:48.088555 kubelet[2567]: I0620 19:08:48.088514 2567 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:08:48.089172 containerd[1485]: time="2025-06-20T19:08:48.088926395Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:08:48.090085 kubelet[2567]: I0620 19:08:48.089651 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:08:48.766819 systemd[1]: Created slice kubepods-besteffort-podba058c07_edf1_4a75_b5a5_463bb663128c.slice - libcontainer container kubepods-besteffort-podba058c07_edf1_4a75_b5a5_463bb663128c.slice. Jun 20 19:08:48.788160 systemd[1]: Created slice kubepods-burstable-pod44d3b1ea_d741_4ec7_89e4_06130f9d1d47.slice - libcontainer container kubepods-burstable-pod44d3b1ea_d741_4ec7_89e4_06130f9d1d47.slice. 
Jun 20 19:08:48.835480 kubelet[2567]: I0620 19:08:48.835096 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba058c07-edf1-4a75-b5a5-463bb663128c-kube-proxy\") pod \"kube-proxy-nx74s\" (UID: \"ba058c07-edf1-4a75-b5a5-463bb663128c\") " pod="kube-system/kube-proxy-nx74s" Jun 20 19:08:48.835480 kubelet[2567]: I0620 19:08:48.835154 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-cni\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.835480 kubelet[2567]: I0620 19:08:48.835190 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-cni-plugin\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.835480 kubelet[2567]: I0620 19:08:48.835212 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-xtables-lock\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.835480 kubelet[2567]: I0620 19:08:48.835243 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qln6c\" (UniqueName: \"kubernetes.io/projected/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-kube-api-access-qln6c\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.835820 kubelet[2567]: I0620 19:08:48.835274 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba058c07-edf1-4a75-b5a5-463bb663128c-lib-modules\") pod \"kube-proxy-nx74s\" (UID: \"ba058c07-edf1-4a75-b5a5-463bb663128c\") " pod="kube-system/kube-proxy-nx74s" Jun 20 19:08:48.835820 kubelet[2567]: I0620 19:08:48.835298 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-flannel-cfg\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.835820 kubelet[2567]: I0620 19:08:48.835323 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba058c07-edf1-4a75-b5a5-463bb663128c-xtables-lock\") pod \"kube-proxy-nx74s\" (UID: \"ba058c07-edf1-4a75-b5a5-463bb663128c\") " pod="kube-system/kube-proxy-nx74s" Jun 20 19:08:48.835820 kubelet[2567]: I0620 19:08:48.835358 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntzm\" (UniqueName: \"kubernetes.io/projected/ba058c07-edf1-4a75-b5a5-463bb663128c-kube-api-access-nntzm\") pod \"kube-proxy-nx74s\" (UID: \"ba058c07-edf1-4a75-b5a5-463bb663128c\") " pod="kube-system/kube-proxy-nx74s" Jun 20 19:08:48.835820 kubelet[2567]: I0620 19:08:48.835415 2567 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-run\") pod \"kube-flannel-ds-9zzpr\" (UID: \"44d3b1ea-d741-4ec7-89e4-06130f9d1d47\") " pod="kube-flannel/kube-flannel-ds-9zzpr" Jun 20 19:08:48.946905 kubelet[2567]: E0620 19:08:48.945298 2567 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:08:48.946905 kubelet[2567]: E0620 19:08:48.945343 2567 projected.go:194] Error preparing data for projected volume kube-api-access-qln6c for pod kube-flannel/kube-flannel-ds-9zzpr: configmap "kube-root-ca.crt" not found Jun 20 19:08:48.946905 kubelet[2567]: E0620 19:08:48.945413 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-kube-api-access-qln6c podName:44d3b1ea-d741-4ec7-89e4-06130f9d1d47 nodeName:}" failed. No retries permitted until 2025-06-20 19:08:49.445389878 +0000 UTC m=+6.914168045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qln6c" (UniqueName: "kubernetes.io/projected/44d3b1ea-d741-4ec7-89e4-06130f9d1d47-kube-api-access-qln6c") pod "kube-flannel-ds-9zzpr" (UID: "44d3b1ea-d741-4ec7-89e4-06130f9d1d47") : configmap "kube-root-ca.crt" not found Jun 20 19:08:48.949572 kubelet[2567]: E0620 19:08:48.949457 2567 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:08:48.949572 kubelet[2567]: E0620 19:08:48.949491 2567 projected.go:194] Error preparing data for projected volume kube-api-access-nntzm for pod kube-system/kube-proxy-nx74s: configmap "kube-root-ca.crt" not found Jun 20 19:08:48.949572 kubelet[2567]: E0620 19:08:48.949546 2567 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba058c07-edf1-4a75-b5a5-463bb663128c-kube-api-access-nntzm podName:ba058c07-edf1-4a75-b5a5-463bb663128c nodeName:}" failed. No retries permitted until 2025-06-20 19:08:49.449527151 +0000 UTC m=+6.918305304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nntzm" (UniqueName: "kubernetes.io/projected/ba058c07-edf1-4a75-b5a5-463bb663128c-kube-api-access-nntzm") pod "kube-proxy-nx74s" (UID: "ba058c07-edf1-4a75-b5a5-463bb663128c") : configmap "kube-root-ca.crt" not found Jun 20 19:08:49.681942 kubelet[2567]: E0620 19:08:49.681700 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:49.683067 containerd[1485]: time="2025-06-20T19:08:49.682804599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx74s,Uid:ba058c07-edf1-4a75-b5a5-463bb663128c,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:49.694162 kubelet[2567]: E0620 19:08:49.693309 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:49.694524 containerd[1485]: time="2025-06-20T19:08:49.694495277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9zzpr,Uid:44d3b1ea-d741-4ec7-89e4-06130f9d1d47,Namespace:kube-flannel,Attempt:0,}" Jun 20 19:08:49.726287 containerd[1485]: time="2025-06-20T19:08:49.725990666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:08:49.726287 containerd[1485]: time="2025-06-20T19:08:49.726067423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:08:49.726287 containerd[1485]: time="2025-06-20T19:08:49.726083815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:49.726287 containerd[1485]: time="2025-06-20T19:08:49.726186547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:49.770119 containerd[1485]: time="2025-06-20T19:08:49.768578070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:08:49.770119 containerd[1485]: time="2025-06-20T19:08:49.768672717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:08:49.770119 containerd[1485]: time="2025-06-20T19:08:49.768687728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:49.770119 containerd[1485]: time="2025-06-20T19:08:49.768801726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:08:49.788189 systemd[1]: Started cri-containerd-54ba55a9d32fdc717dc6536990c67428972aa05284f9cd14823b207fcd6cc5a1.scope - libcontainer container 54ba55a9d32fdc717dc6536990c67428972aa05284f9cd14823b207fcd6cc5a1. Jun 20 19:08:49.806243 systemd[1]: Started cri-containerd-adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac.scope - libcontainer container adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac. 
Jun 20 19:08:49.845643 containerd[1485]: time="2025-06-20T19:08:49.845546511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx74s,Uid:ba058c07-edf1-4a75-b5a5-463bb663128c,Namespace:kube-system,Attempt:0,} returns sandbox id \"54ba55a9d32fdc717dc6536990c67428972aa05284f9cd14823b207fcd6cc5a1\"" Jun 20 19:08:49.847366 kubelet[2567]: E0620 19:08:49.847315 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:49.854563 containerd[1485]: time="2025-06-20T19:08:49.854293790Z" level=info msg="CreateContainer within sandbox \"54ba55a9d32fdc717dc6536990c67428972aa05284f9cd14823b207fcd6cc5a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:08:49.883041 containerd[1485]: time="2025-06-20T19:08:49.882983668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9zzpr,Uid:44d3b1ea-d741-4ec7-89e4-06130f9d1d47,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\"" Jun 20 19:08:49.885956 kubelet[2567]: E0620 19:08:49.885897 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:49.889245 containerd[1485]: time="2025-06-20T19:08:49.889083580Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jun 20 19:08:49.894215 containerd[1485]: time="2025-06-20T19:08:49.894023537Z" level=info msg="CreateContainer within sandbox \"54ba55a9d32fdc717dc6536990c67428972aa05284f9cd14823b207fcd6cc5a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"21498fc5aa9caff4d3e1630856944170141d435c2b5e2f57bd1fb69f22aa9080\"" Jun 20 19:08:49.895603 containerd[1485]: time="2025-06-20T19:08:49.895553211Z" level=info msg="StartContainer for \"21498fc5aa9caff4d3e1630856944170141d435c2b5e2f57bd1fb69f22aa9080\"" Jun 20 19:08:49.938162 systemd[1]: Started cri-containerd-21498fc5aa9caff4d3e1630856944170141d435c2b5e2f57bd1fb69f22aa9080.scope - libcontainer container 21498fc5aa9caff4d3e1630856944170141d435c2b5e2f57bd1fb69f22aa9080. Jun 20 19:08:49.986085 containerd[1485]: time="2025-06-20T19:08:49.985832980Z" level=info msg="StartContainer for \"21498fc5aa9caff4d3e1630856944170141d435c2b5e2f57bd1fb69f22aa9080\" returns successfully" Jun 20 19:08:50.519913 update_engine[1467]: I20250620 19:08:50.519590 1467 update_attempter.cc:509] Updating boot flags... 
Jun 20 19:08:50.579905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2881) Jun 20 19:08:50.676888 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2779) Jun 20 19:08:50.753974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2779) Jun 20 19:08:50.807547 kubelet[2567]: E0620 19:08:50.807505 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:50.841589 kubelet[2567]: I0620 19:08:50.841515 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nx74s" podStartSLOduration=2.841479278 podStartE2EDuration="2.841479278s" podCreationTimestamp="2025-06-20 19:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:08:50.84123732 +0000 UTC m=+8.310015494" watchObservedRunningTime="2025-06-20 19:08:50.841479278 +0000 UTC m=+8.310257455" Jun 20 19:08:50.916759 kubelet[2567]: E0620 19:08:50.916713 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:51.821386 kubelet[2567]: E0620 19:08:51.821149 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:51.824352 kubelet[2567]: E0620 19:08:51.824177 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:51.933193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866852352.mount: Deactivated successfully. 
Jun 20 19:08:51.975917 containerd[1485]: time="2025-06-20T19:08:51.975818848Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:51.977367 containerd[1485]: time="2025-06-20T19:08:51.977278707Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jun 20 19:08:51.978174 containerd[1485]: time="2025-06-20T19:08:51.978098812Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:51.981911 containerd[1485]: time="2025-06-20T19:08:51.980610079Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:51.982210 containerd[1485]: time="2025-06-20T19:08:51.982165817Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.093018301s" Jun 20 19:08:51.982323 containerd[1485]: time="2025-06-20T19:08:51.982301510Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jun 20 19:08:51.987544 containerd[1485]: time="2025-06-20T19:08:51.987495948Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jun 20 19:08:52.003818 containerd[1485]: time="2025-06-20T19:08:52.002658114Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf\"" Jun 20 19:08:52.004428 containerd[1485]: time="2025-06-20T19:08:52.004395642Z" level=info msg="StartContainer for \"652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf\"" Jun 20 19:08:52.004762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765002541.mount: Deactivated successfully. Jun 20 19:08:52.060231 systemd[1]: Started cri-containerd-652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf.scope - libcontainer container 652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf. Jun 20 19:08:52.098772 containerd[1485]: time="2025-06-20T19:08:52.098590485Z" level=info msg="StartContainer for \"652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf\" returns successfully" Jun 20 19:08:52.101846 systemd[1]: cri-containerd-652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf.scope: Deactivated successfully. 
Jun 20 19:08:52.145683 containerd[1485]: time="2025-06-20T19:08:52.145497576Z" level=info msg="shim disconnected" id=652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf namespace=k8s.io Jun 20 19:08:52.145683 containerd[1485]: time="2025-06-20T19:08:52.145602356Z" level=warning msg="cleaning up after shim disconnected" id=652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf namespace=k8s.io Jun 20 19:08:52.145683 containerd[1485]: time="2025-06-20T19:08:52.145615531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:08:52.816735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-652ada4cb16db32503074e2b71f3e2bcc7cdf3a6a9b05c0d7c9274c5c1c9badf-rootfs.mount: Deactivated successfully. Jun 20 19:08:52.820920 kubelet[2567]: E0620 19:08:52.819071 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:52.829189 kubelet[2567]: E0620 19:08:52.827891 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:52.829189 kubelet[2567]: E0620 19:08:52.828053 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:52.829189 kubelet[2567]: E0620 19:08:52.828462 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:52.829735 containerd[1485]: time="2025-06-20T19:08:52.829041104Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jun 20 19:08:54.920571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912101076.mount: Deactivated successfully. 
Jun 20 19:08:55.461890 kubelet[2567]: E0620 19:08:55.461832 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:56.496089 containerd[1485]: time="2025-06-20T19:08:56.496017734Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:56.497245 containerd[1485]: time="2025-06-20T19:08:56.496775875Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jun 20 19:08:56.498132 containerd[1485]: time="2025-06-20T19:08:56.497681830Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:56.500373 containerd[1485]: time="2025-06-20T19:08:56.500341547Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:08:56.501397 containerd[1485]: time="2025-06-20T19:08:56.501361016Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.672287266s" Jun 20 19:08:56.501538 containerd[1485]: time="2025-06-20T19:08:56.501398885Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jun 20 19:08:56.504678 containerd[1485]: time="2025-06-20T19:08:56.504645275Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:08:56.536190 containerd[1485]: time="2025-06-20T19:08:56.536120923Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345\"" Jun 20 19:08:56.538272 containerd[1485]: time="2025-06-20T19:08:56.536933382Z" level=info msg="StartContainer for \"c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345\"" Jun 20 19:08:56.597143 systemd[1]: Started cri-containerd-c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345.scope - libcontainer container c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345. Jun 20 19:08:56.635117 systemd[1]: cri-containerd-c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345.scope: Deactivated successfully. 
Jun 20 19:08:56.641975 containerd[1485]: time="2025-06-20T19:08:56.641801340Z" level=info msg="StartContainer for \"c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345\" returns successfully" Jun 20 19:08:56.659520 kubelet[2567]: I0620 19:08:56.659482 2567 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:08:56.688499 containerd[1485]: time="2025-06-20T19:08:56.687661276Z" level=info msg="shim disconnected" id=c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345 namespace=k8s.io Jun 20 19:08:56.691815 containerd[1485]: time="2025-06-20T19:08:56.688498358Z" level=warning msg="cleaning up after shim disconnected" id=c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345 namespace=k8s.io Jun 20 19:08:56.691815 containerd[1485]: time="2025-06-20T19:08:56.688520322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:08:56.691706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c644f1858a7b203e3f7b46b0125923084c2ee94cfff168edbc2d3ccf0f139345-rootfs.mount: Deactivated successfully. Jun 20 19:08:56.709511 kubelet[2567]: I0620 19:08:56.709143 2567 status_manager.go:890] "Failed to get status for pod" podUID="98572ce2-e57e-4e79-8854-b90ae7f15270" pod="kube-system/coredns-668d6bf9bc-m8nft" err="pods \"coredns-668d6bf9bc-m8nft\" is forbidden: User \"system:node:ci-4230.2.0-6-80f26ce993\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.0-6-80f26ce993' and this object" Jun 20 19:08:56.711923 kubelet[2567]: W0620 19:08:56.711783 2567 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230.2.0-6-80f26ce993" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230.2.0-6-80f26ce993' and this object Jun 20 19:08:56.711923 kubelet[2567]: E0620 19:08:56.711882 2567 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4230.2.0-6-80f26ce993\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.0-6-80f26ce993' and this object" logger="UnhandledError" Jun 20 19:08:56.720911 systemd[1]: Created slice kubepods-burstable-pod98572ce2_e57e_4e79_8854_b90ae7f15270.slice - libcontainer container kubepods-burstable-pod98572ce2_e57e_4e79_8854_b90ae7f15270.slice. Jun 20 19:08:56.745722 systemd[1]: Created slice kubepods-burstable-podcbabbb4c_8b7a_451f_9103_1df83da62288.slice - libcontainer container kubepods-burstable-podcbabbb4c_8b7a_451f_9103_1df83da62288.slice. 
Jun 20 19:08:56.795565 kubelet[2567]: I0620 19:08:56.795493 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ch6n\" (UniqueName: \"kubernetes.io/projected/cbabbb4c-8b7a-451f-9103-1df83da62288-kube-api-access-5ch6n\") pod \"coredns-668d6bf9bc-rbr9n\" (UID: \"cbabbb4c-8b7a-451f-9103-1df83da62288\") " pod="kube-system/coredns-668d6bf9bc-rbr9n" Jun 20 19:08:56.795804 kubelet[2567]: I0620 19:08:56.795595 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbabbb4c-8b7a-451f-9103-1df83da62288-config-volume\") pod \"coredns-668d6bf9bc-rbr9n\" (UID: \"cbabbb4c-8b7a-451f-9103-1df83da62288\") " pod="kube-system/coredns-668d6bf9bc-rbr9n" Jun 20 19:08:56.795804 kubelet[2567]: I0620 19:08:56.795650 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98572ce2-e57e-4e79-8854-b90ae7f15270-config-volume\") pod \"coredns-668d6bf9bc-m8nft\" (UID: \"98572ce2-e57e-4e79-8854-b90ae7f15270\") " pod="kube-system/coredns-668d6bf9bc-m8nft" Jun 20 19:08:56.795804 kubelet[2567]: I0620 19:08:56.795685 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrwr\" (UniqueName: \"kubernetes.io/projected/98572ce2-e57e-4e79-8854-b90ae7f15270-kube-api-access-mfrwr\") pod \"coredns-668d6bf9bc-m8nft\" (UID: \"98572ce2-e57e-4e79-8854-b90ae7f15270\") " pod="kube-system/coredns-668d6bf9bc-m8nft" Jun 20 19:08:56.840591 kubelet[2567]: E0620 19:08:56.838914 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:56.844341 containerd[1485]: time="2025-06-20T19:08:56.843610782Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jun 20 19:08:56.857742 containerd[1485]: time="2025-06-20T19:08:56.857687325Z" level=info msg="CreateContainer within sandbox \"adae42475e2990aeb3cd7510318532850bfa0ed1f25075cfadba15558b4bcaac\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4975cce96ab7bb3a16eda5178351809ad4bf38d2d0d568b892049805dbd1b99c\"" Jun 20 19:08:56.861885 containerd[1485]: time="2025-06-20T19:08:56.858677246Z" level=info msg="StartContainer for \"4975cce96ab7bb3a16eda5178351809ad4bf38d2d0d568b892049805dbd1b99c\"" Jun 20 19:08:56.897162 systemd[1]: Started cri-containerd-4975cce96ab7bb3a16eda5178351809ad4bf38d2d0d568b892049805dbd1b99c.scope - libcontainer container 4975cce96ab7bb3a16eda5178351809ad4bf38d2d0d568b892049805dbd1b99c. 
Jun 20 19:08:56.951892 containerd[1485]: time="2025-06-20T19:08:56.950945806Z" level=info msg="StartContainer for \"4975cce96ab7bb3a16eda5178351809ad4bf38d2d0d568b892049805dbd1b99c\" returns successfully" Jun 20 19:08:57.849736 kubelet[2567]: E0620 19:08:57.848553 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:57.866292 kubelet[2567]: I0620 19:08:57.864910 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9zzpr" podStartSLOduration=3.248716051 podStartE2EDuration="9.864886414s" podCreationTimestamp="2025-06-20 19:08:48 +0000 UTC" firstStartedPulling="2025-06-20 19:08:49.886672008 +0000 UTC m=+7.355450176" lastFinishedPulling="2025-06-20 19:08:56.502842383 +0000 UTC m=+13.971620539" observedRunningTime="2025-06-20 19:08:57.8647843 +0000 UTC m=+15.333562477" watchObservedRunningTime="2025-06-20 19:08:57.864886414 +0000 UTC m=+15.333664587" Jun 20 19:08:57.931246 kubelet[2567]: E0620 19:08:57.931124 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:57.932598 containerd[1485]: time="2025-06-20T19:08:57.932562928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m8nft,Uid:98572ce2-e57e-4e79-8854-b90ae7f15270,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:57.957569 kubelet[2567]: E0620 19:08:57.954428 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:57.964937 containerd[1485]: time="2025-06-20T19:08:57.964235751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbr9n,Uid:cbabbb4c-8b7a-451f-9103-1df83da62288,Namespace:kube-system,Attempt:0,}" Jun 20 19:08:58.048926 systemd-networkd[1375]: flannel.1: Link UP Jun 20 19:08:58.055525 systemd-networkd[1375]: flannel.1: Gained carrier Jun 20 19:08:58.072164 containerd[1485]: time="2025-06-20T19:08:58.071391222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m8nft,Uid:98572ce2-e57e-4e79-8854-b90ae7f15270,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jun 20 19:08:58.072845 kubelet[2567]: E0620 19:08:58.072793 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jun 20 19:08:58.072996 kubelet[2567]: E0620 19:08:58.072927 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-m8nft" Jun 20 19:08:58.072996 kubelet[2567]: E0620 19:08:58.072961 
2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-m8nft" Jun 20 19:08:58.073084 kubelet[2567]: E0620 19:08:58.073023 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m8nft_kube-system(98572ce2-e57e-4e79-8854-b90ae7f15270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m8nft_kube-system(98572ce2-e57e-4e79-8854-b90ae7f15270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-m8nft" podUID="98572ce2-e57e-4e79-8854-b90ae7f15270" Jun 20 19:08:58.081289 containerd[1485]: time="2025-06-20T19:08:58.081178414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbr9n,Uid:cbabbb4c-8b7a-451f-9103-1df83da62288,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jun 20 19:08:58.082043 kubelet[2567]: E0620 19:08:58.081500 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jun 20 19:08:58.082043 kubelet[2567]: E0620 19:08:58.081579 2567 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rbr9n" Jun 20 19:08:58.082043 kubelet[2567]: E0620 19:08:58.081604 2567 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rbr9n" Jun 20 19:08:58.082043 kubelet[2567]: E0620 19:08:58.081658 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rbr9n_kube-system(cbabbb4c-8b7a-451f-9103-1df83da62288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rbr9n_kube-system(cbabbb4c-8b7a-451f-9103-1df83da62288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" 
pod="kube-system/coredns-668d6bf9bc-rbr9n" podUID="cbabbb4c-8b7a-451f-9103-1df83da62288" Jun 20 19:08:58.514583 systemd[1]: run-netns-cni\x2d1c73a8ac\x2d4695\x2db3f5\x2d17bb\x2d76883dae8284.mount: Deactivated successfully. Jun 20 19:08:58.514702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efcd9c014668c31da785cc4df72de528f7c16856462121278085d5ce3f16372f-shm.mount: Deactivated successfully. Jun 20 19:08:58.514771 systemd[1]: run-netns-cni\x2dd6befd55\x2d9c5b\x2d9d4d\x2d6f5e\x2d37b8ccbe7567.mount: Deactivated successfully. Jun 20 19:08:58.514841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0140ab6172ae7b32cb6ea66640ffe179e8f1320f9e4e6257edec09ca3657241-shm.mount: Deactivated successfully. Jun 20 19:08:58.850489 kubelet[2567]: E0620 19:08:58.850371 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:08:59.641095 systemd-networkd[1375]: flannel.1: Gained IPv6LL Jun 20 19:09:09.715256 kubelet[2567]: E0620 19:09:09.715199 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:09.716291 containerd[1485]: time="2025-06-20T19:09:09.716240363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbr9n,Uid:cbabbb4c-8b7a-451f-9103-1df83da62288,Namespace:kube-system,Attempt:0,}" Jun 20 19:09:09.757457 systemd-networkd[1375]: cni0: Link UP Jun 20 19:09:09.757467 systemd-networkd[1375]: cni0: Gained carrier Jun 20 19:09:09.765837 systemd-networkd[1375]: cni0: Lost carrier Jun 20 19:09:09.784140 kernel: cni0: port 1(vethd5153cf8) entered blocking state Jun 20 19:09:09.784349 kernel: cni0: port 1(vethd5153cf8) entered disabled state Jun 20 19:09:09.785975 systemd-networkd[1375]: vethd5153cf8: Link UP Jun 20 19:09:09.788241 kernel: vethd5153cf8: entered allmulticast mode Jun 20 19:09:09.788334 kernel: vethd5153cf8: entered promiscuous mode Jun 20 19:09:09.788356 kernel: cni0: port 1(vethd5153cf8) entered blocking state Jun 20 19:09:09.789292 kernel: cni0: port 1(vethd5153cf8) entered forwarding state Jun 20 19:09:09.792957 kernel: cni0: port 1(vethd5153cf8) entered disabled state Jun 20 19:09:09.801839 kernel: cni0: port 1(vethd5153cf8) entered blocking state Jun 20 19:09:09.801980 kernel: cni0: port 1(vethd5153cf8) entered forwarding state Jun 20 19:09:09.800299 systemd-networkd[1375]: vethd5153cf8: Gained carrier Jun 20 19:09:09.800810 systemd-networkd[1375]: cni0: Gained carrier Jun 20 19:09:09.814588 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Jun 20 19:09:09.814588 containerd[1485]: delegateAdd: netconf sent to delegate plugin: Jun 20 19:09:09.850725 containerd[1485]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-06-20T19:09:09.850390742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:09:09.850725 containerd[1485]: time="2025-06-20T19:09:09.850474127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:09:09.850725 containerd[1485]: time="2025-06-20T19:09:09.850490686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:09:09.850725 containerd[1485]: time="2025-06-20T19:09:09.850626222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:09:09.889250 systemd[1]: Started cri-containerd-90c28deb28be0edc8f41a6efffe8e58dbb9a393a8a806b292f9b1718107652dc.scope - libcontainer container 90c28deb28be0edc8f41a6efffe8e58dbb9a393a8a806b292f9b1718107652dc. Jun 20 19:09:09.954593 containerd[1485]: time="2025-06-20T19:09:09.954537190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rbr9n,Uid:cbabbb4c-8b7a-451f-9103-1df83da62288,Namespace:kube-system,Attempt:0,} returns sandbox id \"90c28deb28be0edc8f41a6efffe8e58dbb9a393a8a806b292f9b1718107652dc\"" Jun 20 19:09:09.956772 kubelet[2567]: E0620 19:09:09.956182 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:09.958857 containerd[1485]: time="2025-06-20T19:09:09.958798549Z" level=info msg="CreateContainer within sandbox \"90c28deb28be0edc8f41a6efffe8e58dbb9a393a8a806b292f9b1718107652dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:09:09.985197 containerd[1485]: time="2025-06-20T19:09:09.984981663Z" level=info msg="CreateContainer within sandbox \"90c28deb28be0edc8f41a6efffe8e58dbb9a393a8a806b292f9b1718107652dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6050a10fb187716dd61e1120cb18b572954140497b1e1d77f50d57bb34a957c\"" Jun 20 19:09:09.987687 containerd[1485]: time="2025-06-20T19:09:09.987422497Z" level=info msg="StartContainer for \"f6050a10fb187716dd61e1120cb18b572954140497b1e1d77f50d57bb34a957c\"" Jun 20 19:09:10.032573 systemd[1]: Started cri-containerd-f6050a10fb187716dd61e1120cb18b572954140497b1e1d77f50d57bb34a957c.scope - libcontainer container f6050a10fb187716dd61e1120cb18b572954140497b1e1d77f50d57bb34a957c. 
Jun 20 19:09:10.075398 containerd[1485]: time="2025-06-20T19:09:10.075246240Z" level=info msg="StartContainer for \"f6050a10fb187716dd61e1120cb18b572954140497b1e1d77f50d57bb34a957c\" returns successfully" Jun 20 19:09:10.716266 kubelet[2567]: E0620 19:09:10.715532 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:10.718054 containerd[1485]: time="2025-06-20T19:09:10.717396562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m8nft,Uid:98572ce2-e57e-4e79-8854-b90ae7f15270,Namespace:kube-system,Attempt:0,}" Jun 20 19:09:10.730805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882059735.mount: Deactivated successfully. Jun 20 19:09:10.760467 kernel: cni0: port 2(veth7a6d6b93) entered blocking state Jun 20 19:09:10.760619 kernel: cni0: port 2(veth7a6d6b93) entered disabled state Jun 20 19:09:10.759282 systemd-networkd[1375]: veth7a6d6b93: Link UP Jun 20 19:09:10.762978 kernel: veth7a6d6b93: entered allmulticast mode Jun 20 19:09:10.764985 kernel: veth7a6d6b93: entered promiscuous mode Jun 20 19:09:10.765208 kernel: cni0: port 2(veth7a6d6b93) entered blocking state Jun 20 19:09:10.767139 kernel: cni0: port 2(veth7a6d6b93) entered forwarding state Jun 20 19:09:10.771196 kernel: cni0: port 2(veth7a6d6b93) entered disabled state Jun 20 19:09:10.782265 kernel: cni0: port 2(veth7a6d6b93) entered blocking state Jun 20 19:09:10.782379 kernel: cni0: port 2(veth7a6d6b93) entered forwarding state Jun 20 19:09:10.783465 systemd-networkd[1375]: veth7a6d6b93: Gained carrier Jun 20 19:09:10.790581 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106628), "name":"cbr0", "type":"bridge"} Jun 20 19:09:10.790581 containerd[1485]: delegateAdd: netconf sent to delegate plugin: Jun 20 19:09:10.825237 containerd[1485]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-06-20T19:09:10.824430457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:09:10.825237 containerd[1485]: time="2025-06-20T19:09:10.824593544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:09:10.825237 containerd[1485]: time="2025-06-20T19:09:10.824616727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:09:10.825969 containerd[1485]: time="2025-06-20T19:09:10.825114944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:09:10.852424 systemd[1]: run-containerd-runc-k8s.io-95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a-runc.uPbOAI.mount: Deactivated successfully. 
Jun 20 19:09:10.862179 systemd[1]: Started cri-containerd-95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a.scope - libcontainer container 95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a. Jun 20 19:09:10.884746 kubelet[2567]: E0620 19:09:10.884682 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:10.909281 kubelet[2567]: I0620 19:09:10.909200 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rbr9n" podStartSLOduration=21.909176319 podStartE2EDuration="21.909176319s" podCreationTimestamp="2025-06-20 19:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:09:10.903785178 +0000 UTC m=+28.372563355" watchObservedRunningTime="2025-06-20 19:09:10.909176319 +0000 UTC m=+28.377954497" Jun 20 19:09:10.948210 containerd[1485]: time="2025-06-20T19:09:10.947748923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m8nft,Uid:98572ce2-e57e-4e79-8854-b90ae7f15270,Namespace:kube-system,Attempt:0,} returns sandbox id \"95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a\"" Jun 20 19:09:10.951048 kubelet[2567]: E0620 19:09:10.950562 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:10.957561 containerd[1485]: time="2025-06-20T19:09:10.957347448Z" level=info msg="CreateContainer within sandbox \"95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:09:10.978894 containerd[1485]: time="2025-06-20T19:09:10.978689121Z" level=info msg="CreateContainer within sandbox \"95edb9d7722486085d75afc733c71e1740a155f60c2ca89ad4c1e03180576f7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9868b974aedbe00033fa4a83fe68cadb317ce0596ccebdbca9314c2fdc67379e\"" Jun 20 19:09:10.981873 containerd[1485]: time="2025-06-20T19:09:10.980129445Z" level=info msg="StartContainer for \"9868b974aedbe00033fa4a83fe68cadb317ce0596ccebdbca9314c2fdc67379e\"" Jun 20 19:09:11.025109 systemd[1]: Started cri-containerd-9868b974aedbe00033fa4a83fe68cadb317ce0596ccebdbca9314c2fdc67379e.scope - libcontainer container 9868b974aedbe00033fa4a83fe68cadb317ce0596ccebdbca9314c2fdc67379e. 
Jun 20 19:09:11.033685 systemd-networkd[1375]: cni0: Gained IPv6LL Jun 20 19:09:11.068936 containerd[1485]: time="2025-06-20T19:09:11.068456022Z" level=info msg="StartContainer for \"9868b974aedbe00033fa4a83fe68cadb317ce0596ccebdbca9314c2fdc67379e\" returns successfully" Jun 20 19:09:11.801180 systemd-networkd[1375]: vethd5153cf8: Gained IPv6LL Jun 20 19:09:11.887981 kubelet[2567]: E0620 19:09:11.886587 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:11.887981 kubelet[2567]: E0620 19:09:11.887305 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:11.901892 kubelet[2567]: I0620 19:09:11.900887 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m8nft" podStartSLOduration=22.900845262 podStartE2EDuration="22.900845262s" podCreationTimestamp="2025-06-20 19:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:09:11.900749287 +0000 UTC m=+29.369527477" watchObservedRunningTime="2025-06-20 19:09:11.900845262 +0000 UTC m=+29.369623437" Jun 20 19:09:12.377181 systemd-networkd[1375]: veth7a6d6b93: Gained IPv6LL Jun 20 19:09:12.889051 kubelet[2567]: E0620 19:09:12.888833 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:12.889051 kubelet[2567]: E0620 19:09:12.888952 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:13.891335 kubelet[2567]: E0620 19:09:13.891245 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 20 19:09:23.747331 systemd[1]: Started sshd@7-146.190.167.30:22-139.178.68.195:41160.service - OpenSSH per-connection server daemon (139.178.68.195:41160). Jun 20 19:09:23.801078 sshd[3555]: Accepted publickey for core from 139.178.68.195 port 41160 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak Jun 20 19:09:23.802852 sshd-session[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:09:23.807905 systemd-logind[1465]: New session 8 of user core. Jun 20 19:09:23.819190 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:09:23.980635 sshd[3557]: Connection closed by 139.178.68.195 port 41160 Jun 20 19:09:23.980508 sshd-session[3555]: pam_unix(sshd:session): session closed for user core Jun 20 19:09:23.984762 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:09:23.985019 systemd[1]: sshd@7-146.190.167.30:22-139.178.68.195:41160.service: Deactivated successfully. Jun 20 19:09:23.987566 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:09:23.989817 systemd-logind[1465]: Removed session 8. Jun 20 19:09:28.999274 systemd[1]: Started sshd@8-146.190.167.30:22-139.178.68.195:41176.service - OpenSSH per-connection server daemon (139.178.68.195:41176). 
Jun 20 19:09:29.063204 sshd[3591]: Accepted publickey for core from 139.178.68.195 port 41176 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:29.065170 sshd-session[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:29.072173 systemd-logind[1465]: New session 9 of user core.
Jun 20 19:09:29.077208 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:09:29.212374 sshd[3593]: Connection closed by 139.178.68.195 port 41176
Jun 20 19:09:29.213104 sshd-session[3591]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:29.218210 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:09:29.219558 systemd[1]: sshd@8-146.190.167.30:22-139.178.68.195:41176.service: Deactivated successfully.
Jun 20 19:09:29.223272 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:09:29.225200 systemd-logind[1465]: Removed session 9.
Jun 20 19:09:34.233409 systemd[1]: Started sshd@9-146.190.167.30:22-139.178.68.195:43688.service - OpenSSH per-connection server daemon (139.178.68.195:43688).
Jun 20 19:09:34.286042 sshd[3626]: Accepted publickey for core from 139.178.68.195 port 43688 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:34.288638 sshd-session[3626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:34.295184 systemd-logind[1465]: New session 10 of user core.
Jun 20 19:09:34.300146 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:09:34.441931 sshd[3628]: Connection closed by 139.178.68.195 port 43688
Jun 20 19:09:34.442721 sshd-session[3626]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:34.461447 systemd[1]: sshd@9-146.190.167.30:22-139.178.68.195:43688.service: Deactivated successfully.
Jun 20 19:09:34.465236 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:09:34.468810 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:09:34.475429 systemd[1]: Started sshd@10-146.190.167.30:22-139.178.68.195:43702.service - OpenSSH per-connection server daemon (139.178.68.195:43702).
Jun 20 19:09:34.478591 systemd-logind[1465]: Removed session 10.
Jun 20 19:09:34.529346 sshd[3640]: Accepted publickey for core from 139.178.68.195 port 43702 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:34.531508 sshd-session[3640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:34.539090 systemd-logind[1465]: New session 11 of user core.
Jun 20 19:09:34.546254 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:09:34.730300 sshd[3643]: Connection closed by 139.178.68.195 port 43702
Jun 20 19:09:34.730829 sshd-session[3640]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:34.752358 systemd[1]: sshd@10-146.190.167.30:22-139.178.68.195:43702.service: Deactivated successfully.
Jun 20 19:09:34.758721 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:09:34.762611 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:09:34.778178 systemd[1]: Started sshd@11-146.190.167.30:22-139.178.68.195:43704.service - OpenSSH per-connection server daemon (139.178.68.195:43704).
Jun 20 19:09:34.782058 systemd-logind[1465]: Removed session 11.
Jun 20 19:09:34.834018 sshd[3652]: Accepted publickey for core from 139.178.68.195 port 43704 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:34.836421 sshd-session[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:34.842283 systemd-logind[1465]: New session 12 of user core.
Jun 20 19:09:34.854189 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:09:35.013637 sshd[3655]: Connection closed by 139.178.68.195 port 43704
Jun 20 19:09:35.014702 sshd-session[3652]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:35.019820 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:09:35.022757 systemd[1]: sshd@11-146.190.167.30:22-139.178.68.195:43704.service: Deactivated successfully.
Jun 20 19:09:35.026300 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:09:35.028826 systemd-logind[1465]: Removed session 12.
Jun 20 19:09:40.039521 systemd[1]: Started sshd@12-146.190.167.30:22-139.178.68.195:43712.service - OpenSSH per-connection server daemon (139.178.68.195:43712).
Jun 20 19:09:40.095986 sshd[3691]: Accepted publickey for core from 139.178.68.195 port 43712 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:40.098387 sshd-session[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:40.107192 systemd-logind[1465]: New session 13 of user core.
Jun 20 19:09:40.119241 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:09:40.274070 sshd[3693]: Connection closed by 139.178.68.195 port 43712
Jun 20 19:09:40.273938 sshd-session[3691]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:40.278514 systemd[1]: sshd@12-146.190.167.30:22-139.178.68.195:43712.service: Deactivated successfully.
Jun 20 19:09:40.282078 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:09:40.284294 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:09:40.285926 systemd-logind[1465]: Removed session 13.
Jun 20 19:09:45.294366 systemd[1]: Started sshd@13-146.190.167.30:22-139.178.68.195:56224.service - OpenSSH per-connection server daemon (139.178.68.195:56224).
Jun 20 19:09:45.356895 sshd[3728]: Accepted publickey for core from 139.178.68.195 port 56224 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:45.359536 sshd-session[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:45.366823 systemd-logind[1465]: New session 14 of user core.
Jun 20 19:09:45.372259 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:09:45.516403 sshd[3730]: Connection closed by 139.178.68.195 port 56224
Jun 20 19:09:45.517504 sshd-session[3728]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:45.522581 systemd[1]: sshd@13-146.190.167.30:22-139.178.68.195:56224.service: Deactivated successfully.
Jun 20 19:09:45.526183 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:09:45.527306 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:09:45.529922 systemd-logind[1465]: Removed session 14.
Jun 20 19:09:50.539782 systemd[1]: Started sshd@14-146.190.167.30:22-139.178.68.195:56230.service - OpenSSH per-connection server daemon (139.178.68.195:56230).
Jun 20 19:09:50.595147 sshd[3765]: Accepted publickey for core from 139.178.68.195 port 56230 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:50.597548 sshd-session[3765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:50.605801 systemd-logind[1465]: New session 15 of user core.
Jun 20 19:09:50.612273 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:09:50.755520 sshd[3767]: Connection closed by 139.178.68.195 port 56230
Jun 20 19:09:50.756201 sshd-session[3765]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:50.761405 systemd[1]: sshd@14-146.190.167.30:22-139.178.68.195:56230.service: Deactivated successfully.
Jun 20 19:09:50.764329 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:09:50.765923 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:09:50.768251 systemd-logind[1465]: Removed session 15.
Jun 20 19:09:55.715619 kubelet[2567]: E0620 19:09:55.715561 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 20 19:09:55.777274 systemd[1]: Started sshd@15-146.190.167.30:22-139.178.68.195:53002.service - OpenSSH per-connection server daemon (139.178.68.195:53002).
Jun 20 19:09:55.830564 sshd[3802]: Accepted publickey for core from 139.178.68.195 port 53002 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:55.832756 sshd-session[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:55.839348 systemd-logind[1465]: New session 16 of user core.
Jun 20 19:09:55.846617 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:09:56.005209 sshd[3804]: Connection closed by 139.178.68.195 port 53002
Jun 20 19:09:56.006079 sshd-session[3802]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:56.018972 systemd[1]: sshd@15-146.190.167.30:22-139.178.68.195:53002.service: Deactivated successfully.
Jun 20 19:09:56.023266 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:09:56.027134 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:09:56.031463 systemd[1]: Started sshd@16-146.190.167.30:22-139.178.68.195:53012.service - OpenSSH per-connection server daemon (139.178.68.195:53012).
Jun 20 19:09:56.033304 systemd-logind[1465]: Removed session 16.
Jun 20 19:09:56.094454 sshd[3815]: Accepted publickey for core from 139.178.68.195 port 53012 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:56.096696 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:56.102723 systemd-logind[1465]: New session 17 of user core.
Jun 20 19:09:56.110284 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:09:56.377513 sshd[3818]: Connection closed by 139.178.68.195 port 53012
Jun 20 19:09:56.380251 sshd-session[3815]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:56.392507 systemd[1]: sshd@16-146.190.167.30:22-139.178.68.195:53012.service: Deactivated successfully.
Jun 20 19:09:56.395181 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:09:56.397857 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:09:56.411424 systemd[1]: Started sshd@17-146.190.167.30:22-139.178.68.195:53020.service - OpenSSH per-connection server daemon (139.178.68.195:53020).
Jun 20 19:09:56.413782 systemd-logind[1465]: Removed session 17.
Jun 20 19:09:56.473458 sshd[3826]: Accepted publickey for core from 139.178.68.195 port 53020 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:56.475473 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:56.482855 systemd-logind[1465]: New session 18 of user core.
Jun 20 19:09:56.489184 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:09:57.448820 sshd[3829]: Connection closed by 139.178.68.195 port 53020
Jun 20 19:09:57.449300 sshd-session[3826]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:57.464587 systemd[1]: sshd@17-146.190.167.30:22-139.178.68.195:53020.service: Deactivated successfully.
Jun 20 19:09:57.469229 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:09:57.472245 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:09:57.480682 systemd[1]: Started sshd@18-146.190.167.30:22-139.178.68.195:53024.service - OpenSSH per-connection server daemon (139.178.68.195:53024).
Jun 20 19:09:57.487348 systemd-logind[1465]: Removed session 18.
Jun 20 19:09:57.544746 sshd[3846]: Accepted publickey for core from 139.178.68.195 port 53024 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:57.547041 sshd-session[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:57.553622 systemd-logind[1465]: New session 19 of user core.
Jun 20 19:09:57.557060 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:09:57.835915 sshd[3849]: Connection closed by 139.178.68.195 port 53024
Jun 20 19:09:57.836206 sshd-session[3846]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:57.851392 systemd[1]: sshd@18-146.190.167.30:22-139.178.68.195:53024.service: Deactivated successfully.
Jun 20 19:09:57.854781 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:09:57.858769 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:09:57.864353 systemd[1]: Started sshd@19-146.190.167.30:22-139.178.68.195:53038.service - OpenSSH per-connection server daemon (139.178.68.195:53038).
Jun 20 19:09:57.866169 systemd-logind[1465]: Removed session 19.
Jun 20 19:09:57.913762 sshd[3858]: Accepted publickey for core from 139.178.68.195 port 53038 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:09:57.915576 sshd-session[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:09:57.924098 systemd-logind[1465]: New session 20 of user core.
Jun 20 19:09:57.929079 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:09:58.075063 sshd[3861]: Connection closed by 139.178.68.195 port 53038
Jun 20 19:09:58.075915 sshd-session[3858]: pam_unix(sshd:session): session closed for user core
Jun 20 19:09:58.082640 systemd[1]: sshd@19-146.190.167.30:22-139.178.68.195:53038.service: Deactivated successfully.
Jun 20 19:09:58.086219 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:09:58.088054 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:09:58.089364 systemd-logind[1465]: Removed session 20.
Jun 20 19:10:03.104524 systemd[1]: Started sshd@20-146.190.167.30:22-139.178.68.195:53046.service - OpenSSH per-connection server daemon (139.178.68.195:53046).
Jun 20 19:10:03.165193 sshd[3894]: Accepted publickey for core from 139.178.68.195 port 53046 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:10:03.168177 sshd-session[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:03.176355 systemd-logind[1465]: New session 21 of user core.
Jun 20 19:10:03.182297 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:10:03.357562 sshd[3896]: Connection closed by 139.178.68.195 port 53046
Jun 20 19:10:03.358439 sshd-session[3894]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:03.363021 systemd[1]: sshd@20-146.190.167.30:22-139.178.68.195:53046.service: Deactivated successfully.
Jun 20 19:10:03.366457 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:10:03.369551 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:10:03.371565 systemd-logind[1465]: Removed session 21.
Jun 20 19:10:03.715396 kubelet[2567]: E0620 19:10:03.715220 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 20 19:10:08.376368 systemd[1]: Started sshd@21-146.190.167.30:22-139.178.68.195:49464.service - OpenSSH per-connection server daemon (139.178.68.195:49464).
Jun 20 19:10:08.423772 sshd[3936]: Accepted publickey for core from 139.178.68.195 port 49464 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:10:08.425555 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:08.430598 systemd-logind[1465]: New session 22 of user core.
Jun 20 19:10:08.439200 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:10:08.577548 sshd[3938]: Connection closed by 139.178.68.195 port 49464
Jun 20 19:10:08.578930 sshd-session[3936]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:08.585932 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:10:08.587158 systemd[1]: sshd@21-146.190.167.30:22-139.178.68.195:49464.service: Deactivated successfully.
Jun 20 19:10:08.590701 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:10:08.593490 systemd-logind[1465]: Removed session 22.
Jun 20 19:10:13.599258 systemd[1]: Started sshd@22-146.190.167.30:22-139.178.68.195:60902.service - OpenSSH per-connection server daemon (139.178.68.195:60902).
Jun 20 19:10:13.649439 sshd[3972]: Accepted publickey for core from 139.178.68.195 port 60902 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:10:13.652693 sshd-session[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:13.660579 systemd-logind[1465]: New session 23 of user core.
Jun 20 19:10:13.666245 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:10:13.808179 sshd[3989]: Connection closed by 139.178.68.195 port 60902
Jun 20 19:10:13.809233 sshd-session[3972]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:13.813027 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:10:13.813291 systemd[1]: sshd@22-146.190.167.30:22-139.178.68.195:60902.service: Deactivated successfully.
Jun 20 19:10:13.815506 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:10:13.818417 systemd-logind[1465]: Removed session 23.
Jun 20 19:10:17.716789 kubelet[2567]: E0620 19:10:17.716418 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jun 20 19:10:18.830370 systemd[1]: Started sshd@23-146.190.167.30:22-139.178.68.195:60912.service - OpenSSH per-connection server daemon (139.178.68.195:60912).
Jun 20 19:10:18.877522 sshd[4021]: Accepted publickey for core from 139.178.68.195 port 60912 ssh2: RSA SHA256:ypboh8BxHs6BvTJvOlMQNHXaKP7/rFEO8c4FTxi/aak
Jun 20 19:10:18.879405 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:10:18.885952 systemd-logind[1465]: New session 24 of user core.
Jun 20 19:10:18.892221 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:10:19.058633 sshd[4023]: Connection closed by 139.178.68.195 port 60912
Jun 20 19:10:19.059345 sshd-session[4021]: pam_unix(sshd:session): session closed for user core
Jun 20 19:10:19.063570 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:10:19.064844 systemd[1]: sshd@23-146.190.167.30:22-139.178.68.195:60912.service: Deactivated successfully.
Jun 20 19:10:19.067313 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:10:19.068682 systemd-logind[1465]: Removed session 24.