Jul 2 06:55:20.136325 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 06:55:20.136355 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:55:20.136372 kernel: BIOS-provided physical RAM map: Jul 2 06:55:20.136381 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 06:55:20.136389 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 06:55:20.136397 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 06:55:20.136407 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jul 2 06:55:20.136416 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jul 2 06:55:20.136442 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 06:55:20.136456 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 06:55:20.136466 kernel: NX (Execute Disable) protection: active Jul 2 06:55:20.136475 kernel: SMBIOS 2.8 present. Jul 2 06:55:20.136484 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jul 2 06:55:20.136493 kernel: Hypervisor detected: KVM Jul 2 06:55:20.136506 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 06:55:20.136520 kernel: kvm-clock: using sched offset of 5726485221 cycles Jul 2 06:55:20.136532 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 06:55:20.136543 kernel: tsc: Detected 2494.140 MHz processor Jul 2 06:55:20.136554 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 06:55:20.136565 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 06:55:20.136575 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jul 2 06:55:20.136586 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 06:55:20.136600 kernel: ACPI: Early table checksum verification disabled Jul 2 06:55:20.136611 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jul 2 06:55:20.136627 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136639 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136649 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136660 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jul 2 06:55:20.136670 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136681 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136692 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136702 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:55:20.136717 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jul 2 06:55:20.136728 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jul 2 06:55:20.136740 kernel: ACPI: Reserving FACS table memory at 
[mem 0x7ffe0000-0x7ffe003f] Jul 2 06:55:20.136751 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jul 2 06:55:20.136762 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jul 2 06:55:20.136772 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jul 2 06:55:20.136782 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jul 2 06:55:20.136793 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 06:55:20.136814 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 06:55:20.136825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 06:55:20.136837 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 2 06:55:20.136849 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jul 2 06:55:20.136860 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jul 2 06:55:20.136871 kernel: Zone ranges: Jul 2 06:55:20.136883 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 06:55:20.136898 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jul 2 06:55:20.136910 kernel: Normal empty Jul 2 06:55:20.136921 kernel: Movable zone start for each node Jul 2 06:55:20.136932 kernel: Early memory node ranges Jul 2 06:55:20.136943 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 06:55:20.136955 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jul 2 06:55:20.136966 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jul 2 06:55:20.136996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 06:55:20.137008 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 06:55:20.137024 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jul 2 06:55:20.137036 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 2 06:55:20.137048 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 06:55:20.137059 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 06:55:20.137071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 06:55:20.137083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 06:55:20.137095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 06:55:20.137107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 06:55:20.137119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 06:55:20.137136 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 06:55:20.137148 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 06:55:20.137159 kernel: TSC deadline timer available Jul 2 06:55:20.137171 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 06:55:20.137182 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jul 2 06:55:20.137194 kernel: Booting paravirtualized kernel on KVM Jul 2 06:55:20.137206 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 06:55:20.137218 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 06:55:20.137230 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jul 2 06:55:20.137247 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jul 2 06:55:20.137258 kernel: pcpu-alloc: [0] 0 1 Jul 2 06:55:20.137269 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 2 06:55:20.137281 kernel: Fallback order for Node 0: 0 Jul 2 06:55:20.137293 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 515800 Jul 2 06:55:20.137304 kernel: Policy zone: DMA32 Jul 2 06:55:20.137318 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:55:20.137331 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 06:55:20.137347 kernel: random: crng init done Jul 2 06:55:20.137359 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 06:55:20.137370 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 06:55:20.137382 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 06:55:20.137394 kernel: Memory: 1967112K/2096600K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129228K reserved, 0K cma-reserved) Jul 2 06:55:20.137407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 06:55:20.137418 kernel: Kernel/User page tables isolation: enabled Jul 2 06:55:20.137430 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 06:55:20.137441 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 06:55:20.137458 kernel: Dynamic Preempt: voluntary Jul 2 06:55:20.137471 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 06:55:20.137484 kernel: rcu: RCU event tracing is enabled. Jul 2 06:55:20.137496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 06:55:20.137507 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 06:55:20.137520 kernel: Rude variant of Tasks RCU enabled. Jul 2 06:55:20.137531 kernel: Tracing variant of Tasks RCU enabled. Jul 2 06:55:20.137543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 06:55:20.137555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 06:55:20.137571 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 06:55:20.137584 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 06:55:20.137598 kernel: Console: colour VGA+ 80x25 Jul 2 06:55:20.137609 kernel: printk: console [tty0] enabled Jul 2 06:55:20.137622 kernel: printk: console [ttyS0] enabled Jul 2 06:55:20.137633 kernel: ACPI: Core revision 20220331 Jul 2 06:55:20.137645 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 06:55:20.137659 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 06:55:20.137671 kernel: x2apic enabled Jul 2 06:55:20.137687 kernel: Switched APIC routing to physical x2apic. Jul 2 06:55:20.137698 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 06:55:20.137710 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 06:55:20.137722 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Jul 2 06:55:20.137734 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 2 06:55:20.137745 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 2 06:55:20.137757 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 06:55:20.137769 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 06:55:20.137781 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 06:55:20.137811 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 06:55:20.137824 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 2 06:55:20.137837 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 06:55:20.137852 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 06:55:20.137864 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 06:55:20.137875 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 06:55:20.137888 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 06:55:20.137900 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 06:55:20.137912 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 06:55:20.137928 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 06:55:20.137941 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 06:55:20.137954 kernel: Freeing SMP alternatives memory: 32K Jul 2 06:55:20.137965 kernel: pid_max: default: 32768 minimum: 301 Jul 2 06:55:20.137991 kernel: LSM: Security Framework initializing Jul 2 06:55:20.138003 kernel: SELinux: Initializing. Jul 2 06:55:20.138016 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 06:55:20.138032 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 06:55:20.138044 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jul 2 06:55:20.138056 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:55:20.138068 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:55:20.138080 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:55:20.138093 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:55:20.138104 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:55:20.138116 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 06:55:20.138128 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jul 2 06:55:20.138139 kernel: signal: max sigframe size: 1776 Jul 2 06:55:20.138158 kernel: rcu: Hierarchical SRCU implementation. Jul 2 06:55:20.138171 kernel: rcu: Max phase no-delay instances is 400. Jul 2 06:55:20.138183 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 06:55:20.138195 kernel: smp: Bringing up secondary CPUs ... Jul 2 06:55:20.138207 kernel: x86: Booting SMP configuration: Jul 2 06:55:20.138219 kernel: .... 
node #0, CPUs: #1 Jul 2 06:55:20.138231 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 06:55:20.138244 kernel: smpboot: Max logical packages: 1 Jul 2 06:55:20.138257 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jul 2 06:55:20.138274 kernel: devtmpfs: initialized Jul 2 06:55:20.138286 kernel: x86/mm: Memory block size: 128MB Jul 2 06:55:20.138298 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 06:55:20.138311 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 06:55:20.138324 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 06:55:20.138336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 06:55:20.138350 kernel: audit: initializing netlink subsys (disabled) Jul 2 06:55:20.138362 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 06:55:20.138375 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 06:55:20.138391 kernel: cpuidle: using governor menu Jul 2 06:55:20.138404 kernel: audit: type=2000 audit(1719903318.664:1): state=initialized audit_enabled=0 res=1 Jul 2 06:55:20.138416 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 06:55:20.138429 kernel: dca service started, version 1.12.1 Jul 2 06:55:20.138442 kernel: PCI: Using configuration type 1 for base access Jul 2 06:55:20.138454 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 06:55:20.138466 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 06:55:20.138478 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 06:55:20.138490 kernel: ACPI: Added _OSI(Module Device) Jul 2 06:55:20.138507 kernel: ACPI: Added _OSI(Processor Device) Jul 2 06:55:20.138520 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 06:55:20.138532 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 06:55:20.138545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 06:55:20.138557 kernel: ACPI: Interpreter enabled Jul 2 06:55:20.138570 kernel: ACPI: PM: (supports S0 S5) Jul 2 06:55:20.138583 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 06:55:20.138595 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 06:55:20.138607 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 06:55:20.138623 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 06:55:20.138636 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 06:55:20.139047 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 06:55:20.139230 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 06:55:20.139367 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 06:55:20.139385 kernel: acpiphp: Slot [3] registered Jul 2 06:55:20.139399 kernel: acpiphp: Slot [4] registered Jul 2 06:55:20.139421 kernel: acpiphp: Slot [5] registered Jul 2 06:55:20.139433 kernel: acpiphp: Slot [6] registered Jul 2 06:55:20.139446 kernel: acpiphp: Slot [7] registered Jul 2 06:55:20.139458 kernel: acpiphp: Slot [8] registered Jul 2 06:55:20.139471 kernel: acpiphp: Slot [9] registered Jul 2 06:55:20.139483 kernel: acpiphp: Slot [10] registered Jul 2 06:55:20.139496 kernel: acpiphp: Slot [11] registered Jul 2 06:55:20.139508 kernel: acpiphp: Slot [12] registered Jul 2 06:55:20.139521 kernel: acpiphp: Slot [13] registered Jul 2 06:55:20.139533 kernel: acpiphp: Slot [14] registered Jul 2 06:55:20.139550 kernel: acpiphp: Slot [15] registered Jul 2 06:55:20.139562 kernel: acpiphp: Slot [16] registered Jul 2 06:55:20.139575 kernel: acpiphp: Slot [17] registered Jul 2 06:55:20.139587 kernel: acpiphp: Slot [18] registered Jul 2 06:55:20.139600 kernel: acpiphp: Slot [19] registered Jul 2 06:55:20.139612 kernel: acpiphp: Slot [20] registered Jul 2 06:55:20.139625 kernel: acpiphp: Slot [21] registered Jul 2 06:55:20.139638 kernel: acpiphp: Slot [22] registered Jul 2 06:55:20.139661 kernel: acpiphp: Slot [23] registered Jul 2 06:55:20.139678 kernel: acpiphp: Slot [24] registered Jul 2 06:55:20.139690 kernel: acpiphp: Slot [25] registered Jul 2 06:55:20.139703 kernel: acpiphp: Slot [26] registered Jul 2 06:55:20.139715 kernel: acpiphp: Slot [27] registered Jul 2 06:55:20.139728 kernel: acpiphp: Slot [28] registered Jul 2 06:55:20.139740 kernel: acpiphp: Slot [29] registered Jul 2 06:55:20.139752 kernel: acpiphp: Slot [30] registered Jul 2 06:55:20.139765 kernel: acpiphp: Slot [31] registered Jul 2 06:55:20.139777 kernel: PCI host bridge to bus 0000:00 Jul 2 06:55:20.139942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 06:55:20.140130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 06:55:20.140310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 06:55:20.140450 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 06:55:20.140563 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 06:55:20.140675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 06:55:20.140830 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 06:55:20.141001 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 06:55:20.141150 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 06:55:20.141291 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jul 2 06:55:20.141430 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 06:55:20.141577 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 06:55:20.141727 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 06:55:20.141878 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 06:55:20.142058 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jul 2 06:55:20.142201 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jul 2 06:55:20.142346 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 06:55:20.142483 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 06:55:20.142623 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 06:55:20.142812 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 
0x030000 Jul 2 06:55:20.142999 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jul 2 06:55:20.143151 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jul 2 06:55:20.143290 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jul 2 06:55:20.143437 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 06:55:20.143583 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 06:55:20.143763 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 2 06:55:20.143907 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jul 2 06:55:20.144061 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jul 2 06:55:20.144192 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jul 2 06:55:20.144343 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 06:55:20.144501 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jul 2 06:55:20.144639 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jul 2 06:55:20.144775 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jul 2 06:55:20.144931 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jul 2 06:55:20.145096 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jul 2 06:55:20.145230 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jul 2 06:55:20.145365 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jul 2 06:55:20.145512 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jul 2 06:55:20.145647 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 06:55:20.145785 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jul 2 06:55:20.145920 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jul 2 06:55:20.146092 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jul 2 06:55:20.146230 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jul 2 06:55:20.146366 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jul 2 06:55:20.146511 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jul 2 06:55:20.146662 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jul 2 06:55:20.146817 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jul 2 06:55:20.146975 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jul 2 06:55:20.146992 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 06:55:20.147021 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 06:55:20.147033 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 06:55:20.147046 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 06:55:20.147059 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 06:55:20.147072 kernel: iommu: Default domain type: Translated Jul 2 06:55:20.147084 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 06:55:20.147103 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 06:55:20.147117 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 06:55:20.147129 kernel: PTP clock support registered Jul 2 06:55:20.147144 kernel: PCI: Using ACPI for IRQ routing Jul 2 06:55:20.147158 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 06:55:20.147171 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 06:55:20.147184 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jul 2 06:55:20.147341 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 06:55:20.147484 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 06:55:20.147635 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 06:55:20.147654 kernel: vgaarb: loaded Jul 2 06:55:20.147667 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 06:55:20.147682 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 06:55:20.147696 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 06:55:20.147710 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 06:55:20.147724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 06:55:20.147737 kernel: pnp: PnP ACPI init Jul 2 06:55:20.147752 kernel: pnp: PnP ACPI: found 4 devices Jul 2 06:55:20.147771 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 06:55:20.147785 kernel: NET: Registered PF_INET protocol family Jul 2 06:55:20.147797 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 06:55:20.147811 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 06:55:20.147825 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 06:55:20.147838 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 06:55:20.147852 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 06:55:20.147867 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 06:55:20.147882 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 06:55:20.147900 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 06:55:20.147914 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 06:55:20.147927 kernel: NET: Registered PF_XDP protocol family Jul 2 06:55:20.148112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 06:55:20.148251 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 06:55:20.148379 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 06:55:20.148532 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 06:55:20.148666 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 06:55:20.148839 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 06:55:20.149018 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 06:55:20.149041 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 06:55:20.149192 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 46866 usecs Jul 2 06:55:20.149212 kernel: PCI: CLS 0 bytes, default 64 Jul 2 06:55:20.149226 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 06:55:20.149239 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jul 2 06:55:20.149257 kernel: Initialise system trusted keyrings Jul 2 06:55:20.149277 kernel: workingset: timestamp_bits=39 max_order=19 
bucket_order=0 Jul 2 06:55:20.149289 kernel: Key type asymmetric registered Jul 2 06:55:20.149302 kernel: Asymmetric key parser 'x509' registered Jul 2 06:55:20.149314 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 06:55:20.149327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 06:55:20.149339 kernel: io scheduler mq-deadline registered Jul 2 06:55:20.149352 kernel: io scheduler kyber registered Jul 2 06:55:20.149364 kernel: io scheduler bfq registered Jul 2 06:55:20.149376 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 06:55:20.149393 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 2 06:55:20.149405 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 06:55:20.149418 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 06:55:20.149430 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 06:55:20.149443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 06:55:20.149455 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 06:55:20.149468 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 06:55:20.149480 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 06:55:20.149493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 06:55:20.149701 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 2 06:55:20.149832 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 06:55:20.149957 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T06:55:19 UTC (1719903319) Jul 2 06:55:20.150093 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 06:55:20.150109 kernel: intel_pstate: CPU model not supported Jul 2 06:55:20.150122 kernel: NET: Registered PF_INET6 protocol family Jul 2 06:55:20.150135 kernel: Segment Routing with IPv6 Jul 2 06:55:20.150148 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 06:55:20.150168 kernel: NET: Registered PF_PACKET protocol family Jul 2 06:55:20.150180 kernel: Key type dns_resolver registered Jul 2 06:55:20.150192 kernel: IPI shorthand broadcast: enabled Jul 2 06:55:20.150205 kernel: sched_clock: Marking stable (1400548747, 127146424)->(1733115214, -205420043) Jul 2 06:55:20.150217 kernel: registered taskstats version 1 Jul 2 06:55:20.150230 kernel: Loading compiled-in X.509 certificates Jul 2 06:55:20.150242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 06:55:20.150255 kernel: Key type .fscrypt registered Jul 2 06:55:20.150267 kernel: Key type fscrypt-provisioning registered Jul 2 06:55:20.150284 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 06:55:20.150296 kernel: ima: Allocated hash algorithm: sha1 Jul 2 06:55:20.150310 kernel: ima: No architecture policies found Jul 2 06:55:20.150324 kernel: clk: Disabling unused clocks Jul 2 06:55:20.150364 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 06:55:20.150382 kernel: Write protecting the kernel read-only data: 34816k Jul 2 06:55:20.150396 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 06:55:20.150410 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 06:55:20.150423 kernel: Run /init as init process Jul 2 06:55:20.150441 kernel: with arguments: Jul 2 06:55:20.150455 kernel: /init Jul 2 06:55:20.150467 kernel: with environment: Jul 2 06:55:20.150482 kernel: HOME=/ Jul 2 06:55:20.150495 kernel: TERM=linux Jul 2 06:55:20.150509 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 06:55:20.150527 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:55:20.150543 systemd[1]: Detected virtualization kvm. Jul 2 06:55:20.150562 systemd[1]: Detected architecture x86-64. Jul 2 06:55:20.150577 systemd[1]: Running in initrd. Jul 2 06:55:20.150592 systemd[1]: No hostname configured, using default hostname. Jul 2 06:55:20.150605 systemd[1]: Hostname set to . Jul 2 06:55:20.150619 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:55:20.150633 systemd[1]: Queued start job for default target initrd.target. Jul 2 06:55:20.150647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:55:20.150661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:55:20.150679 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:55:20.150693 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:55:20.150707 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:55:20.150722 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:55:20.150738 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:55:20.150752 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:55:20.150766 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 06:55:20.150786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 06:55:20.150804 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 06:55:20.150819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:55:20.150832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:55:20.150846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:55:20.150865 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:55:20.150880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:55:20.150894 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 06:55:20.150909 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 06:55:20.150923 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:55:20.150937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 06:55:20.150951 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 06:55:20.150965 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:55:20.150996 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 06:55:20.151015 kernel: audit: type=1130 audit(1719903320.149:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.151040 systemd-journald[180]: Journal started Jul 2 06:55:20.151136 systemd-journald[180]: Runtime Journal (/run/log/journal/63e43403bda140ba8de0b5e175c47174) is 4.9M, max 39.3M, 34.4M free. Jul 2 06:55:20.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.159779 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:55:20.159861 kernel: audit: type=1130 audit(1719903320.157:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.260106 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 06:55:20.260182 kernel: Bridge firewalling registered Jul 2 06:55:20.260205 kernel: SCSI subsystem initialized Jul 2 06:55:20.260221 kernel: audit: type=1130 audit(1719903320.255:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.260239 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 06:55:20.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.163047 systemd-modules-load[181]: Inserted module 'overlay' Jul 2 06:55:20.272193 kernel: device-mapper: uevent: version 1.0.3 Jul 2 06:55:20.272230 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 06:55:20.163383 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:55:20.203840 systemd-modules-load[181]: Inserted module 'br_netfilter' Jul 2 06:55:20.250505 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:55:20.306271 kernel: audit: type=1130 audit(1719903320.277:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.306313 kernel: audit: type=1130 audit(1719903320.277:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:20.306331 kernel: audit: type=1130 audit(1719903320.277:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.252712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:55:20.274594 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:55:20.274709 systemd-modules-load[181]: Inserted module 'dm_multipath' Jul 2 06:55:20.285795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:55:20.286467 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:55:20.311859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:55:20.326578 kernel: audit: type=1334 audit(1719903320.321:8): prog-id=6 op=LOAD Jul 2 06:55:20.321000 audit: BPF prog-id=6 op=LOAD Jul 2 06:55:20.326904 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:55:20.329203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:55:20.425618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:55:20.439783 kernel: audit: type=1130 audit(1719903320.425:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.440359 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:55:20.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.452849 systemd-resolved[193]: Positive Trust Anchors: Jul 2 06:55:20.463951 kernel: audit: type=1130 audit(1719903320.440:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.452867 systemd-resolved[193]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:55:20.452920 systemd-resolved[193]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:55:20.456836 systemd-resolved[193]: Defaulting to hostname 'linux'. Jul 2 06:55:20.461472 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 06:55:20.462275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:55:20.462892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:55:20.507092 dracut-cmdline[205]: dracut-dracut-053 Jul 2 06:55:20.515004 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:55:20.739010 kernel: Loading iSCSI transport class v2.0-870. Jul 2 06:55:20.763250 kernel: iscsi: registered transport (tcp) Jul 2 06:55:20.803013 kernel: iscsi: registered transport (qla4xxx) Jul 2 06:55:20.803107 kernel: QLogic iSCSI HBA Driver Jul 2 06:55:20.908750 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 06:55:20.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:20.915036 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 06:55:21.047074 kernel: raid6: avx2x4 gen() 12952 MB/s Jul 2 06:55:21.084503 kernel: raid6: avx2x2 gen() 13255 MB/s Jul 2 06:55:21.103905 kernel: raid6: avx2x1 gen() 9750 MB/s Jul 2 06:55:21.104004 kernel: raid6: using algorithm avx2x2 gen() 13255 MB/s Jul 2 06:55:21.132095 kernel: raid6: .... xor() 11538 MB/s, rmw enabled Jul 2 06:55:21.132214 kernel: raid6: using avx2x2 recovery algorithm Jul 2 06:55:21.132235 kernel: xor: automatically using best checksumming function avx Jul 2 06:55:21.440096 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 06:55:21.511736 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:55:21.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:21.512000 audit: BPF prog-id=7 op=LOAD Jul 2 06:55:21.512000 audit: BPF prog-id=8 op=LOAD Jul 2 06:55:21.519300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:55:21.569497 systemd-udevd[381]: Using default interface naming scheme 'v252'. Jul 2 06:55:21.578983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 2 06:55:21.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:21.596285 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 06:55:21.630724 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Jul 2 06:55:21.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:21.723332 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:55:21.734235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:55:21.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:21.855488 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:55:22.074078 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 06:55:22.083334 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jul 2 06:55:22.178972 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 2 06:55:22.179188 kernel: scsi host0: Virtio SCSI HBA Jul 2 06:55:22.179353 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 06:55:22.179387 kernel: GPT:9289727 != 125829119 Jul 2 06:55:22.179405 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 06:55:22.179423 kernel: GPT:9289727 != 125829119 Jul 2 06:55:22.179441 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 06:55:22.179459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:55:22.179477 kernel: libata version 3.00 loaded. Jul 2 06:55:22.179495 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 06:55:22.274950 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jul 2 06:55:22.275218 kernel: virtio_blk virtio5: [vdb] 952 512-byte logical blocks (487 kB/476 KiB) Jul 2 06:55:22.275376 kernel: scsi host1: ata_piix Jul 2 06:55:22.275609 kernel: scsi host2: ata_piix Jul 2 06:55:22.275791 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jul 2 06:55:22.275812 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jul 2 06:55:22.319011 kernel: ACPI: bus type USB registered Jul 2 06:55:22.320009 kernel: usbcore: registered new interface driver usbfs Jul 2 06:55:22.320077 kernel: usbcore: registered new interface driver hub Jul 2 06:55:22.320109 kernel: usbcore: registered new device driver usb Jul 2 06:55:22.325016 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 06:55:22.325115 kernel: AES CTR mode by8 optimization enabled Jul 2 06:55:22.579037 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Jul 2 06:55:22.579692 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 06:55:22.595619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:55:22.608009 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (430) Jul 2 06:55:22.608363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 2 06:55:22.618166 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jul 2 06:55:22.632379 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jul 2 06:55:22.632679 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jul 2 06:55:22.632817 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jul 2 06:55:22.632945 kernel: hub 1-0:1.0: USB hub found Jul 2 06:55:22.633132 kernel: hub 1-0:1.0: 2 ports detected Jul 2 06:55:22.623898 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 06:55:22.625118 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 06:55:22.631406 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 06:55:22.655428 disk-uuid[517]: Primary Header is updated. Jul 2 06:55:22.655428 disk-uuid[517]: Secondary Entries is updated. Jul 2 06:55:22.655428 disk-uuid[517]: Secondary Header is updated. Jul 2 06:55:22.664005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:55:22.672009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:55:23.682162 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:55:23.682252 disk-uuid[518]: The operation has completed successfully. Jul 2 06:55:23.839653 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 06:55:23.849649 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 06:55:23.863106 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 2 06:55:23.863198 kernel: audit: type=1130 audit(1719903323.852:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:23.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:23.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:23.863988 kernel: audit: type=1131 audit(1719903323.852:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:23.870702 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 06:55:23.878195 sh[530]: Success Jul 2 06:55:23.912012 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 06:55:24.051354 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 06:55:24.074718 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 06:55:24.080692 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 06:55:24.088032 kernel: audit: type=1130 audit(1719903324.083:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:24.158656 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 06:55:24.158747 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:55:24.158767 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 06:55:24.161547 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 06:55:24.161639 kernel: BTRFS info (device dm-0): using free space tree Jul 2 06:55:24.194313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 06:55:24.195762 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 06:55:24.203644 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 06:55:24.207685 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 06:55:24.244046 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:55:24.244131 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:55:24.244151 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:55:24.271731 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 06:55:24.274191 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:55:24.291798 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 06:55:24.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.298067 kernel: audit: type=1130 audit(1719903324.291:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.298441 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 06:55:24.565704 ignition[613]: Ignition 2.15.0 Jul 2 06:55:24.566618 ignition[613]: Stage: fetch-offline Jul 2 06:55:24.567172 ignition[613]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:24.567691 ignition[613]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:24.568954 ignition[613]: parsed url from cmdline: "" Jul 2 06:55:24.569070 ignition[613]: no config URL provided Jul 2 06:55:24.569589 ignition[613]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:55:24.570533 ignition[613]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:55:24.571145 ignition[613]: failed to fetch config: resource requires networking Jul 2 06:55:24.571671 ignition[613]: Ignition finished successfully Jul 2 06:55:24.571408 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:55:24.580366 kernel: audit: type=1130 audit(1719903324.574:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:24.574785 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:55:24.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.595673 kernel: audit: type=1130 audit(1719903324.581:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.595757 kernel: audit: type=1334 audit(1719903324.583:25): prog-id=9 op=LOAD Jul 2 06:55:24.583000 audit: BPF prog-id=9 op=LOAD Jul 2 06:55:24.596336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:55:24.650479 systemd-networkd[715]: lo: Link UP Jul 2 06:55:24.651445 systemd-networkd[715]: lo: Gained carrier Jul 2 06:55:24.652923 systemd-networkd[715]: Enumeration completed Jul 2 06:55:24.653676 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:55:24.654247 systemd[1]: Reached target network.target - Network. Jul 2 06:55:24.657201 systemd-networkd[715]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:55:24.658061 systemd-networkd[715]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:55:24.663761 kernel: audit: type=1130 audit(1719903324.651:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.665431 systemd-networkd[715]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 2 06:55:24.666479 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jul 2 06:55:24.669949 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 06:55:24.671936 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:55:24.688891 systemd-networkd[715]: eth1: Link UP Jul 2 06:55:24.688907 systemd-networkd[715]: eth1: Gained carrier Jul 2 06:55:24.688931 systemd-networkd[715]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:55:24.694508 systemd-networkd[715]: eth0: Link UP Jul 2 06:55:24.694523 systemd-networkd[715]: eth0: Gained carrier Jul 2 06:55:24.694539 systemd-networkd[715]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jul 2 06:55:24.704401 ignition[717]: Ignition 2.15.0 Jul 2 06:55:24.704439 ignition[717]: Stage: fetch Jul 2 06:55:24.704633 ignition[717]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:24.704651 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:24.704833 ignition[717]: parsed url from cmdline: "" Jul 2 06:55:24.707025 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. 
Jul 2 06:55:24.704840 ignition[717]: no config URL provided Jul 2 06:55:24.704849 ignition[717]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:55:24.704865 ignition[717]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:55:24.704902 ignition[717]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jul 2 06:55:24.705443 ignition[717]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 2 06:55:24.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.730443 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 06:55:24.736567 kernel: audit: type=1130 audit(1719903324.706:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.745178 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:55:24.745178 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 06:55:24.745178 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 06:55:24.745178 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 06:55:24.745178 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:55:24.745178 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 06:55:24.764314 kernel: audit: type=1130 audit(1719903324.738:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.739221 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 06:55:24.745795 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 06:55:24.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.778727 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 06:55:24.779469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:55:24.781118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:55:24.781604 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:55:24.798121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
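The iscsid warning above spells out what /etc/iscsi/initiatorname.iscsi should contain: a single InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] line. Purely as a hedged illustration (the IQN below is invented; on a normal install the file is generated with a unique name rather than written by hand):

# Hypothetical sketch: write a minimally valid initiatorname.iscsi so iscsid
# stops warning. The IQN is invented for illustration; real deployments
# should generate a unique one instead of hard-coding it.
from pathlib import Path

# iqn.<yyyy-mm>.<reversed domain name>[:identifier]
initiator_name = "iqn.2024-07.com.example.host:initrd-demo"

path = Path("/etc/iscsi/initiatorname.iscsi")
path.parent.mkdir(parents=True, exist_ok=True)   # requires root on a real system
path.write_text(f"InitiatorName={initiator_name}\n")
print("wrote", path)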
Jul 2 06:55:24.806990 systemd-networkd[715]: eth1: DHCPv4 address 10.124.0.15/20 acquired from 169.254.169.253 Jul 2 06:55:24.813177 systemd-networkd[715]: eth0: DHCPv4 address 143.110.155.161/20, gateway 143.110.144.1 acquired from 169.254.169.253 Jul 2 06:55:24.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:24.816493 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:55:24.906120 ignition[717]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Jul 2 06:55:24.993564 ignition[717]: GET result: OK Jul 2 06:55:24.993749 ignition[717]: parsing config with SHA512: 58aa5a052cde8065e076143e2da859d9d3e4714066202603efbeb822c3c797641382c2111511fa19f56b3de4420093d9dea5b2bd92e7071a4f8e7b0765e0cd89 Jul 2 06:55:25.002139 unknown[717]: fetched base config from "system" Jul 2 06:55:25.002152 unknown[717]: fetched base config from "system" Jul 2 06:55:25.002161 unknown[717]: fetched user config from "digitalocean" Jul 2 06:55:25.007429 ignition[717]: fetch: fetch complete Jul 2 06:55:25.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:25.014503 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 06:55:25.007442 ignition[717]: fetch: fetch passed Jul 2 06:55:25.044490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 06:55:25.007537 ignition[717]: Ignition finished successfully Jul 2 06:55:25.112481 ignition[740]: Ignition 2.15.0 Jul 2 06:55:25.112497 ignition[740]: Stage: kargs Jul 2 06:55:25.112768 ignition[740]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:25.112789 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:25.115121 ignition[740]: kargs: kargs passed Jul 2 06:55:25.115253 ignition[740]: Ignition finished successfully Jul 2 06:55:25.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:25.156803 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 06:55:25.174518 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 06:55:25.219353 ignition[746]: Ignition 2.15.0 Jul 2 06:55:25.219371 ignition[746]: Stage: disks Jul 2 06:55:25.219617 ignition[746]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:25.219636 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:25.221587 ignition[746]: disks: disks passed Jul 2 06:55:25.221673 ignition[746]: Ignition finished successfully Jul 2 06:55:25.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:25.222998 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 06:55:25.223784 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 06:55:25.224324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:55:25.226522 systemd[1]: Reached target local-fs.target - Local File Systems. 
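With DHCP leases in place, the fetch stage above succeeds on attempt #2 against the DigitalOcean metadata service and logs the SHA-512 of the config it received. The sketch below re-creates that retry-then-hash flow in broad strokes; it is an illustration, not Ignition's actual implementation, and the retry count and delay are made up.

# Hypothetical sketch of the fetch-with-retry the log shows: GET the
# DigitalOcean user-data endpoint, back off while the network is still
# coming up, then hash the result the way Ignition reports it.
import hashlib
import time
import urllib.error
import urllib.request

USER_DATA_URL = "http://169.254.169.254/metadata/v1/user-data"

def fetch_user_data(attempts: int = 5, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            print(f"GET attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    raise RuntimeError("could not fetch user-data")

if __name__ == "__main__":
    data = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())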
Jul 2 06:55:25.227337 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:55:25.232945 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:55:25.257575 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 06:55:25.310910 systemd-fsck[754]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 06:55:25.321664 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 06:55:25.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:25.326349 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 06:55:25.531179 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 06:55:25.531057 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 06:55:25.531814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 06:55:25.556850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:55:25.559524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 06:55:25.562325 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jul 2 06:55:25.589564 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 06:55:25.590221 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 06:55:25.590283 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:55:25.592431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 06:55:25.611819 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 06:55:25.640048 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (760) Jul 2 06:55:25.679708 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:55:25.679810 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:55:25.679831 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:55:25.727089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 06:55:25.909647 initrd-setup-root[790]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 06:55:25.914122 coreos-metadata[763]: Jul 02 06:55:25.914 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 06:55:25.925219 initrd-setup-root[797]: cut: /sysroot/etc/group: No such file or directory Jul 2 06:55:25.926961 coreos-metadata[763]: Jul 02 06:55:25.926 INFO Fetch successful Jul 2 06:55:25.934396 coreos-metadata[763]: Jul 02 06:55:25.934 INFO wrote hostname ci-3815.2.5-b-18394828d7 to /sysroot/etc/hostname Jul 2 06:55:25.936518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 06:55:25.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:25.966517 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 06:55:25.973417 coreos-metadata[762]: Jul 02 06:55:25.973 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 06:55:25.986657 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 06:55:26.002495 coreos-metadata[762]: Jul 02 06:55:26.002 INFO Fetch successful Jul 2 06:55:26.018384 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jul 2 06:55:26.018540 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jul 2 06:55:26.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:26.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:26.042533 systemd-networkd[715]: eth1: Gained IPv6LL Jul 2 06:55:26.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:26.341739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 06:55:26.356013 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 06:55:26.359775 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 06:55:26.380306 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 06:55:26.389629 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:55:26.430905 ignition[877]: INFO : Ignition 2.15.0 Jul 2 06:55:26.430905 ignition[877]: INFO : Stage: mount Jul 2 06:55:26.432813 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:26.432813 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:26.434281 ignition[877]: INFO : mount: mount passed Jul 2 06:55:26.434281 ignition[877]: INFO : Ignition finished successfully Jul 2 06:55:26.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:26.435108 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 06:55:26.460685 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 06:55:26.492703 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:55:26.495030 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 06:55:26.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:26.534007 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (887) Jul 2 06:55:26.548724 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:55:26.548802 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:55:26.548822 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:55:26.581966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
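The flatcar-metadata-hostname agent above fetches http://169.254.169.254/metadata/v1.json and writes the droplet's hostname (ci-3815.2.5-b-18394828d7) into /sysroot/etc/hostname. A minimal sketch of that flow, assuming the metadata document exposes a top-level "hostname" field and writing to /tmp so the example is safe to run:

# Minimal sketch of the hostname step: fetch the droplet metadata document,
# pull out the hostname, and write it out. The "hostname" field name is an
# assumption; the real agent writes /sysroot/etc/hostname, not /tmp.
import json
import urllib.request
from pathlib import Path

METADATA_URL = "http://169.254.169.254/metadata/v1.json"

with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
    metadata = json.load(resp)

hostname = metadata["hostname"]          # assumed field name
out = Path("/tmp/hostname")
out.write_text(hostname + "\n")
print("wrote hostname", hostname, "to", out)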
Jul 2 06:55:26.630080 systemd-networkd[715]: eth0: Gained IPv6LL Jul 2 06:55:26.815140 ignition[905]: INFO : Ignition 2.15.0 Jul 2 06:55:26.815140 ignition[905]: INFO : Stage: files Jul 2 06:55:26.817012 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:26.817012 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:26.819222 ignition[905]: DEBUG : files: compiled without relabeling support, skipping Jul 2 06:55:26.826302 ignition[905]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 06:55:26.826302 ignition[905]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 06:55:26.849550 ignition[905]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 06:55:26.850818 ignition[905]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 06:55:26.852938 unknown[905]: wrote ssh authorized keys file for user: core Jul 2 06:55:26.853970 ignition[905]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 06:55:26.858740 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:55:26.860267 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 06:55:26.915480 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 06:55:26.994666 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:55:26.996071 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 06:55:26.997372 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 06:55:27.009267 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:55:27.010762 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:55:27.010762 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:55:27.012850 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 06:55:27.313404 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 06:55:27.930528 ignition[905]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 06:55:28.000514 ignition[905]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 06:55:28.048354 ignition[905]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:55:28.048354 ignition[905]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:55:28.048354 ignition[905]: INFO : files: files passed Jul 2 06:55:28.048354 ignition[905]: INFO : Ignition finished successfully Jul 2 06:55:28.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.048926 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 06:55:28.070291 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 06:55:28.082563 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 06:55:28.113784 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 06:55:28.113971 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 06:55:28.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:28.151156 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:55:28.151156 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:55:28.153831 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:55:28.156802 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:55:28.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.163358 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 06:55:28.179392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 06:55:28.237793 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 06:55:28.239090 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 06:55:28.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.241308 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 06:55:28.242668 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 06:55:28.252585 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 06:55:28.256590 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 06:55:28.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.322323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 06:55:28.333414 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 06:55:28.355919 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 06:55:28.356703 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 06:55:28.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.361485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:55:28.362195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:55:28.363597 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 06:55:28.364138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 06:55:28.364247 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
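Everything the Ignition files stage did above (the core user's SSH keys, the Helm tarball under /opt, the small YAML files, the kubernetes.raw sysext link, and the enabled prepare-helm.service unit) was driven by the config fetched earlier, before the initrd begins tearing itself down. As a hedged illustration of roughly what such a config could look like, with the spec version, key material, and unit body invented for the example and only the paths/URLs echoing the log:

# Illustrative approximation of the kind of Ignition config that would drive
# the files stage above. Spec version, SSH key, and unit contents are invented;
# this is not the droplet's actual configuration.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example-key"]}]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack Helm (example)\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))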
Jul 2 06:55:28.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.369467 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 06:55:28.370199 systemd[1]: Stopped target basic.target - Basic System. Jul 2 06:55:28.371234 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 06:55:28.372283 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:55:28.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.378786 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 06:55:28.379374 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 06:55:28.379894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:55:28.380482 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 06:55:28.380995 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 06:55:28.381476 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:55:28.381968 systemd[1]: Stopped target swap.target - Swaps. Jul 2 06:55:28.382427 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 06:55:28.382535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:55:28.383245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:55:28.383789 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 06:55:28.383875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 06:55:28.384567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 06:55:28.384631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:55:28.409056 iscsid[726]: iscsid shutting down. Jul 2 06:55:28.385188 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 06:55:28.385243 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 06:55:28.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:28.453543 ignition[950]: INFO : Ignition 2.15.0 Jul 2 06:55:28.453543 ignition[950]: INFO : Stage: umount Jul 2 06:55:28.453543 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:55:28.453543 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jul 2 06:55:28.453543 ignition[950]: INFO : umount: umount passed Jul 2 06:55:28.453543 ignition[950]: INFO : Ignition finished successfully Jul 2 06:55:28.385765 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 06:55:28.385817 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 06:55:28.395681 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 06:55:28.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.399393 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jul 2 06:55:28.402832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 06:55:28.404278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 06:55:28.404530 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:55:28.449121 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 06:55:28.449242 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:55:28.479753 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 06:55:28.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.479923 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jul 2 06:55:28.496269 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 06:55:28.498587 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 06:55:28.498769 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 06:55:28.499505 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 06:55:28.499598 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 06:55:28.500156 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 06:55:28.500221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jul 2 06:55:28.507957 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 06:55:28.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.508128 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 06:55:28.511005 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 06:55:28.511101 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:55:28.511678 systemd[1]: Stopped target paths.target - Path Units. Jul 2 06:55:28.512073 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 06:55:28.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.517501 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:55:28.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.518170 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 06:55:28.518919 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 06:55:28.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.519454 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 06:55:28.519534 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:55:28.520093 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 06:55:28.520174 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 06:55:28.527612 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:55:28.529343 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 06:55:28.529502 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:55:28.530463 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 06:55:28.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.530604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 06:55:28.531454 systemd[1]: Stopped target network.target - Network. Jul 2 06:55:28.560000 audit: BPF prog-id=6 op=UNLOAD Jul 2 06:55:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.531935 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 06:55:28.531998 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:55:28.536143 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 06:55:28.536247 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 06:55:28.537589 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 06:55:28.538491 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 06:55:28.541199 systemd-networkd[715]: eth1: DHCPv6 lease lost Jul 2 06:55:28.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.546148 systemd-networkd[715]: eth0: DHCPv6 lease lost Jul 2 06:55:28.589000 audit: BPF prog-id=9 op=UNLOAD Jul 2 06:55:28.546860 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 06:55:28.547041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 06:55:28.547907 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 06:55:28.548059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 06:55:28.556913 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 06:55:28.556958 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:55:28.561498 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 06:55:28.562277 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 06:55:28.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.562384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:55:28.563217 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 06:55:28.563296 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:55:28.563919 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 06:55:28.563997 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 06:55:28.564570 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 06:55:28.564633 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
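Each of the teardown steps above appears twice: as a systemd status line and as a kernel audit record made of key=value fields (pid, uid, auid, ses, subj and a quoted msg payload). A small hypothetical parser for that record shape, using one of the lines above as sample input:

# Hypothetical sketch: split an audit SERVICE_STOP record into its key=value
# fields, including the quoted msg='...' payload, to show the structure the
# records above follow.
import re

record = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
          "subj=kernel msg='unit=ignition-mount comm=\"systemd\" "
          "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

def parse_audit(line: str) -> dict[str, str]:
    out = {}
    for m in re.finditer(r"(\w+)='([^']*)'|(\w+)=(\S+)", line):
        key = m.group(1) or m.group(3)
        out[key] = m.group(2) if m.group(1) else m.group(4)
    return out

fields = parse_audit(record)
print(fields["pid"], fields["subj"])
print(fields["msg"])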
Jul 2 06:55:28.565488 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:55:28.575849 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 06:55:28.577942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 06:55:28.579308 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 06:55:28.580412 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:55:28.597102 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 06:55:28.597193 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 06:55:28.602293 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 06:55:28.602360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:55:28.602954 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 06:55:28.603049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:55:28.603860 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 06:55:28.603929 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 06:55:28.604480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 06:55:28.604536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:55:28.617839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 06:55:28.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.646705 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 06:55:28.646960 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:55:28.647751 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 06:55:28.647828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:55:28.648847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 06:55:28.648926 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:55:28.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:28.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:28.650759 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 06:55:28.651551 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 06:55:28.651677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 06:55:28.657137 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 06:55:28.657297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 06:55:28.658116 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 06:55:28.680492 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 06:55:28.710685 systemd[1]: Switching root. Jul 2 06:55:28.738556 systemd-journald[180]: Journal stopped Jul 2 06:55:31.254817 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jul 2 06:55:31.254927 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jul 2 06:55:31.254966 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 06:55:31.254985 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 06:55:31.261508 kernel: SELinux: policy capability open_perms=1 Jul 2 06:55:31.261545 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 06:55:31.261567 kernel: SELinux: policy capability always_check_network=0 Jul 2 06:55:31.261599 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 06:55:31.261617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 06:55:31.261631 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 06:55:31.261665 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 06:55:31.261680 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 2 06:55:31.261699 kernel: audit: type=1403 audit(1719903328.987:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 06:55:31.261729 systemd[1]: Successfully loaded SELinux policy in 86.720ms. Jul 2 06:55:31.261750 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.734ms. Jul 2 06:55:31.261771 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:55:31.261792 systemd[1]: Detected virtualization kvm. Jul 2 06:55:31.261810 systemd[1]: Detected architecture x86-64. Jul 2 06:55:31.261827 systemd[1]: Detected first boot. Jul 2 06:55:31.261845 systemd[1]: Hostname set to <ci-3815.2.5-b-18394828d7>. Jul 2 06:55:31.261862 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:55:31.261885 kernel: audit: type=1334 audit(1719903329.131:87): prog-id=10 op=LOAD Jul 2 06:55:31.261902 kernel: audit: type=1334 audit(1719903329.132:88): prog-id=10 op=UNLOAD Jul 2 06:55:31.261921 kernel: audit: type=1334 audit(1719903329.134:89): prog-id=11 op=LOAD Jul 2 06:55:31.261943 kernel: audit: type=1334 audit(1719903329.134:90): prog-id=11 op=UNLOAD Jul 2 06:55:31.261962 systemd[1]: Populated /etc with preset unit settings.
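After the switch root, systemd logs that it is initializing the machine ID from the VM UUID; on a KVM guest such as this droplet, that UUID is surfaced through DMI. A hedged sketch of that idea only (systemd's real logic covers many more sources and fallbacks), assuming the usual sysfs path:

# Hedged sketch: read the hypervisor-provided DMI product UUID, which is what
# the "Initializing machine ID from VM UUID" step draws on, and normalise it
# to the 32-hex-digit form /etc/machine-id uses. Real systemd handles many
# more cases than this.
from pathlib import Path

def machine_id_from_vm_uuid() -> str:
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    print(machine_id_from_vm_uuid())   # requires root to read on most systems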
Jul 2 06:55:31.264408 kernel: audit: type=1334 audit(1719903330.649:91): prog-id=12 op=LOAD Jul 2 06:55:31.264457 kernel: audit: type=1334 audit(1719903330.649:92): prog-id=3 op=UNLOAD Jul 2 06:55:31.264489 kernel: audit: type=1334 audit(1719903330.649:93): prog-id=13 op=LOAD Jul 2 06:55:31.264523 kernel: audit: type=1334 audit(1719903330.649:94): prog-id=14 op=LOAD Jul 2 06:55:31.264542 kernel: audit: type=1334 audit(1719903330.649:95): prog-id=4 op=UNLOAD Jul 2 06:55:31.264561 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 06:55:31.264581 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 06:55:31.264599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 06:55:31.264617 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 06:55:31.264637 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 06:55:31.264655 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 06:55:31.264680 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 06:55:31.264699 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 06:55:31.264718 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 06:55:31.264736 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 06:55:31.264756 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 06:55:31.264777 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:55:31.264804 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 06:55:31.264826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 06:55:31.264847 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 06:55:31.264870 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 06:55:31.264893 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 06:55:31.264912 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 06:55:31.264931 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 06:55:31.264966 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:55:31.265015 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:55:31.265042 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:55:31.265061 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:55:31.265081 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 06:55:31.265101 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 06:55:31.265122 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 06:55:31.265144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:55:31.265163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:55:31.265187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:55:31.265211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jul 2 06:55:31.265236 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 06:55:31.265256 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 06:55:31.265275 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 06:55:31.265295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:31.265316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 06:55:31.265337 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 06:55:31.265357 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 06:55:31.265376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 06:55:31.265399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:55:31.265430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:55:31.265450 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 06:55:31.265469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:55:31.265489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:55:31.265508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:55:31.265526 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 06:55:31.265544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:55:31.265562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 06:55:31.265587 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 06:55:31.265605 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 06:55:31.265623 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 06:55:31.265641 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 06:55:31.265659 systemd[1]: Stopped systemd-journald.service - Journal Service. Jul 2 06:55:31.265676 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:55:31.265693 kernel: fuse: init (API version 7.37) Jul 2 06:55:31.265714 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:55:31.265733 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 06:55:31.265757 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 06:55:31.265778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:55:31.265816 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 06:55:31.265836 systemd[1]: Stopped verity-setup.service. Jul 2 06:55:31.265860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:31.265879 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 06:55:31.265900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 06:55:31.265938 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 06:55:31.265962 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jul 2 06:55:31.266004 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 06:55:31.266025 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 06:55:31.266048 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:55:31.266069 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 06:55:31.266092 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 06:55:31.266112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:55:31.266143 systemd-journald[1048]: Journal started Jul 2 06:55:31.266239 systemd-journald[1048]: Runtime Journal (/run/log/journal/63e43403bda140ba8de0b5e175c47174) is 4.9M, max 39.3M, 34.4M free. Jul 2 06:55:28.987000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 06:55:29.131000 audit: BPF prog-id=10 op=LOAD Jul 2 06:55:29.132000 audit: BPF prog-id=10 op=UNLOAD Jul 2 06:55:29.134000 audit: BPF prog-id=11 op=LOAD Jul 2 06:55:29.134000 audit: BPF prog-id=11 op=UNLOAD Jul 2 06:55:30.649000 audit: BPF prog-id=12 op=LOAD Jul 2 06:55:30.649000 audit: BPF prog-id=3 op=UNLOAD Jul 2 06:55:30.649000 audit: BPF prog-id=13 op=LOAD Jul 2 06:55:30.649000 audit: BPF prog-id=14 op=LOAD Jul 2 06:55:30.649000 audit: BPF prog-id=4 op=UNLOAD Jul 2 06:55:30.649000 audit: BPF prog-id=5 op=UNLOAD Jul 2 06:55:30.657000 audit: BPF prog-id=15 op=LOAD Jul 2 06:55:30.657000 audit: BPF prog-id=12 op=UNLOAD Jul 2 06:55:30.657000 audit: BPF prog-id=16 op=LOAD Jul 2 06:55:30.657000 audit: BPF prog-id=17 op=LOAD Jul 2 06:55:30.657000 audit: BPF prog-id=13 op=UNLOAD Jul 2 06:55:30.660000 audit: BPF prog-id=14 op=UNLOAD Jul 2 06:55:30.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:30.692000 audit: BPF prog-id=15 op=UNLOAD Jul 2 06:55:30.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:30.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:31.128000 audit: BPF prog-id=18 op=LOAD Jul 2 06:55:31.128000 audit: BPF prog-id=19 op=LOAD Jul 2 06:55:31.128000 audit: BPF prog-id=20 op=LOAD Jul 2 06:55:31.139000 audit: BPF prog-id=17 op=UNLOAD Jul 2 06:55:31.139000 audit: BPF prog-id=16 op=UNLOAD Jul 2 06:55:31.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.251000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 06:55:31.251000 audit[1048]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd885753a0 a2=4000 a3=7ffd8857543c items=0 ppid=1 pid=1048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:31.276427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:55:31.276510 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:55:31.251000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 06:55:31.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:30.639109 systemd[1]: Queued start job for default target multi-user.target. Jul 2 06:55:31.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:31.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:30.639132 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 06:55:30.661477 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 06:55:31.274279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:55:31.274966 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:55:31.277208 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 06:55:31.277505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 06:55:31.279571 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:55:31.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.310088 kernel: loop: module loaded Jul 2 06:55:31.304738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 06:55:31.308310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 06:55:31.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.313111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:55:31.322106 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 06:55:31.325232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 06:55:31.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.331522 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:55:31.332079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:55:31.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.333479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 06:55:31.336251 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 06:55:31.338747 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 06:55:31.340193 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 06:55:31.358336 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 06:55:31.374266 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jul 2 06:55:31.377539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:55:31.381123 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 06:55:31.382489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:55:31.435650 systemd-journald[1048]: Time spent on flushing to /var/log/journal/63e43403bda140ba8de0b5e175c47174 is 77.871ms for 1130 entries. Jul 2 06:55:31.435650 systemd-journald[1048]: System Journal (/var/log/journal/63e43403bda140ba8de0b5e175c47174) is 8.0M, max 195.6M, 187.6M free. Jul 2 06:55:31.542946 systemd-journald[1048]: Received client request to flush runtime journal. Jul 2 06:55:31.543034 kernel: ACPI: bus type drm_connector registered Jul 2 06:55:31.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.454639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:55:31.469618 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 06:55:31.470484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 06:55:31.489711 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:55:31.489964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:55:31.544485 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 06:55:31.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.577989 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 06:55:31.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.578849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:55:31.609414 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 06:55:31.615013 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 2 06:55:31.651301 udevadm[1086]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 06:55:31.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:31.888014 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 06:55:31.903114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:55:31.973775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:55:31.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:32.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:32.948000 audit: BPF prog-id=21 op=LOAD Jul 2 06:55:32.948000 audit: BPF prog-id=22 op=LOAD Jul 2 06:55:32.948000 audit: BPF prog-id=7 op=UNLOAD Jul 2 06:55:32.948000 audit: BPF prog-id=8 op=UNLOAD Jul 2 06:55:32.950181 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 06:55:32.972932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:55:33.019011 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Jul 2 06:55:33.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.089000 audit: BPF prog-id=23 op=LOAD Jul 2 06:55:33.088782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:55:33.097269 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:55:33.108000 audit: BPF prog-id=24 op=LOAD Jul 2 06:55:33.108000 audit: BPF prog-id=25 op=LOAD Jul 2 06:55:33.108000 audit: BPF prog-id=26 op=LOAD Jul 2 06:55:33.115328 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 06:55:33.183026 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1104) Jul 2 06:55:33.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.206435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 06:55:33.219884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:33.220258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:55:33.227355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:55:33.233248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:55:33.240954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 2 06:55:33.242106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 06:55:33.242240 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 06:55:33.242407 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:33.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.245274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:55:33.245576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:55:33.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.253122 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 06:55:33.253654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:55:33.253879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:55:33.257602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:55:33.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:33.278751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:55:33.279026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:55:33.279917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:55:33.403049 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1105) Jul 2 06:55:33.410302 systemd-networkd[1093]: lo: Link UP Jul 2 06:55:33.410685 systemd-networkd[1093]: lo: Gained carrier Jul 2 06:55:33.411470 systemd-networkd[1093]: Enumeration completed Jul 2 06:55:33.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:33.411723 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:55:33.416108 systemd-networkd[1093]: eth0: Configuring with /run/systemd/network/10-5e:36:d8:35:66:2c.network. Jul 2 06:55:33.417327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 06:55:33.432690 systemd-networkd[1093]: eth0: Link UP Jul 2 06:55:33.436138 systemd-networkd[1093]: eth0: Gained carrier Jul 2 06:55:33.565060 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 06:55:33.596754 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 06:55:33.596929 kernel: ACPI: button: Power Button [PWRF] Jul 2 06:55:33.613021 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 06:55:33.686604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 06:55:33.688870 systemd-networkd[1093]: eth1: Configuring with /run/systemd/network/10-36:5f:47:a5:bf:81.network. Jul 2 06:55:33.691658 systemd-networkd[1093]: eth1: Link UP Jul 2 06:55:33.691674 systemd-networkd[1093]: eth1: Gained carrier Jul 2 06:55:33.735039 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 06:55:33.966591 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 2 06:55:33.966743 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 2 06:55:33.991290 kernel: Console: switching to colour dummy device 80x25 Jul 2 06:55:33.992017 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 2 06:55:33.992075 kernel: [drm] features: -context_init Jul 2 06:55:33.995007 kernel: [drm] number of scanouts: 1 Jul 2 06:55:33.996012 kernel: [drm] number of cap sets: 0 Jul 2 06:55:33.999027 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jul 2 06:55:34.004006 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jul 2 06:55:34.006012 kernel: virtio-pci 0000:00:02.0: [drm] drm_plane_enable_fb_damage_clips() not called Jul 2 06:55:34.006337 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 06:55:34.044545 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 2 06:55:34.090058 kernel: EDAC MC: Ver: 3.0.0 Jul 2 06:55:34.144841 kernel: kauditd_printk_skb: 66 callbacks suppressed Jul 2 06:55:34.145021 kernel: audit: type=1130 audit(1719903334.140:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.140813 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 06:55:34.156561 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 06:55:34.191606 lvm[1131]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:55:34.272841 kernel: audit: type=1130 audit(1719903334.267:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:34.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.267888 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 06:55:34.268230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:55:34.276652 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 06:55:34.286624 lvm[1132]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:55:34.330325 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 06:55:34.330745 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:55:34.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.339166 kernel: audit: type=1130 audit(1719903334.329:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.344333 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jul 2 06:55:34.346207 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 06:55:34.346330 systemd[1]: Reached target machines.target - Containers. Jul 2 06:55:34.361219 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 06:55:34.390026 kernel: ISO 9660 Extensions: RRIP_1991A Jul 2 06:55:34.393556 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jul 2 06:55:34.395324 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:55:34.429348 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 06:55:34.433751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:55:34.433878 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:34.440729 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 06:55:34.455282 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 06:55:34.474314 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 06:55:34.509892 kernel: audit: type=1130 audit(1719903334.487:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:34.485288 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 06:55:34.540338 kernel: loop0: detected capacity change from 0 to 8 Jul 2 06:55:34.536896 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1138 (bootctl) Jul 2 06:55:34.557721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 06:55:34.630105 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 06:55:34.638066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 06:55:34.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.648114 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 06:55:34.669210 kernel: audit: type=1130 audit(1719903334.645:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:34.715719 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 06:55:34.856330 kernel: loop2: detected capacity change from 0 to 139360 Jul 2 06:55:34.992557 systemd-fsck[1145]: fsck.fat 4.2 (2021-01-31) Jul 2 06:55:34.992557 systemd-fsck[1145]: /dev/vda1: 808 files, 120378/258078 clusters Jul 2 06:55:35.010007 kernel: loop3: detected capacity change from 0 to 80600 Jul 2 06:55:35.043813 kernel: audit: type=1130 audit(1719903335.035:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.017611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:55:35.054362 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 06:55:35.103961 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 06:55:35.142254 systemd-networkd[1093]: eth0: Gained IPv6LL Jul 2 06:55:35.159213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 06:55:35.185025 kernel: audit: type=1130 audit(1719903335.177:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.194444 systemd-networkd[1093]: eth1: Gained IPv6LL Jul 2 06:55:35.203725 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. 
Jul 2 06:55:35.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.223080 kernel: audit: type=1130 audit(1719903335.213:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.255209 kernel: loop4: detected capacity change from 0 to 8 Jul 2 06:55:35.263031 kernel: loop5: detected capacity change from 0 to 210664 Jul 2 06:55:35.346541 kernel: loop6: detected capacity change from 0 to 139360 Jul 2 06:55:35.448832 kernel: loop7: detected capacity change from 0 to 80600 Jul 2 06:55:35.530738 (sd-sysext)[1150]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jul 2 06:55:35.534203 (sd-sysext)[1150]: Merged extensions into '/usr'. Jul 2 06:55:35.557819 kernel: audit: type=1130 audit(1719903335.536:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:35.539306 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 06:55:35.560316 systemd[1]: Starting ensure-sysext.service... Jul 2 06:55:35.601412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:55:35.623943 systemd[1]: Reloading. Jul 2 06:55:35.695133 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 06:55:35.776715 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 06:55:35.777282 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 06:55:35.778968 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 06:55:36.270533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 06:55:36.524338 kernel: audit: type=1334 audit(1719903336.511:169): prog-id=27 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=27 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=23 op=UNLOAD Jul 2 06:55:36.511000 audit: BPF prog-id=28 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=24 op=UNLOAD Jul 2 06:55:36.511000 audit: BPF prog-id=29 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=30 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=25 op=UNLOAD Jul 2 06:55:36.511000 audit: BPF prog-id=26 op=UNLOAD Jul 2 06:55:36.511000 audit: BPF prog-id=31 op=LOAD Jul 2 06:55:36.511000 audit: BPF prog-id=32 op=LOAD Jul 2 06:55:36.520000 audit: BPF prog-id=21 op=UNLOAD Jul 2 06:55:36.520000 audit: BPF prog-id=22 op=UNLOAD Jul 2 06:55:36.525000 audit: BPF prog-id=33 op=LOAD Jul 2 06:55:36.525000 audit: BPF prog-id=18 op=UNLOAD Jul 2 06:55:36.525000 audit: BPF prog-id=34 op=LOAD Jul 2 06:55:36.525000 audit: BPF prog-id=35 op=LOAD Jul 2 06:55:36.525000 audit: BPF prog-id=19 op=UNLOAD Jul 2 06:55:36.525000 audit: BPF prog-id=20 op=UNLOAD Jul 2 06:55:36.588923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:55:36.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.625403 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 06:55:36.669452 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 06:55:36.679879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 06:55:36.697000 audit: BPF prog-id=36 op=LOAD Jul 2 06:55:36.704355 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 06:55:36.711592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:55:36.729000 audit: BPF prog-id=37 op=LOAD Jul 2 06:55:36.743611 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 06:55:36.756871 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 06:55:36.762638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 06:55:36.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.773121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:36.774246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:55:36.781950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:55:36.802883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:55:36.825708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:55:36.827775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 06:55:36.828164 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:36.831282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:36.840935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:55:36.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.853000 audit[1231]: SYSTEM_BOOT pid=1231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.841238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:55:36.848823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:55:36.849076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:55:36.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.866880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:36.867433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:55:36.893587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:55:36.893877 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:36.894204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:55:36.894511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:36.916853 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:55:36.919185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:55:36.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:55:36.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:36.924619 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:36.925215 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:55:36.942010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:55:36.966353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:55:37.017305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:55:37.057517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:55:37.057661 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:37.057813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:55:37.066177 systemd[1]: Finished ensure-sysext.service. Jul 2 06:55:37.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:37.067482 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 06:55:37.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:55:37.068834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:55:37.070000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 06:55:37.071260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:55:37.070000 audit[1239]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd9fb89430 a2=420 a3=0 items=0 ppid=1215 pid=1239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:55:37.070000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 06:55:37.072113 augenrules[1239]: No rules Jul 2 06:55:37.072550 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:55:37.072757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:55:37.078258 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:55:37.079553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:55:37.079771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:55:37.087376 systemd-resolved[1224]: Positive Trust Anchors: Jul 2 06:55:37.087817 systemd-resolved[1224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:55:37.087942 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:55:37.101430 systemd-resolved[1224]: Using system hostname 'ci-3815.2.5-b-18394828d7'. Jul 2 06:55:37.109383 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:55:37.138781 systemd[1]: Reached target network.target - Network. Jul 2 06:55:37.139599 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 06:55:37.140345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:55:37.171626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:55:37.171744 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:55:37.231326 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 06:55:37.232355 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 06:55:37.270631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 06:55:37.320020 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 06:55:38.036450 systemd-timesyncd[1225]: Contacted time server 104.167.241.253:123 (0.flatcar.pool.ntp.org). Jul 2 06:55:38.036490 systemd-resolved[1224]: Clock change detected. Flushing caches. Jul 2 06:55:38.036685 systemd-timesyncd[1225]: Initial clock synchronization to Tue 2024-07-02 06:55:38.036060 UTC. Jul 2 06:55:38.064285 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 06:55:38.113597 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 06:55:38.114502 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:55:38.114564 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:55:38.115486 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 06:55:38.118812 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 06:55:38.119854 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 06:55:38.120824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 06:55:38.121542 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 06:55:38.122270 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 06:55:38.122323 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:55:38.122957 systemd[1]: Reached target timers.target - Timer Units. 
Jul 2 06:55:38.127489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 06:55:38.138366 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 06:55:38.164299 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 06:55:38.178533 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:38.179527 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 06:55:38.180421 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:55:38.181127 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:55:38.181974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:55:38.182024 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:55:38.203626 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 06:55:38.212845 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 06:55:38.233355 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 06:55:38.267067 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 06:55:38.302003 jq[1255]: false Jul 2 06:55:38.312520 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 06:55:38.342083 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 06:55:38.370425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:38.378445 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 06:55:38.393171 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 06:55:38.409666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 06:55:38.425158 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 06:55:38.432076 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 06:55:38.447037 dbus-daemon[1254]: [system] SELinux support is enabled Jul 2 06:55:38.447509 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 06:55:38.448455 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:55:38.448579 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 06:55:38.449537 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 06:55:38.452405 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 06:55:38.474382 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 06:55:38.476993 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 06:55:38.492178 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 2 06:55:38.492512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 06:55:38.501408 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 06:55:38.501521 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 06:55:38.507437 jq[1273]: true Jul 2 06:55:38.512374 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 06:55:38.558473 update_engine[1272]: I0702 06:55:38.523967 1272 main.cc:92] Flatcar Update Engine starting Jul 2 06:55:38.512580 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jul 2 06:55:38.512624 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 06:55:38.550002 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 06:55:38.550382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 06:55:38.596585 systemd[1]: Started update-engine.service - Update Engine. Jul 2 06:55:38.604156 update_engine[1272]: I0702 06:55:38.602051 1272 update_check_scheduler.cc:74] Next update check in 10m27s Jul 2 06:55:38.604349 jq[1279]: true Jul 2 06:55:38.606040 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 06:55:38.616910 coreos-metadata[1251]: Jul 02 06:55:38.616 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 06:55:38.619376 tar[1277]: linux-amd64/helm Jul 2 06:55:38.628382 extend-filesystems[1256]: Found loop4 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found loop5 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found loop6 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found loop7 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda1 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda2 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda3 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found usr Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda4 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda6 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda7 Jul 2 06:55:38.628382 extend-filesystems[1256]: Found vda9 Jul 2 06:55:38.628382 extend-filesystems[1256]: Checking size of /dev/vda9 Jul 2 06:55:38.673438 coreos-metadata[1251]: Jul 02 06:55:38.641 INFO Fetch successful Jul 2 06:55:38.665171 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 06:55:38.665472 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 06:55:38.788938 extend-filesystems[1256]: Resized partition /dev/vda9 Jul 2 06:55:38.800327 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 06:55:38.805590 systemd-logind[1271]: New seat seat0. Jul 2 06:55:38.827546 systemd-logind[1271]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 06:55:38.827577 systemd-logind[1271]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 06:55:38.830235 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 2 06:55:38.843672 extend-filesystems[1304]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 06:55:38.886895 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 06:55:38.887881 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 06:55:38.902725 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jul 2 06:55:39.129626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1316) Jul 2 06:55:39.209601 bash[1312]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:55:39.211467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 06:55:39.242431 systemd[1]: Starting sshkeys.service... Jul 2 06:55:39.354554 locksmithd[1284]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 06:55:39.372351 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 06:55:39.403227 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 06:55:39.440142 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 2 06:55:39.560565 extend-filesystems[1304]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 06:55:39.560565 extend-filesystems[1304]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 2 06:55:39.560565 extend-filesystems[1304]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 2 06:55:39.587612 extend-filesystems[1256]: Resized filesystem in /dev/vda9 Jul 2 06:55:39.587612 extend-filesystems[1256]: Found vdb Jul 2 06:55:39.562990 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 06:55:39.563326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 06:55:39.815612 coreos-metadata[1324]: Jul 02 06:55:39.815 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jul 2 06:55:39.842548 coreos-metadata[1324]: Jul 02 06:55:39.842 INFO Fetch successful Jul 2 06:55:39.860696 unknown[1324]: wrote ssh authorized keys file for user: core Jul 2 06:55:39.900856 update-ssh-keys[1332]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:55:39.901988 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 06:55:39.908201 systemd[1]: Finished sshkeys.service. Jul 2 06:55:40.133545 containerd[1278]: time="2024-07-02T06:55:40.133333747Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 06:55:40.355376 containerd[1278]: time="2024-07-02T06:55:40.353862627Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 06:55:40.355376 containerd[1278]: time="2024-07-02T06:55:40.353960057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.375129 containerd[1278]: time="2024-07-02T06:55:40.375016010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:55:40.375397 containerd[1278]: time="2024-07-02T06:55:40.375364856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 06:55:40.376057 containerd[1278]: time="2024-07-02T06:55:40.376002306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:55:40.376273 containerd[1278]: time="2024-07-02T06:55:40.376247534Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 06:55:40.376597 containerd[1278]: time="2024-07-02T06:55:40.376560132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.378189 containerd[1278]: time="2024-07-02T06:55:40.378133018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:55:40.378352 containerd[1278]: time="2024-07-02T06:55:40.378331812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.378567 containerd[1278]: time="2024-07-02T06:55:40.378546547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.385649 containerd[1278]: time="2024-07-02T06:55:40.385544987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.385833 containerd[1278]: time="2024-07-02T06:55:40.385812851Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 06:55:40.385893 containerd[1278]: time="2024-07-02T06:55:40.385880393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:55:40.386417 containerd[1278]: time="2024-07-02T06:55:40.386386326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:55:40.386537 containerd[1278]: time="2024-07-02T06:55:40.386522477Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 06:55:40.386699 containerd[1278]: time="2024-07-02T06:55:40.386681588Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 06:55:40.386800 containerd[1278]: time="2024-07-02T06:55:40.386768900Z" level=info msg="metadata content store policy set" policy=shared Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.439909322Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.439988015Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440007788Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440063414Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440161601Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440177805Z" level=info msg="NRI interface is disabled by configuration." Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440193883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440414005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440434314Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440453718Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440474129Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440499881Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440522249Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441410 containerd[1278]: time="2024-07-02T06:55:40.440538872Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440555896Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440575104Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440593764Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440611710Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440628314Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.440752131Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.441173951Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.441217576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.441235246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 2 06:55:40.441978 containerd[1278]: time="2024-07-02T06:55:40.441267103Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442352921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442458275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442501992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442517858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442534905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442553209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442582091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442600672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442620639Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442818926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442841768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442860858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442876765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442912647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443135 containerd[1278]: time="2024-07-02T06:55:40.442936037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443702 containerd[1278]: time="2024-07-02T06:55:40.442951682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 06:55:40.443702 containerd[1278]: time="2024-07-02T06:55:40.442966005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 06:55:40.444057 containerd[1278]: time="2024-07-02T06:55:40.443983069Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 06:55:40.445010 containerd[1278]: time="2024-07-02T06:55:40.444451248Z" level=info msg="Connect containerd service" Jul 2 06:55:40.445010 containerd[1278]: time="2024-07-02T06:55:40.444534220Z" level=info msg="using legacy CRI server" Jul 2 06:55:40.445010 containerd[1278]: time="2024-07-02T06:55:40.444545097Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 06:55:40.445010 containerd[1278]: time="2024-07-02T06:55:40.444581791Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 06:55:40.445815 containerd[1278]: time="2024-07-02T06:55:40.445779232Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 06:55:40.445971 containerd[1278]: time="2024-07-02T06:55:40.445952691Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 06:55:40.446168 containerd[1278]: time="2024-07-02T06:55:40.446142350Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 06:55:40.446252 containerd[1278]: time="2024-07-02T06:55:40.446238439Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 06:55:40.446338 containerd[1278]: time="2024-07-02T06:55:40.446324662Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 06:55:40.446514 containerd[1278]: time="2024-07-02T06:55:40.446065041Z" level=info msg="Start subscribing containerd event" Jul 2 06:55:40.446606 containerd[1278]: time="2024-07-02T06:55:40.446593347Z" level=info msg="Start recovering state" Jul 2 06:55:40.446747 containerd[1278]: time="2024-07-02T06:55:40.446731983Z" level=info msg="Start event monitor" Jul 2 06:55:40.446807 containerd[1278]: time="2024-07-02T06:55:40.446796401Z" level=info msg="Start snapshots syncer" Jul 2 06:55:40.446865 containerd[1278]: time="2024-07-02T06:55:40.446854280Z" level=info msg="Start cni network conf syncer for default" Jul 2 06:55:40.446933 containerd[1278]: time="2024-07-02T06:55:40.446922276Z" level=info msg="Start streaming server" Jul 2 06:55:40.447965 containerd[1278]: time="2024-07-02T06:55:40.447937838Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 06:55:40.448436 containerd[1278]: time="2024-07-02T06:55:40.448416173Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 06:55:40.461365 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 06:55:40.470291 containerd[1278]: time="2024-07-02T06:55:40.470226104Z" level=info msg="containerd successfully booted in 0.345994s" Jul 2 06:55:40.693071 tar[1277]: linux-amd64/LICENSE Jul 2 06:55:40.693838 tar[1277]: linux-amd64/README.md Jul 2 06:55:40.706784 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 06:55:41.506866 sshd_keygen[1286]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 06:55:41.583205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 06:55:41.602025 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 06:55:41.634602 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 06:55:41.635262 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 06:55:41.653102 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 06:55:41.675115 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 06:55:41.683229 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 06:55:41.706774 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 06:55:41.712349 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 06:55:41.878268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:41.882298 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 06:55:41.898860 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 06:55:41.931629 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
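Note: the two "serving..." entries above are containerd's gRPC and ttrpc endpoints on /run/containerd/containerd.sock, which systemd then marks as started. As a rough sketch (not part of this boot log), a small Go program built against the containerd client module (assumed import path github.com/containerd/containerd, 1.7-era API) can confirm the daemon is reachable on that socket; "k8s.io" is the namespace the CRI plugin uses for Kubernetes-managed images.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Dial the same socket the daemon reports serving on above.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // The CRI plugin keeps Kubernetes images and containers in "k8s.io".
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    v, err := client.Version(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)

    imgs, err := client.ListImages(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%d images in the k8s.io namespace\n", len(imgs))
}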
Jul 2 06:55:41.931909 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 06:55:41.935535 systemd[1]: Startup finished in 1.648s (kernel) + 9.111s (initrd) + 12.323s (userspace) = 23.083s. Jul 2 06:55:43.808793 kubelet[1354]: E0702 06:55:43.808680 1354 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:55:43.819789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:55:43.820013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:55:43.820403 systemd[1]: kubelet.service: Consumed 1.479s CPU time. Jul 2 06:55:44.784868 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 06:55:44.822763 systemd[1]: Started sshd@0-143.110.155.161:22-147.75.109.163:59916.service - OpenSSH per-connection server daemon (147.75.109.163:59916). Jul 2 06:55:45.017085 sshd[1362]: Accepted publickey for core from 147.75.109.163 port 59916 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:55:45.021562 sshd[1362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:45.039822 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 06:55:45.049841 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 06:55:45.056555 systemd-logind[1271]: New session 1 of user core. Jul 2 06:55:45.078943 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 06:55:45.087875 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 06:55:45.103822 (systemd)[1365]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:45.338414 systemd[1365]: Queued start job for default target default.target. Jul 2 06:55:45.351843 systemd[1365]: Reached target paths.target - Paths. Jul 2 06:55:45.352169 systemd[1365]: Reached target sockets.target - Sockets. Jul 2 06:55:45.353930 systemd[1365]: Reached target timers.target - Timers. Jul 2 06:55:45.354147 systemd[1365]: Reached target basic.target - Basic System. Jul 2 06:55:45.354371 systemd[1365]: Reached target default.target - Main User Target. Jul 2 06:55:45.354582 systemd[1365]: Startup finished in 231ms. Jul 2 06:55:45.354921 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 06:55:45.357739 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 06:55:45.487248 systemd[1]: Started sshd@1-143.110.155.161:22-147.75.109.163:59918.service - OpenSSH per-connection server daemon (147.75.109.163:59918). Jul 2 06:55:45.637306 sshd[1374]: Accepted publickey for core from 147.75.109.163 port 59918 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:55:45.644740 sshd[1374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:45.661676 systemd-logind[1271]: New session 2 of user core. Jul 2 06:55:45.668718 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 06:55:45.789860 sshd[1374]: pam_unix(sshd:session): session closed for user core Jul 2 06:55:45.805286 systemd[1]: sshd@1-143.110.155.161:22-147.75.109.163:59918.service: Deactivated successfully. 
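Note: the kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a freshly provisioned node that file is normally written later during cluster bootstrap (for example by kubeadm), after which the unit stops failing. A minimal sketch, assuming only the path quoted in the error, that checks for the same condition:

package main

import (
    "errors"
    "fmt"
    "log"
    "os"
)

func main() {
    // Path taken verbatim from the kubelet error above.
    const kubeletConfig = "/var/lib/kubelet/config.yaml"

    _, err := os.Stat(kubeletConfig)
    switch {
    case err == nil:
        fmt.Println(kubeletConfig, "exists; kubelet should get past config loading")
    case errors.Is(err, os.ErrNotExist):
        fmt.Println(kubeletConfig, "is missing; kubelet will keep exiting with status 1")
    default:
        log.Fatal(err)
    }
}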
Jul 2 06:55:45.806713 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 06:55:45.811769 systemd-logind[1271]: Session 2 logged out. Waiting for processes to exit. Jul 2 06:55:45.842652 systemd[1]: Started sshd@2-143.110.155.161:22-147.75.109.163:59922.service - OpenSSH per-connection server daemon (147.75.109.163:59922). Jul 2 06:55:45.865926 systemd-logind[1271]: Removed session 2. Jul 2 06:55:45.942624 sshd[1380]: Accepted publickey for core from 147.75.109.163 port 59922 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:55:45.946192 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:45.975960 systemd-logind[1271]: New session 3 of user core. Jul 2 06:55:45.994417 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 06:55:46.076274 sshd[1380]: pam_unix(sshd:session): session closed for user core Jul 2 06:55:46.096570 systemd[1]: sshd@2-143.110.155.161:22-147.75.109.163:59922.service: Deactivated successfully. Jul 2 06:55:46.099932 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 06:55:46.102684 systemd-logind[1271]: Session 3 logged out. Waiting for processes to exit. Jul 2 06:55:46.117674 systemd[1]: Started sshd@3-143.110.155.161:22-147.75.109.163:59930.service - OpenSSH per-connection server daemon (147.75.109.163:59930). Jul 2 06:55:46.120505 systemd-logind[1271]: Removed session 3. Jul 2 06:55:46.220479 sshd[1387]: Accepted publickey for core from 147.75.109.163 port 59930 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:55:46.223706 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:46.235136 systemd-logind[1271]: New session 4 of user core. Jul 2 06:55:46.242068 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 06:55:46.334465 sshd[1387]: pam_unix(sshd:session): session closed for user core Jul 2 06:55:46.357175 systemd[1]: sshd@3-143.110.155.161:22-147.75.109.163:59930.service: Deactivated successfully. Jul 2 06:55:46.361423 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 06:55:46.369272 systemd-logind[1271]: Session 4 logged out. Waiting for processes to exit. Jul 2 06:55:46.382846 systemd[1]: Started sshd@4-143.110.155.161:22-147.75.109.163:59932.service - OpenSSH per-connection server daemon (147.75.109.163:59932). Jul 2 06:55:46.384975 systemd-logind[1271]: Removed session 4. Jul 2 06:55:46.449301 sshd[1393]: Accepted publickey for core from 147.75.109.163 port 59932 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:55:46.451917 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:55:46.472980 systemd-logind[1271]: New session 5 of user core. Jul 2 06:55:46.484432 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 06:55:46.614291 sudo[1396]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 06:55:46.614840 sudo[1396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:55:47.010810 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 06:55:47.976081 dockerd[1405]: time="2024-07-02T06:55:47.975990423Z" level=info msg="Starting up" Jul 2 06:55:48.147473 systemd[1]: var-lib-docker-metacopy\x2dcheck2672005572-merged.mount: Deactivated successfully. Jul 2 06:55:48.194953 dockerd[1405]: time="2024-07-02T06:55:48.194887041Z" level=info msg="Loading containers: start." 
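Note: for each accepted login above, sshd logs the SHA256 fingerprint of the public key used by user core. A hedged sketch using golang.org/x/crypto/ssh that computes the same kind of fingerprint from an authorized_keys entry; the file path here is an assumption for illustration, not taken from this log:

package main

import (
    "fmt"
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    // Any authorized_keys-format file works; this path is illustrative only.
    data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    if err != nil {
        log.Fatal(err)
    }

    // Parse the first key and print its SHA256 fingerprint, which should
    // match the "RSA SHA256:..." value sshd logs when it accepts the key.
    pub, comment, _, _, err := ssh.ParseAuthorizedKey(data)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s %s %s\n", pub.Type(), ssh.FingerprintSHA256(pub), comment)
}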
Jul 2 06:55:48.597755 kernel: Initializing XFRM netlink socket Jul 2 06:55:49.011802 systemd-networkd[1093]: docker0: Link UP Jul 2 06:55:49.056777 dockerd[1405]: time="2024-07-02T06:55:49.056688124Z" level=info msg="Loading containers: done." Jul 2 06:55:49.374952 dockerd[1405]: time="2024-07-02T06:55:49.360439303Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 06:55:49.374952 dockerd[1405]: time="2024-07-02T06:55:49.360797858Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 06:55:49.374952 dockerd[1405]: time="2024-07-02T06:55:49.360963523Z" level=info msg="Daemon has completed initialization" Jul 2 06:55:49.484812 dockerd[1405]: time="2024-07-02T06:55:49.484227423Z" level=info msg="API listen on /run/docker.sock" Jul 2 06:55:49.485540 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 06:55:51.192738 containerd[1278]: time="2024-07-02T06:55:51.192658200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 06:55:52.221716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484134605.mount: Deactivated successfully. Jul 2 06:55:53.909369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 06:55:53.910862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:53.910936 systemd[1]: kubelet.service: Consumed 1.479s CPU time. Jul 2 06:55:53.920255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:55:54.269788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:55:54.507006 kubelet[1602]: E0702 06:55:54.506937 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:55:54.518312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:55:54.518515 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
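Note: once dockerd reports "API listen on /run/docker.sock" and systemd marks docker.service started (above), the daemon can be queried over that socket. A hedged sketch assuming the Docker Go SDK (github.com/docker/docker/client, an API version contemporary with the 24.0.x daemon in this log) and the default socket:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    // FromEnv falls back to the default unix:///var/run/docker.sock.
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    ctx := context.Background()
    if _, err := cli.Ping(ctx); err != nil {
        log.Fatal("daemon not reachable:", err)
    }

    containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("daemon reachable, %d containers\n", len(containers))
}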
Jul 2 06:55:56.957632 containerd[1278]: time="2024-07-02T06:55:56.957256690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:56.963454 containerd[1278]: time="2024-07-02T06:55:56.963372759Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 06:55:56.975768 containerd[1278]: time="2024-07-02T06:55:56.975698864Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:56.981958 containerd[1278]: time="2024-07-02T06:55:56.981892868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:56.987833 containerd[1278]: time="2024-07-02T06:55:56.986925717Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 5.794197057s" Jul 2 06:55:56.988928 containerd[1278]: time="2024-07-02T06:55:56.988867661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 06:55:56.989257 containerd[1278]: time="2024-07-02T06:55:56.988827478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:55:57.053219 containerd[1278]: time="2024-07-02T06:55:57.053156506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 06:55:57.608825 systemd[1]: Started sshd@5-143.110.155.161:22-180.101.88.240:40434.service - OpenSSH per-connection server daemon (180.101.88.240:40434). 
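Note: the kube-apiserver pull above reports roughly 32.77 MB read in about 5.79 s, i.e. on the order of 5-6 MB/s. The snippet below is only that back-of-the-envelope division, using the figures quoted in the log; it ignores layer caching and decompression.

package main

import "fmt"

func main() {
    // "bytes read=32771801" and "in 5.794197057s", both from the entries above.
    const bytesRead = 32771801.0
    const seconds = 5.794197057

    fmt.Printf("effective pull rate: %.2f MB/s\n", bytesRead/seconds/1e6)
}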
Jul 2 06:55:58.812723 sshd[1621]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:56:00.872690 sshd[1615]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:56:01.185712 sshd[1622]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:56:01.469047 containerd[1278]: time="2024-07-02T06:56:01.466712626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:01.475976 containerd[1278]: time="2024-07-02T06:56:01.475876781Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 06:56:01.489546 containerd[1278]: time="2024-07-02T06:56:01.489463995Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:01.511174 containerd[1278]: time="2024-07-02T06:56:01.511074415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:01.525685 containerd[1278]: time="2024-07-02T06:56:01.524927311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:01.526623 containerd[1278]: time="2024-07-02T06:56:01.526551382Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 4.472861607s" Jul 2 06:56:01.527330 containerd[1278]: time="2024-07-02T06:56:01.526625139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 06:56:01.648850 containerd[1278]: time="2024-07-02T06:56:01.648786018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 06:56:03.663076 sshd[1615]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:56:03.995717 sshd[1632]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:56:04.660320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 06:56:04.660726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:04.673334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:04.986224 containerd[1278]: time="2024-07-02T06:56:04.984829220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:05.014763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 06:56:05.024938 containerd[1278]: time="2024-07-02T06:56:05.024774788Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 06:56:05.034918 containerd[1278]: time="2024-07-02T06:56:05.034687856Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:05.042234 containerd[1278]: time="2024-07-02T06:56:05.042141483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:05.064705 containerd[1278]: time="2024-07-02T06:56:05.064622878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:05.068145 containerd[1278]: time="2024-07-02T06:56:05.068030938Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 3.418851473s" Jul 2 06:56:05.068145 containerd[1278]: time="2024-07-02T06:56:05.068119311Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 06:56:05.185557 containerd[1278]: time="2024-07-02T06:56:05.182015335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 06:56:05.233281 kubelet[1637]: E0702 06:56:05.233206 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:56:05.254312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:56:05.254540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:56:06.409856 sshd[1615]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:56:06.564210 sshd[1615]: Received disconnect from 180.101.88.240 port 40434:11: [preauth] Jul 2 06:56:06.564210 sshd[1615]: Disconnected from authenticating user root 180.101.88.240 port 40434 [preauth] Jul 2 06:56:06.578420 systemd[1]: sshd@5-143.110.155.161:22-180.101.88.240:40434.service: Deactivated successfully. Jul 2 06:56:07.931939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234513462.mount: Deactivated successfully. 
Jul 2 06:56:10.038524 containerd[1278]: time="2024-07-02T06:56:10.038435379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.040992 containerd[1278]: time="2024-07-02T06:56:10.040909751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 06:56:10.042987 containerd[1278]: time="2024-07-02T06:56:10.042898490Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.046400 containerd[1278]: time="2024-07-02T06:56:10.046332113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.050606 containerd[1278]: time="2024-07-02T06:56:10.050495308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:10.052601 containerd[1278]: time="2024-07-02T06:56:10.052523489Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 4.870442935s" Jul 2 06:56:10.052896 containerd[1278]: time="2024-07-02T06:56:10.052847136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 06:56:10.178201 containerd[1278]: time="2024-07-02T06:56:10.176394564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 06:56:11.146688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653730774.mount: Deactivated successfully. 
Jul 2 06:56:14.640753 containerd[1278]: time="2024-07-02T06:56:14.640678354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.645313 containerd[1278]: time="2024-07-02T06:56:14.645190280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 06:56:14.651852 containerd[1278]: time="2024-07-02T06:56:14.651741146Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.671285 containerd[1278]: time="2024-07-02T06:56:14.671205188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.690419 containerd[1278]: time="2024-07-02T06:56:14.690327082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:14.693527 containerd[1278]: time="2024-07-02T06:56:14.693413730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 4.516730357s" Jul 2 06:56:14.694229 containerd[1278]: time="2024-07-02T06:56:14.694033010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 06:56:14.825835 containerd[1278]: time="2024-07-02T06:56:14.825772569Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 06:56:15.416838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 06:56:15.417826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:15.427331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:15.752266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:15.768415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261158156.mount: Deactivated successfully. 
Jul 2 06:56:15.833140 containerd[1278]: time="2024-07-02T06:56:15.832128783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:15.842017 containerd[1278]: time="2024-07-02T06:56:15.841691849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 06:56:15.867999 containerd[1278]: time="2024-07-02T06:56:15.866864363Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:15.873543 containerd[1278]: time="2024-07-02T06:56:15.873478360Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:15.878549 containerd[1278]: time="2024-07-02T06:56:15.878481791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:15.880351 containerd[1278]: time="2024-07-02T06:56:15.880274308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.054087987s" Jul 2 06:56:15.880683 containerd[1278]: time="2024-07-02T06:56:15.880644190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 06:56:15.956909 kubelet[1723]: E0702 06:56:15.955400 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:56:15.960618 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:56:15.960894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:56:15.983136 containerd[1278]: time="2024-07-02T06:56:15.982845724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 06:56:16.864900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954646022.mount: Deactivated successfully. 
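Note: each PullImage/ImageCreate/"Pulled image" sequence above is the CRI image service fetching a Kubernetes release image into the k8s.io namespace. A hedged sketch of the same operation through the containerd Go client (assumed import path github.com/containerd/containerd), using the pause:3.9 reference that was just pulled:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull and unpack the same pause image the log shows being fetched above.
    img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    size, err := img.Size(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("pulled %s (%d bytes of content reported by containerd)\n", img.Name(), size)
}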
Jul 2 06:56:23.241925 containerd[1278]: time="2024-07-02T06:56:23.239502355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:23.245234 containerd[1278]: time="2024-07-02T06:56:23.243583016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 06:56:23.255558 containerd[1278]: time="2024-07-02T06:56:23.255344849Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:23.270907 containerd[1278]: time="2024-07-02T06:56:23.270834264Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:23.311111 containerd[1278]: time="2024-07-02T06:56:23.311024792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:56:23.312755 containerd[1278]: time="2024-07-02T06:56:23.312652272Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 7.329733872s" Jul 2 06:56:23.313073 containerd[1278]: time="2024-07-02T06:56:23.313027894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 06:56:23.891238 update_engine[1272]: I0702 06:56:23.889384 1272 update_attempter.cc:509] Updating boot flags... Jul 2 06:56:24.091128 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1811) Jul 2 06:56:26.159408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 06:56:26.159709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:26.201708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:26.449880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:26.628614 kubelet[1856]: E0702 06:56:26.628540 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:56:26.631850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:56:26.632115 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:56:29.748571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:29.782219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:29.866988 systemd[1]: Reloading. Jul 2 06:56:30.466308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 06:56:30.680883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:30.710401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:30.726263 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:56:30.726583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:30.754333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:31.101902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:31.327352 kubelet[1941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:56:31.327966 kubelet[1941]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:56:31.328079 kubelet[1941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:56:31.328392 kubelet[1941]: I0702 06:56:31.328338 1941 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:56:32.462989 kubelet[1941]: I0702 06:56:32.461347 1941 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 06:56:32.462989 kubelet[1941]: I0702 06:56:32.461427 1941 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:56:32.462989 kubelet[1941]: I0702 06:56:32.462279 1941 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 06:56:32.500504 kubelet[1941]: E0702 06:56:32.500445 1941 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.110.155.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.502848 kubelet[1941]: I0702 06:56:32.502640 1941 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:56:32.562178 kubelet[1941]: I0702 06:56:32.556509 1941 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 06:56:32.580641 kubelet[1941]: I0702 06:56:32.577427 1941 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:56:32.580641 kubelet[1941]: I0702 06:56:32.580209 1941 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.5-b-18394828d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:56:32.582618 kubelet[1941]: I0702 06:56:32.581920 1941 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:56:32.582618 kubelet[1941]: I0702 06:56:32.581993 1941 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:56:32.582618 kubelet[1941]: I0702 06:56:32.582242 1941 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:56:32.584219 kubelet[1941]: I0702 06:56:32.583641 1941 kubelet.go:400] "Attempting to sync node with API server" Jul 2 06:56:32.584219 kubelet[1941]: I0702 06:56:32.583680 1941 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:56:32.584219 kubelet[1941]: I0702 06:56:32.583714 1941 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:56:32.584219 kubelet[1941]: I0702 06:56:32.583738 1941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:56:32.588962 kubelet[1941]: W0702 06:56:32.588316 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.588962 kubelet[1941]: E0702 06:56:32.588434 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.593491 kubelet[1941]: I0702 06:56:32.590367 1941 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 
2 06:56:32.601685 kubelet[1941]: I0702 06:56:32.598518 1941 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:56:32.601685 kubelet[1941]: W0702 06:56:32.601604 1941 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 06:56:32.603059 kubelet[1941]: I0702 06:56:32.602972 1941 server.go:1264] "Started kubelet" Jul 2 06:56:32.616518 kubelet[1941]: W0702 06:56:32.610545 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.616518 kubelet[1941]: E0702 06:56:32.616428 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.630561 kubelet[1941]: I0702 06:56:32.621696 1941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:56:32.642055 kubelet[1941]: I0702 06:56:32.635576 1941 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:56:32.642055 kubelet[1941]: I0702 06:56:32.637604 1941 server.go:455] "Adding debug handlers to kubelet server" Jul 2 06:56:32.658516 kubelet[1941]: I0702 06:56:32.649150 1941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:56:32.658516 kubelet[1941]: I0702 06:56:32.649534 1941 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:56:32.658516 kubelet[1941]: E0702 06:56:32.649797 1941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.155.161:6443/api/v1/namespaces/default/events\": dial tcp 143.110.155.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-b-18394828d7.17de5305ec163c64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-b-18394828d7,UID:ci-3815.2.5-b-18394828d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-b-18394828d7,},FirstTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,LastTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-b-18394828d7,}" Jul 2 06:56:32.658516 kubelet[1941]: I0702 06:56:32.653194 1941 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:56:32.658516 kubelet[1941]: I0702 06:56:32.653832 1941 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 06:56:32.658516 kubelet[1941]: I0702 06:56:32.653966 1941 reconciler.go:26] "Reconciler: start to sync state" Jul 2 06:56:32.658516 kubelet[1941]: W0702 06:56:32.655707 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.659123 
kubelet[1941]: E0702 06:56:32.655794 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.659123 kubelet[1941]: E0702 06:56:32.655918 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.155.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-b-18394828d7?timeout=10s\": dial tcp 143.110.155.161:6443: connect: connection refused" interval="200ms" Jul 2 06:56:32.710008 kubelet[1941]: E0702 06:56:32.709708 1941 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:56:32.710008 kubelet[1941]: I0702 06:56:32.709992 1941 factory.go:221] Registration of the containerd container factory successfully Jul 2 06:56:32.710008 kubelet[1941]: I0702 06:56:32.710011 1941 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:56:32.710404 kubelet[1941]: I0702 06:56:32.710155 1941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:56:32.785400 kubelet[1941]: I0702 06:56:32.757464 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.785400 kubelet[1941]: E0702 06:56:32.758157 1941 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.155.161:6443/api/v1/nodes\": dial tcp 143.110.155.161:6443: connect: connection refused" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.785400 kubelet[1941]: I0702 06:56:32.760005 1941 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:56:32.785400 kubelet[1941]: I0702 06:56:32.760050 1941 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:56:32.785400 kubelet[1941]: I0702 06:56:32.760100 1941 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:56:32.786058 kubelet[1941]: I0702 06:56:32.786004 1941 policy_none.go:49] "None policy: Start" Jul 2 06:56:32.789486 kubelet[1941]: I0702 06:56:32.789447 1941 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:56:32.789708 kubelet[1941]: I0702 06:56:32.789691 1941 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:56:32.824448 kubelet[1941]: I0702 06:56:32.823747 1941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:56:32.825295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 06:56:32.831396 kubelet[1941]: I0702 06:56:32.831352 1941 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 06:56:32.831788 kubelet[1941]: I0702 06:56:32.831766 1941 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:56:32.831984 kubelet[1941]: I0702 06:56:32.831971 1941 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 06:56:32.832353 kubelet[1941]: E0702 06:56:32.832305 1941 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:56:32.841685 kubelet[1941]: W0702 06:56:32.841611 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.841685 kubelet[1941]: E0702 06:56:32.841685 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:32.856958 kubelet[1941]: E0702 06:56:32.856907 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.155.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-b-18394828d7?timeout=10s\": dial tcp 143.110.155.161:6443: connect: connection refused" interval="400ms" Jul 2 06:56:32.859836 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 06:56:32.870637 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 06:56:32.886853 kubelet[1941]: I0702 06:56:32.886805 1941 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:56:32.887588 kubelet[1941]: I0702 06:56:32.887522 1941 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 06:56:32.895689 kubelet[1941]: I0702 06:56:32.895269 1941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:56:32.897707 kubelet[1941]: E0702 06:56:32.897663 1941 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:32.932964 kubelet[1941]: I0702 06:56:32.932857 1941 topology_manager.go:215] "Topology Admit Handler" podUID="3c12ec9c76a417621f817f654c1566dc" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.935717 kubelet[1941]: I0702 06:56:32.935567 1941 topology_manager.go:215] "Topology Admit Handler" podUID="a79d1dbf6e326820698d576f1f823270" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.938962 kubelet[1941]: I0702 06:56:32.938896 1941 topology_manager.go:215] "Topology Admit Handler" podUID="5ae42faf3be5ac935f19c452725fd41b" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.959979 kubelet[1941]: I0702 06:56:32.959939 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.965353 kubelet[1941]: E0702 06:56:32.965310 1941 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.155.161:6443/api/v1/nodes\": dial tcp 143.110.155.161:6443: connect: connection 
refused" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:32.970322 systemd[1]: Created slice kubepods-burstable-pod3c12ec9c76a417621f817f654c1566dc.slice - libcontainer container kubepods-burstable-pod3c12ec9c76a417621f817f654c1566dc.slice. Jul 2 06:56:33.008709 systemd[1]: Created slice kubepods-burstable-pod5ae42faf3be5ac935f19c452725fd41b.slice - libcontainer container kubepods-burstable-pod5ae42faf3be5ac935f19c452725fd41b.slice. Jul 2 06:56:33.022849 systemd[1]: Created slice kubepods-burstable-poda79d1dbf6e326820698d576f1f823270.slice - libcontainer container kubepods-burstable-poda79d1dbf6e326820698d576f1f823270.slice. Jul 2 06:56:33.058245 kubelet[1941]: I0702 06:56:33.057622 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058245 kubelet[1941]: I0702 06:56:33.057689 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058245 kubelet[1941]: I0702 06:56:33.057723 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ae42faf3be5ac935f19c452725fd41b-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-b-18394828d7\" (UID: \"5ae42faf3be5ac935f19c452725fd41b\") " pod="kube-system/kube-scheduler-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058245 kubelet[1941]: I0702 06:56:33.057752 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058245 kubelet[1941]: I0702 06:56:33.057780 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058630 kubelet[1941]: I0702 06:56:33.057808 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058630 kubelet[1941]: I0702 06:56:33.057839 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: 
\"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058630 kubelet[1941]: I0702 06:56:33.057864 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.058630 kubelet[1941]: I0702 06:56:33.057892 1941 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.258595 kubelet[1941]: E0702 06:56:33.258533 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.155.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-b-18394828d7?timeout=10s\": dial tcp 143.110.155.161:6443: connect: connection refused" interval="800ms" Jul 2 06:56:33.302053 kubelet[1941]: E0702 06:56:33.301986 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:33.312198 containerd[1278]: time="2024-07-02T06:56:33.311969193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-b-18394828d7,Uid:3c12ec9c76a417621f817f654c1566dc,Namespace:kube-system,Attempt:0,}" Jul 2 06:56:33.317695 kubelet[1941]: E0702 06:56:33.316465 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:33.321852 containerd[1278]: time="2024-07-02T06:56:33.321778359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-b-18394828d7,Uid:5ae42faf3be5ac935f19c452725fd41b,Namespace:kube-system,Attempt:0,}" Jul 2 06:56:33.331205 kubelet[1941]: E0702 06:56:33.331161 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:33.332205 containerd[1278]: time="2024-07-02T06:56:33.332143575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-b-18394828d7,Uid:a79d1dbf6e326820698d576f1f823270,Namespace:kube-system,Attempt:0,}" Jul 2 06:56:33.387704 kubelet[1941]: I0702 06:56:33.385465 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.387704 kubelet[1941]: E0702 06:56:33.387638 1941 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.155.161:6443/api/v1/nodes\": dial tcp 143.110.155.161:6443: connect: connection refused" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:33.719424 kubelet[1941]: W0702 06:56:33.719060 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 
06:56:33.719424 kubelet[1941]: E0702 06:56:33.719363 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:33.989315 kubelet[1941]: W0702 06:56:33.988973 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:33.989315 kubelet[1941]: E0702 06:56:33.989157 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.060401 kubelet[1941]: E0702 06:56:34.060318 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.155.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-b-18394828d7?timeout=10s\": dial tcp 143.110.155.161:6443: connect: connection refused" interval="1.6s" Jul 2 06:56:34.183150 kubelet[1941]: W0702 06:56:34.182988 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.183150 kubelet[1941]: E0702 06:56:34.183056 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.188357 kubelet[1941]: W0702 06:56:34.187691 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.188357 kubelet[1941]: E0702 06:56:34.187789 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.189578 kubelet[1941]: I0702 06:56:34.189437 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:34.190409 kubelet[1941]: E0702 06:56:34.190362 1941 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.155.161:6443/api/v1/nodes\": dial tcp 143.110.155.161:6443: connect: connection refused" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:34.234960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978239787.mount: Deactivated successfully. 
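The entries above show the kubelet unable to reach the API server at 143.110.155.161:6443 while the control-plane static pods are still being created: every reflector list and the node-lease controller fail with "connect: connection refused", and the lease controller doubles its retry interval (400ms, 800ms, 1.6s). As a purely illustrative sketch (not kubelet code), the small Go probe below reproduces that failure mode and backoff shape; the address and first interval are taken from the log, the interval cap is an assumption.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const apiServer = "143.110.155.161:6443" // endpoint from the log lines above
        interval := 400 * time.Millisecond       // first retry interval reported by the lease controller
        const maxInterval = 7 * time.Second      // assumed cap; the log only shows up to 3.2s

        for {
            conn, err := net.DialTimeout("tcp", apiServer, 10*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            fmt.Printf("dial %s failed: %v; retrying in %s\n", apiServer, err, interval)
            time.Sleep(interval)
            if interval < maxInterval {
                interval *= 2 // matches the 400ms -> 800ms -> 1.6s -> 3.2s progression in the log
            }
        }
    }

Once the kube-apiserver static pod comes up on this node, the dial succeeds and the reflectors and lease controller recover, which is exactly the sequence the later entries record.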
Jul 2 06:56:34.284802 containerd[1278]: time="2024-07-02T06:56:34.284407171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.291847 containerd[1278]: time="2024-07-02T06:56:34.291438118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 06:56:34.294255 containerd[1278]: time="2024-07-02T06:56:34.294204305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.300764 containerd[1278]: time="2024-07-02T06:56:34.300476404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:56:34.303510 containerd[1278]: time="2024-07-02T06:56:34.303432101Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.323804 containerd[1278]: time="2024-07-02T06:56:34.323676400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:56:34.328765 containerd[1278]: time="2024-07-02T06:56:34.328679802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.334645 containerd[1278]: time="2024-07-02T06:56:34.334559003Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.360226 containerd[1278]: time="2024-07-02T06:56:34.355013709Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.360226 containerd[1278]: time="2024-07-02T06:56:34.356885181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.043705447s" Jul 2 06:56:34.366983 containerd[1278]: time="2024-07-02T06:56:34.365505414Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.370134 containerd[1278]: time="2024-07-02T06:56:34.369600911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.377443 containerd[1278]: time="2024-07-02T06:56:34.373896493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 
06:56:34.377443 containerd[1278]: time="2024-07-02T06:56:34.375795726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.397901 containerd[1278]: time="2024-07-02T06:56:34.397827802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.400012 containerd[1278]: time="2024-07-02T06:56:34.399935924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.066974861s" Jul 2 06:56:34.407341 containerd[1278]: time="2024-07-02T06:56:34.407269611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:56:34.420297 containerd[1278]: time="2024-07-02T06:56:34.420225086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.098002828s" Jul 2 06:56:34.627376 kubelet[1941]: E0702 06:56:34.627210 1941 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.110.155.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:34.934082 containerd[1278]: time="2024-07-02T06:56:34.933750770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:34.934082 containerd[1278]: time="2024-07-02T06:56:34.933834594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:34.934082 containerd[1278]: time="2024-07-02T06:56:34.933853847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:34.934082 containerd[1278]: time="2024-07-02T06:56:34.933867078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:34.945576 containerd[1278]: time="2024-07-02T06:56:34.945228937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:34.945576 containerd[1278]: time="2024-07-02T06:56:34.945318856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:34.945576 containerd[1278]: time="2024-07-02T06:56:34.945344008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:34.945576 containerd[1278]: time="2024-07-02T06:56:34.945360180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:34.947521 containerd[1278]: time="2024-07-02T06:56:34.947386229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:34.947751 containerd[1278]: time="2024-07-02T06:56:34.947708404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:34.947891 containerd[1278]: time="2024-07-02T06:56:34.947862959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:34.948024 containerd[1278]: time="2024-07-02T06:56:34.947997326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:35.029493 systemd[1]: Started cri-containerd-d6ac2cec923625eed40124fa787d925f0a02a97c25780a5d05be265f41ae7c44.scope - libcontainer container d6ac2cec923625eed40124fa787d925f0a02a97c25780a5d05be265f41ae7c44. Jul 2 06:56:35.051436 systemd[1]: Started cri-containerd-05861b350f2f98aaea8a653bb947d946f600441c87622d65bdb9c4ab312e8575.scope - libcontainer container 05861b350f2f98aaea8a653bb947d946f600441c87622d65bdb9c4ab312e8575. Jul 2 06:56:35.063850 systemd[1]: Started cri-containerd-de0b5f9c35a16a5ba22256b17646bf16eeff46fdcfa93d5fa26a5f5c0e655010.scope - libcontainer container de0b5f9c35a16a5ba22256b17646bf16eeff46fdcfa93d5fa26a5f5c0e655010. Jul 2 06:56:35.210159 containerd[1278]: time="2024-07-02T06:56:35.209976891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-b-18394828d7,Uid:5ae42faf3be5ac935f19c452725fd41b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ac2cec923625eed40124fa787d925f0a02a97c25780a5d05be265f41ae7c44\"" Jul 2 06:56:35.216308 kubelet[1941]: E0702 06:56:35.214858 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:35.221429 containerd[1278]: time="2024-07-02T06:56:35.221110340Z" level=info msg="CreateContainer within sandbox \"d6ac2cec923625eed40124fa787d925f0a02a97c25780a5d05be265f41ae7c44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 06:56:35.256488 containerd[1278]: time="2024-07-02T06:56:35.256421414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-b-18394828d7,Uid:a79d1dbf6e326820698d576f1f823270,Namespace:kube-system,Attempt:0,} returns sandbox id \"de0b5f9c35a16a5ba22256b17646bf16eeff46fdcfa93d5fa26a5f5c0e655010\"" Jul 2 06:56:35.270499 kubelet[1941]: E0702 06:56:35.263988 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:35.269174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042849948.mount: Deactivated successfully. 
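The ImageCreate/ImageUpdate events and "Pulled image registry.k8s.io/pause:3.8" messages above are containerd's CRI plugin fetching the sandbox (pause) image before it can start the three control-plane pod sandboxes. The fragment below is a hedged sketch of the same pull done through the containerd Go client; the socket path and the "k8s.io" namespace are the conventional ones and are assumed here, not taken from this log.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to containerd on its default socket (assumed path).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack the sandbox image seen in the events above.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }

The repo digest printed here should match the one recorded in the log (registry.k8s.io/pause@sha256:9001185023...), since the tag and digest refer to the same manifest.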
Jul 2 06:56:35.285696 containerd[1278]: time="2024-07-02T06:56:35.282601950Z" level=info msg="CreateContainer within sandbox \"de0b5f9c35a16a5ba22256b17646bf16eeff46fdcfa93d5fa26a5f5c0e655010\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 06:56:35.298933 containerd[1278]: time="2024-07-02T06:56:35.298218831Z" level=info msg="CreateContainer within sandbox \"d6ac2cec923625eed40124fa787d925f0a02a97c25780a5d05be265f41ae7c44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f732fcf89d45152a1f81f1bc27fbccf453fa7e02b5d830f8d3f1f48e92f810eb\"" Jul 2 06:56:35.304603 containerd[1278]: time="2024-07-02T06:56:35.303906966Z" level=info msg="StartContainer for \"f732fcf89d45152a1f81f1bc27fbccf453fa7e02b5d830f8d3f1f48e92f810eb\"" Jul 2 06:56:35.331478 containerd[1278]: time="2024-07-02T06:56:35.330996427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-b-18394828d7,Uid:3c12ec9c76a417621f817f654c1566dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"05861b350f2f98aaea8a653bb947d946f600441c87622d65bdb9c4ab312e8575\"" Jul 2 06:56:35.333533 kubelet[1941]: E0702 06:56:35.333480 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:35.344671 containerd[1278]: time="2024-07-02T06:56:35.344567281Z" level=info msg="CreateContainer within sandbox \"de0b5f9c35a16a5ba22256b17646bf16eeff46fdcfa93d5fa26a5f5c0e655010\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"791822cb072d4894a7da3d3a764e07a4b38b3706da7f9db2df0bc70c8fc79516\"" Jul 2 06:56:35.351857 containerd[1278]: time="2024-07-02T06:56:35.351795205Z" level=info msg="CreateContainer within sandbox \"05861b350f2f98aaea8a653bb947d946f600441c87622d65bdb9c4ab312e8575\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 06:56:35.352568 containerd[1278]: time="2024-07-02T06:56:35.351880919Z" level=info msg="StartContainer for \"791822cb072d4894a7da3d3a764e07a4b38b3706da7f9db2df0bc70c8fc79516\"" Jul 2 06:56:35.508646 systemd[1]: Started cri-containerd-f732fcf89d45152a1f81f1bc27fbccf453fa7e02b5d830f8d3f1f48e92f810eb.scope - libcontainer container f732fcf89d45152a1f81f1bc27fbccf453fa7e02b5d830f8d3f1f48e92f810eb. Jul 2 06:56:35.534435 systemd[1]: Started cri-containerd-791822cb072d4894a7da3d3a764e07a4b38b3706da7f9db2df0bc70c8fc79516.scope - libcontainer container 791822cb072d4894a7da3d3a764e07a4b38b3706da7f9db2df0bc70c8fc79516. 
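The RunPodSandbox / CreateContainer / StartContainer messages above are the kubelet driving containerd's CRI implementation over gRPC. Below is a hypothetical sketch of the first of those calls made directly against the CRI socket; the pod name, namespace and UID are copied from the log entry above, the socket path is the conventional containerd one, and everything else is an assumption. It is not something you would normally run against a node the kubelet is already managing.

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint (assumed default socket path).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Sandbox metadata taken from the RunPodSandbox entry above.
        req := &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-apiserver-ci-3815.2.5-b-18394828d7",
                    Namespace: "kube-system",
                    Uid:       "3c12ec9c76a417621f817f654c1566dc",
                    Attempt:   0,
                },
            },
        }
        resp, err := rt.RunPodSandbox(context.Background(), req)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
        // CreateContainer and StartContainer follow the same pattern, passing the
        // sandbox id returned here plus a ContainerConfig; omitted for brevity.
    }

The sandbox IDs in the log (d6ac2cec..., 05861b35..., de0b5f9c...) are exactly the values returned by this call, and they reappear as the cri-containerd-<id>.scope units that systemd reports starting.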
Jul 2 06:56:35.538520 containerd[1278]: time="2024-07-02T06:56:35.538405156Z" level=info msg="CreateContainer within sandbox \"05861b350f2f98aaea8a653bb947d946f600441c87622d65bdb9c4ab312e8575\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aebef919f46b2f999177f0e6cc209729b09c41d6b50f2b955ef331361e9000f8\"" Jul 2 06:56:35.540328 containerd[1278]: time="2024-07-02T06:56:35.540268925Z" level=info msg="StartContainer for \"aebef919f46b2f999177f0e6cc209729b09c41d6b50f2b955ef331361e9000f8\"" Jul 2 06:56:35.661283 kubelet[1941]: E0702 06:56:35.661187 1941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.155.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-b-18394828d7?timeout=10s\": dial tcp 143.110.155.161:6443: connect: connection refused" interval="3.2s" Jul 2 06:56:35.661694 kubelet[1941]: E0702 06:56:35.661559 1941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.155.161:6443/api/v1/namespaces/default/events\": dial tcp 143.110.155.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-b-18394828d7.17de5305ec163c64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-b-18394828d7,UID:ci-3815.2.5-b-18394828d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-b-18394828d7,},FirstTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,LastTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-b-18394828d7,}" Jul 2 06:56:35.674430 kubelet[1941]: W0702 06:56:35.673314 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:35.674430 kubelet[1941]: E0702 06:56:35.673421 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.155.161:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:35.676380 systemd[1]: Started cri-containerd-aebef919f46b2f999177f0e6cc209729b09c41d6b50f2b955ef331361e9000f8.scope - libcontainer container aebef919f46b2f999177f0e6cc209729b09c41d6b50f2b955ef331361e9000f8. 
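The lease the kubelet keeps failing to ensure above is an ordinary coordination.k8s.io/v1 Lease object in the kube-node-lease namespace, named after the node; the kubelet renews it as its heartbeat. As a hedged client-go illustration of that object (not the kubelet's own code), the sketch below creates such a lease once the API server is reachable; the kubeconfig path, holder identity and the 40-second duration are assumptions, not values from this log.

    package main

    import (
        "context"
        "log"
        "time"

        coordinationv1 "k8s.io/api/coordination/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the kubelet uses its own client certificate.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        node := "ci-3815.2.5-b-18394828d7" // node name from the log
        durationSeconds := int32(40)       // assumed default node lease duration
        now := metav1.NewMicroTime(time.Now())

        lease := &coordinationv1.Lease{
            ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
            Spec: coordinationv1.LeaseSpec{
                HolderIdentity:       &node,
                LeaseDurationSeconds: &durationSeconds,
                RenewTime:            &now,
            },
        }
        // The kubelet's lease controller issues equivalent get/create requests;
        // those are the calls failing with "connection refused" in the log above.
        _, err = cs.CoordinationV1().Leases("kube-node-lease").Create(
            context.Background(), lease, metav1.CreateOptions{})
        if err != nil {
            log.Fatal(err)
        }
    }

Because the lease lives behind the API server that this very node is still bringing up, the controller has nothing to do but back off and retry, which is why the interval keeps growing.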
Jul 2 06:56:35.680491 containerd[1278]: time="2024-07-02T06:56:35.680139918Z" level=info msg="StartContainer for \"f732fcf89d45152a1f81f1bc27fbccf453fa7e02b5d830f8d3f1f48e92f810eb\" returns successfully" Jul 2 06:56:35.702796 containerd[1278]: time="2024-07-02T06:56:35.702723438Z" level=info msg="StartContainer for \"791822cb072d4894a7da3d3a764e07a4b38b3706da7f9db2df0bc70c8fc79516\" returns successfully" Jul 2 06:56:35.793195 kubelet[1941]: I0702 06:56:35.793022 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:35.794366 kubelet[1941]: E0702 06:56:35.793509 1941 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.155.161:6443/api/v1/nodes\": dial tcp 143.110.155.161:6443: connect: connection refused" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:35.796982 containerd[1278]: time="2024-07-02T06:56:35.796916340Z" level=info msg="StartContainer for \"aebef919f46b2f999177f0e6cc209729b09c41d6b50f2b955ef331361e9000f8\" returns successfully" Jul 2 06:56:35.857178 kubelet[1941]: E0702 06:56:35.852388 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:35.890578 kubelet[1941]: E0702 06:56:35.888188 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:35.894850 kubelet[1941]: E0702 06:56:35.894817 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:36.309047 kubelet[1941]: W0702 06:56:36.279087 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.309047 kubelet[1941]: E0702 06:56:36.279220 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.155.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-b-18394828d7&limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.325437 kubelet[1941]: W0702 06:56:36.325341 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.325437 kubelet[1941]: E0702 06:56:36.325444 1941 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.155.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.409660 kubelet[1941]: W0702 06:56:36.409565 1941 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.409660 kubelet[1941]: E0702 06:56:36.409658 1941 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.155.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.155.161:6443: connect: connection refused Jul 2 06:56:36.898694 kubelet[1941]: E0702 06:56:36.898648 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:38.995733 kubelet[1941]: I0702 06:56:38.995238 1941 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:39.704193 kubelet[1941]: E0702 06:56:39.704138 1941 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.5-b-18394828d7\" not found" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:39.982354 kubelet[1941]: I0702 06:56:39.982189 1941 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:40.004235 kubelet[1941]: E0702 06:56:40.004187 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.110063 kubelet[1941]: E0702 06:56:40.110012 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.210604 kubelet[1941]: E0702 06:56:40.210544 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.312047 kubelet[1941]: E0702 06:56:40.311814 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.412993 kubelet[1941]: E0702 06:56:40.412929 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.515241 kubelet[1941]: E0702 06:56:40.515190 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.618228 kubelet[1941]: E0702 06:56:40.618070 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.718588 kubelet[1941]: E0702 06:56:40.718522 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.819835 kubelet[1941]: E0702 06:56:40.819783 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:40.925425 kubelet[1941]: E0702 06:56:40.925376 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.031106 kubelet[1941]: E0702 06:56:41.031027 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.132877 kubelet[1941]: E0702 06:56:41.132830 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.234068 kubelet[1941]: E0702 06:56:41.233926 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.338501 kubelet[1941]: E0702 06:56:41.338441 1941 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.441410 kubelet[1941]: E0702 06:56:41.441345 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.543296 kubelet[1941]: E0702 06:56:41.543148 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.644517 kubelet[1941]: E0702 06:56:41.644469 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.746257 kubelet[1941]: E0702 06:56:41.746195 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.854191 kubelet[1941]: E0702 06:56:41.854016 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:41.955764 kubelet[1941]: E0702 06:56:41.955703 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:42.059487 kubelet[1941]: E0702 06:56:42.058267 1941 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-b-18394828d7\" not found" Jul 2 06:56:42.608507 kubelet[1941]: I0702 06:56:42.608454 1941 apiserver.go:52] "Watching apiserver" Jul 2 06:56:42.654471 kubelet[1941]: I0702 06:56:42.654413 1941 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 06:56:43.250280 systemd[1]: Reloading. Jul 2 06:56:43.546931 kubelet[1941]: W0702 06:56:43.531279 1941 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 06:56:43.546931 kubelet[1941]: E0702 06:56:43.532366 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:43.947505 kubelet[1941]: E0702 06:56:43.947460 1941 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:44.006635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 06:56:44.277808 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
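The repeated "Nameserver limits exceeded" errors above come from the kubelet's pod DNS handling: it only passes the first few resolv.conf nameservers through to pods (three in current releases, treated as an assumption here), and this droplet's resolv.conf lists more, so the surplus entries are dropped and the applied line is logged. A minimal stdlib sketch of that check, not the kubelet's implementation:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // assumed limit matching the kubelet's warning

        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }

        if len(nameservers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %d found, only %v would be applied\n",
                len(nameservers), nameservers[:maxNameservers])
        } else {
            fmt.Println("nameservers within limit:", nameservers)
        }
    }

The warning is harmless but noisy, which is why it keeps reappearing for every pod sync in the rest of this log.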
Jul 2 06:56:44.279583 kubelet[1941]: E0702 06:56:44.279347 1941 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3815.2.5-b-18394828d7.17de5305ec163c64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-b-18394828d7,UID:ci-3815.2.5-b-18394828d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-b-18394828d7,},FirstTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,LastTimestamp:2024-07-02 06:56:32.602930276 +0000 UTC m=+1.464052910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-b-18394828d7,}" Jul 2 06:56:44.299560 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:56:44.299894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:44.299985 systemd[1]: kubelet.service: Consumed 1.558s CPU time. Jul 2 06:56:44.308442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:44.608435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:44.964877 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:56:44.965627 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:56:44.965776 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:56:44.966034 kubelet[2294]: I0702 06:56:44.965977 2294 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:56:44.995752 kubelet[2294]: I0702 06:56:44.995704 2294 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 06:56:44.995962 kubelet[2294]: I0702 06:56:44.995951 2294 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:56:44.996384 kubelet[2294]: I0702 06:56:44.996369 2294 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 06:56:44.998779 kubelet[2294]: I0702 06:56:44.998736 2294 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 06:56:45.001433 kubelet[2294]: I0702 06:56:45.001401 2294 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:56:45.047979 kubelet[2294]: I0702 06:56:45.047927 2294 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 06:56:45.048689 kubelet[2294]: I0702 06:56:45.048623 2294 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:56:45.049214 kubelet[2294]: I0702 06:56:45.048877 2294 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3815.2.5-b-18394828d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:56:45.049476 kubelet[2294]: I0702 06:56:45.049456 2294 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:56:45.049583 kubelet[2294]: I0702 06:56:45.049573 2294 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:56:45.049781 kubelet[2294]: I0702 06:56:45.049767 2294 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:56:45.050030 kubelet[2294]: I0702 06:56:45.050017 2294 kubelet.go:400] "Attempting to sync node with API server" Jul 2 06:56:45.050184 kubelet[2294]: I0702 06:56:45.050167 2294 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:56:45.050312 kubelet[2294]: I0702 06:56:45.050301 2294 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:56:45.050406 kubelet[2294]: I0702 06:56:45.050396 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:56:45.056768 kubelet[2294]: I0702 06:56:45.053251 2294 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:56:45.056768 kubelet[2294]: I0702 06:56:45.053589 2294 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:56:45.056768 kubelet[2294]: I0702 06:56:45.054519 2294 server.go:1264] "Started kubelet" Jul 2 06:56:45.057735 kubelet[2294]: I0702 06:56:45.057699 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:56:45.069608 kubelet[2294]: I0702 06:56:45.069549 2294 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:56:45.071906 kubelet[2294]: I0702 06:56:45.071866 2294 server.go:455] "Adding debug handlers to 
kubelet server" Jul 2 06:56:45.076479 kubelet[2294]: I0702 06:56:45.076397 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:56:45.080216 kubelet[2294]: I0702 06:56:45.080174 2294 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:56:45.091519 kubelet[2294]: I0702 06:56:45.091481 2294 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:56:45.096327 kubelet[2294]: I0702 06:56:45.096213 2294 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 06:56:45.096982 kubelet[2294]: I0702 06:56:45.096956 2294 reconciler.go:26] "Reconciler: start to sync state" Jul 2 06:56:45.122437 kubelet[2294]: I0702 06:56:45.118752 2294 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:56:45.122437 kubelet[2294]: I0702 06:56:45.118928 2294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:56:45.126557 kubelet[2294]: I0702 06:56:45.126483 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:56:45.136153 kubelet[2294]: I0702 06:56:45.135501 2294 factory.go:221] Registration of the containerd container factory successfully Jul 2 06:56:45.156773 kubelet[2294]: I0702 06:56:45.156336 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 06:56:45.156773 kubelet[2294]: I0702 06:56:45.156396 2294 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:56:45.156773 kubelet[2294]: I0702 06:56:45.156432 2294 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 06:56:45.156773 kubelet[2294]: E0702 06:56:45.156511 2294 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:56:45.180790 kubelet[2294]: E0702 06:56:45.175801 2294 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:56:45.192796 kubelet[2294]: E0702 06:56:45.192743 2294 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jul 2 06:56:45.243146 kubelet[2294]: I0702 06:56:45.228632 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.257285 kubelet[2294]: E0702 06:56:45.256786 2294 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 06:56:45.273568 kubelet[2294]: I0702 06:56:45.269460 2294 kubelet_node_status.go:112] "Node was previously registered" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.273568 kubelet[2294]: I0702 06:56:45.269602 2294 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.334800 kubelet[2294]: I0702 06:56:45.334762 2294 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:56:45.335129 kubelet[2294]: I0702 06:56:45.335081 2294 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:56:45.335240 kubelet[2294]: I0702 06:56:45.335229 2294 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:56:45.335556 kubelet[2294]: I0702 06:56:45.335533 2294 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 06:56:45.335679 kubelet[2294]: I0702 06:56:45.335646 2294 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 06:56:45.335774 kubelet[2294]: I0702 06:56:45.335761 2294 policy_none.go:49] "None policy: Start" Jul 2 06:56:45.339820 kubelet[2294]: I0702 06:56:45.337804 2294 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:56:45.339820 kubelet[2294]: I0702 06:56:45.337873 2294 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:56:45.339820 kubelet[2294]: I0702 06:56:45.338446 2294 state_mem.go:75] "Updated machine memory state" Jul 2 06:56:45.370654 kubelet[2294]: I0702 06:56:45.369929 2294 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:56:45.370654 kubelet[2294]: I0702 06:56:45.370250 2294 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 06:56:45.373066 kubelet[2294]: I0702 06:56:45.373039 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:56:45.471000 kubelet[2294]: I0702 06:56:45.470941 2294 topology_manager.go:215] "Topology Admit Handler" podUID="a79d1dbf6e326820698d576f1f823270" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.471392 kubelet[2294]: I0702 06:56:45.471362 2294 topology_manager.go:215] "Topology Admit Handler" podUID="5ae42faf3be5ac935f19c452725fd41b" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.471580 kubelet[2294]: I0702 06:56:45.471560 2294 topology_manager.go:215] "Topology Admit Handler" podUID="3c12ec9c76a417621f817f654c1566dc" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.523422 kubelet[2294]: I0702 06:56:45.523287 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " 
pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.523668 kubelet[2294]: I0702 06:56:45.523645 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.523768 kubelet[2294]: I0702 06:56:45.523752 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.523878 kubelet[2294]: I0702 06:56:45.523862 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.523966 kubelet[2294]: I0702 06:56:45.523952 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ae42faf3be5ac935f19c452725fd41b-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-b-18394828d7\" (UID: \"5ae42faf3be5ac935f19c452725fd41b\") " pod="kube-system/kube-scheduler-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.524059 kubelet[2294]: I0702 06:56:45.524046 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.524215 kubelet[2294]: I0702 06:56:45.524197 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.524327 kubelet[2294]: I0702 06:56:45.524311 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c12ec9c76a417621f817f654c1566dc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-b-18394828d7\" (UID: \"3c12ec9c76a417621f817f654c1566dc\") " pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.524422 kubelet[2294]: I0702 06:56:45.524408 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a79d1dbf6e326820698d576f1f823270-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-b-18394828d7\" (UID: \"a79d1dbf6e326820698d576f1f823270\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.524976 kubelet[2294]: W0702 06:56:45.524948 
2294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 06:56:45.525233 kubelet[2294]: E0702 06:56:45.525193 2294 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3815.2.5-b-18394828d7\" already exists" pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" Jul 2 06:56:45.525450 kubelet[2294]: W0702 06:56:45.525433 2294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 06:56:45.536384 kubelet[2294]: W0702 06:56:45.536348 2294 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 06:56:45.828755 kubelet[2294]: E0702 06:56:45.828099 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:45.850892 kubelet[2294]: E0702 06:56:45.850837 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:45.852501 kubelet[2294]: E0702 06:56:45.852452 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:46.065554 kubelet[2294]: I0702 06:56:46.065155 2294 apiserver.go:52] "Watching apiserver" Jul 2 06:56:46.097781 kubelet[2294]: I0702 06:56:46.097655 2294 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 06:56:46.280777 kubelet[2294]: E0702 06:56:46.280736 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:46.285055 kubelet[2294]: E0702 06:56:46.285009 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:46.289691 kubelet[2294]: E0702 06:56:46.289648 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:46.397872 kubelet[2294]: I0702 06:56:46.395917 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.5-b-18394828d7" podStartSLOduration=3.395890557 podStartE2EDuration="3.395890557s" podCreationTimestamp="2024-07-02 06:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:46.370462201 +0000 UTC m=+1.661907958" watchObservedRunningTime="2024-07-02 06:56:46.395890557 +0000 UTC m=+1.687336306" Jul 2 06:56:46.414371 kubelet[2294]: I0702 06:56:46.414175 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.5-b-18394828d7" podStartSLOduration=1.414142675 podStartE2EDuration="1.414142675s" podCreationTimestamp="2024-07-02 06:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:46.412146408 +0000 UTC m=+1.703592163" watchObservedRunningTime="2024-07-02 06:56:46.414142675 +0000 UTC m=+1.705588432" Jul 2 06:56:46.414371 kubelet[2294]: I0702 06:56:46.414366 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.5-b-18394828d7" podStartSLOduration=1.414354899 podStartE2EDuration="1.414354899s" podCreationTimestamp="2024-07-02 06:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:56:46.39638596 +0000 UTC m=+1.687831790" watchObservedRunningTime="2024-07-02 06:56:46.414354899 +0000 UTC m=+1.705800657" Jul 2 06:56:47.301520 kubelet[2294]: E0702 06:56:47.283932 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:47.301520 kubelet[2294]: E0702 06:56:47.285125 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:47.850212 systemd[1]: Started sshd@6-143.110.155.161:22-60.191.20.210:23456.service - OpenSSH per-connection server daemon (60.191.20.210:23456). Jul 2 06:56:48.133263 sudo[1396]: pam_unix(sudo:session): session closed for user root Jul 2 06:56:48.185994 sshd[1393]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:48.191514 systemd[1]: sshd@4-143.110.155.161:22-147.75.109.163:59932.service: Deactivated successfully. Jul 2 06:56:48.192814 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 06:56:48.193296 systemd[1]: session-5.scope: Consumed 6.946s CPU time. Jul 2 06:56:48.197509 systemd-logind[1271]: Session 5 logged out. Waiting for processes to exit. Jul 2 06:56:48.206830 systemd-logind[1271]: Removed session 5. Jul 2 06:56:48.312715 kubelet[2294]: E0702 06:56:48.310950 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:50.684549 kubelet[2294]: E0702 06:56:50.679962 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:51.319940 kubelet[2294]: E0702 06:56:51.319572 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:53.484609 kubelet[2294]: E0702 06:56:53.484566 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:54.340558 kubelet[2294]: E0702 06:56:54.340501 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:55.157562 systemd[1]: Started sshd@7-143.110.155.161:22-180.101.88.240:12016.service - OpenSSH per-connection server daemon (180.101.88.240:12016). 
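The restarted kubelet's container manager nodeConfig, logged earlier in this run, spells out the default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The sketch below simply encodes those thresholds and checks an invented set of node stats against them; it is a simplification of the kubelet's eviction manager, and the sample numbers are illustrative only.

    package main

    import "fmt"

    // Simplified mirror of the HardEvictionThresholds from the logged nodeConfig.
    // Quantities are bytes; percentages are fractions of capacity.
    type threshold struct {
        signal   string
        quantity uint64  // absolute minimum, 0 if the threshold is percentage-based
        percent  float64 // fraction of capacity, 0 if the threshold is quantity-based
    }

    var hardEviction = []threshold{
        {signal: "memory.available", quantity: 100 << 20}, // 100Mi
        {signal: "nodefs.available", percent: 0.10},
        {signal: "nodefs.inodesFree", percent: 0.05},
        {signal: "imagefs.available", percent: 0.15},
        {signal: "imagefs.inodesFree", percent: 0.05},
    }

    func main() {
        // Invented sample observations: {available, capacity} per signal.
        observed := map[string][2]uint64{
            "memory.available":   {80 << 20, 2 << 30},  // 80Mi free of 2Gi -> below 100Mi
            "nodefs.available":   {30 << 30, 80 << 30}, // plenty of disk
            "nodefs.inodesFree":  {900000, 5000000},    // well above 5%
            "imagefs.available":  {10 << 30, 80 << 30}, // 12.5% free -> below 15%
            "imagefs.inodesFree": {400000, 5000000},
        }

        for _, t := range hardEviction {
            avail, capacity := observed[t.signal][0], observed[t.signal][1]
            minAvail := t.quantity
            if t.percent > 0 {
                minAvail = uint64(float64(capacity) * t.percent)
            }
            if avail < minAvail {
                fmt.Printf("%s: %d < %d -> eviction threshold crossed\n", t.signal, avail, minAvail)
            }
        }
    }

On this node none of the thresholds are crossed, which is consistent with the eviction manager only complaining earlier about missing summary stats rather than about resource pressure.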
Jul 2 06:56:55.337666 kubelet[2294]: E0702 06:56:55.337623 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:56.373652 sshd[2354]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:56:56.644546 kubelet[2294]: I0702 06:56:56.644040 2294 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 06:56:56.646232 containerd[1278]: time="2024-07-02T06:56:56.646162292Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 06:56:56.648142 kubelet[2294]: I0702 06:56:56.647529 2294 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 06:56:57.435372 kubelet[2294]: I0702 06:56:57.435308 2294 topology_manager.go:215] "Topology Admit Handler" podUID="b8cf34fa-4afd-4d15-ae28-3f3d71015e38" podNamespace="kube-system" podName="kube-proxy-mbbms" Jul 2 06:56:57.451218 kubelet[2294]: I0702 06:56:57.451163 2294 topology_manager.go:215] "Topology Admit Handler" podUID="869e3041-21a1-42d1-a4fa-dc6dd66f9958" podNamespace="kube-flannel" podName="kube-flannel-ds-hbtm8" Jul 2 06:56:57.456897 systemd[1]: Created slice kubepods-besteffort-podb8cf34fa_4afd_4d15_ae28_3f3d71015e38.slice - libcontainer container kubepods-besteffort-podb8cf34fa_4afd_4d15_ae28_3f3d71015e38.slice. Jul 2 06:56:57.467401 kubelet[2294]: W0702 06:56:57.467326 2294 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.467719 kubelet[2294]: E0702 06:56:57.467692 2294 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.470035 kubelet[2294]: W0702 06:56:57.469986 2294 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.470620 kubelet[2294]: E0702 06:56:57.470590 2294 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.479569 kubelet[2294]: W0702 06:56:57.479522 2294 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 
'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.479868 kubelet[2294]: E0702 06:56:57.479847 2294 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.482010 systemd[1]: Created slice kubepods-burstable-pod869e3041_21a1_42d1_a4fa_dc6dd66f9958.slice - libcontainer container kubepods-burstable-pod869e3041_21a1_42d1_a4fa_dc6dd66f9958.slice. Jul 2 06:56:57.485623 kubelet[2294]: W0702 06:56:57.485566 2294 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.485830 kubelet[2294]: E0702 06:56:57.485643 2294 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-b-18394828d7" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-b-18394828d7' and this object Jul 2 06:56:57.504838 kubelet[2294]: I0702 06:56:57.504781 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-xtables-lock\") pod \"kube-proxy-mbbms\" (UID: \"b8cf34fa-4afd-4d15-ae28-3f3d71015e38\") " pod="kube-system/kube-proxy-mbbms" Jul 2 06:56:57.505239 kubelet[2294]: I0702 06:56:57.505197 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nwn\" (UniqueName: \"kubernetes.io/projected/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-kube-api-access-f4nwn\") pod \"kube-proxy-mbbms\" (UID: \"b8cf34fa-4afd-4d15-ae28-3f3d71015e38\") " pod="kube-system/kube-proxy-mbbms" Jul 2 06:56:57.505420 kubelet[2294]: I0702 06:56:57.505393 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/869e3041-21a1-42d1-a4fa-dc6dd66f9958-cni-plugin\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:57.505544 kubelet[2294]: I0702 06:56:57.505527 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/869e3041-21a1-42d1-a4fa-dc6dd66f9958-cni\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:57.505641 kubelet[2294]: I0702 06:56:57.505625 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/869e3041-21a1-42d1-a4fa-dc6dd66f9958-flannel-cfg\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:57.505881 kubelet[2294]: I0702 06:56:57.505726 2294 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/869e3041-21a1-42d1-a4fa-dc6dd66f9958-xtables-lock\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:57.506038 kubelet[2294]: I0702 06:56:57.506016 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qxj\" (UniqueName: \"kubernetes.io/projected/869e3041-21a1-42d1-a4fa-dc6dd66f9958-kube-api-access-g5qxj\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:57.506191 kubelet[2294]: I0702 06:56:57.506170 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-kube-proxy\") pod \"kube-proxy-mbbms\" (UID: \"b8cf34fa-4afd-4d15-ae28-3f3d71015e38\") " pod="kube-system/kube-proxy-mbbms" Jul 2 06:56:57.506329 kubelet[2294]: I0702 06:56:57.506302 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-lib-modules\") pod \"kube-proxy-mbbms\" (UID: \"b8cf34fa-4afd-4d15-ae28-3f3d71015e38\") " pod="kube-system/kube-proxy-mbbms" Jul 2 06:56:57.506445 kubelet[2294]: I0702 06:56:57.506427 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/869e3041-21a1-42d1-a4fa-dc6dd66f9958-run\") pod \"kube-flannel-ds-hbtm8\" (UID: \"869e3041-21a1-42d1-a4fa-dc6dd66f9958\") " pod="kube-flannel/kube-flannel-ds-hbtm8" Jul 2 06:56:58.063327 sshd[2352]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:56:58.384818 sshd[2355]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:56:58.385846 sshd[2355]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Jul 2 06:56:58.629911 kubelet[2294]: E0702 06:56:58.629829 2294 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.630788 kubelet[2294]: E0702 06:56:58.630745 2294 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/869e3041-21a1-42d1-a4fa-dc6dd66f9958-flannel-cfg podName:869e3041-21a1-42d1-a4fa-dc6dd66f9958 nodeName:}" failed. No retries permitted until 2024-07-02 06:56:59.130704569 +0000 UTC m=+14.422150314 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/869e3041-21a1-42d1-a4fa-dc6dd66f9958-flannel-cfg") pod "kube-flannel-ds-hbtm8" (UID: "869e3041-21a1-42d1-a4fa-dc6dd66f9958") : failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.662055 kubelet[2294]: E0702 06:56:58.656707 2294 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.662055 kubelet[2294]: E0702 06:56:58.657113 2294 projected.go:200] Error preparing data for projected volume kube-api-access-g5qxj for pod kube-flannel/kube-flannel-ds-hbtm8: failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.662055 kubelet[2294]: E0702 06:56:58.657253 2294 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/869e3041-21a1-42d1-a4fa-dc6dd66f9958-kube-api-access-g5qxj podName:869e3041-21a1-42d1-a4fa-dc6dd66f9958 nodeName:}" failed. No retries permitted until 2024-07-02 06:56:59.157221034 +0000 UTC m=+14.448666791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g5qxj" (UniqueName: "kubernetes.io/projected/869e3041-21a1-42d1-a4fa-dc6dd66f9958-kube-api-access-g5qxj") pod "kube-flannel-ds-hbtm8" (UID: "869e3041-21a1-42d1-a4fa-dc6dd66f9958") : failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.668880 kubelet[2294]: E0702 06:56:58.667500 2294 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.668880 kubelet[2294]: E0702 06:56:58.667550 2294 projected.go:200] Error preparing data for projected volume kube-api-access-f4nwn for pod kube-system/kube-proxy-mbbms: failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:58.668880 kubelet[2294]: E0702 06:56:58.667630 2294 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-kube-api-access-f4nwn podName:b8cf34fa-4afd-4d15-ae28-3f3d71015e38 nodeName:}" failed. No retries permitted until 2024-07-02 06:56:59.167604961 +0000 UTC m=+14.459050712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f4nwn" (UniqueName: "kubernetes.io/projected/b8cf34fa-4afd-4d15-ae28-3f3d71015e38-kube-api-access-f4nwn") pod "kube-proxy-mbbms" (UID: "b8cf34fa-4afd-4d15-ae28-3f3d71015e38") : failed to sync configmap cache: timed out waiting for the condition Jul 2 06:56:59.296529 kubelet[2294]: E0702 06:56:59.290500 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:59.296772 containerd[1278]: time="2024-07-02T06:56:59.292423406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hbtm8,Uid:869e3041-21a1-42d1-a4fa-dc6dd66f9958,Namespace:kube-flannel,Attempt:0,}" Jul 2 06:56:59.394061 containerd[1278]: time="2024-07-02T06:56:59.393705132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:59.394061 containerd[1278]: time="2024-07-02T06:56:59.393801851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:59.394061 containerd[1278]: time="2024-07-02T06:56:59.393834795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:59.394061 containerd[1278]: time="2024-07-02T06:56:59.393856103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:59.512775 systemd[1]: Started cri-containerd-b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57.scope - libcontainer container b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57. Jul 2 06:56:59.576850 kubelet[2294]: E0702 06:56:59.575263 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:59.577655 containerd[1278]: time="2024-07-02T06:56:59.577595067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbbms,Uid:b8cf34fa-4afd-4d15-ae28-3f3d71015e38,Namespace:kube-system,Attempt:0,}" Jul 2 06:56:59.690708 containerd[1278]: time="2024-07-02T06:56:59.690653477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hbtm8,Uid:869e3041-21a1-42d1-a4fa-dc6dd66f9958,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\"" Jul 2 06:56:59.693375 kubelet[2294]: E0702 06:56:59.692806 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:59.711809 containerd[1278]: time="2024-07-02T06:56:59.710655152Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 2 06:56:59.768210 containerd[1278]: time="2024-07-02T06:56:59.767735740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:56:59.768210 containerd[1278]: time="2024-07-02T06:56:59.767844647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:59.768210 containerd[1278]: time="2024-07-02T06:56:59.767873242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:56:59.768210 containerd[1278]: time="2024-07-02T06:56:59.767893026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:56:59.851693 systemd[1]: Started cri-containerd-e999467c034fcb50ab7f8fe73d1f5c54f09af607b1f6dc67e611304b4937a552.scope - libcontainer container e999467c034fcb50ab7f8fe73d1f5c54f09af607b1f6dc67e611304b4937a552. 
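Note: the entries above trace the normal kubelet-to-containerd CRI sequence for the two new pods: RunPodSandbox returns a sandbox id, the sandbox runs under a transient cri-containerd-<id>.scope unit, and the image pull (PullImage docker.io/flannel/flannel-cni-plugin:v1.1.2) happens before CreateContainer/StartContainer. Commands along these lines would show the same objects from the node (assumption: crictl is installed and pointed at the containerd socket, which the log does not confirm):

    # hypothetical inspection commands
    crictl pods --name kube-flannel-ds-hbtm8   # the pod sandbox created above
    crictl ps -a --pod <sandbox-id>            # its containers, including exited init containers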
Jul 2 06:56:59.931343 containerd[1278]: time="2024-07-02T06:56:59.930063204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbbms,Uid:b8cf34fa-4afd-4d15-ae28-3f3d71015e38,Namespace:kube-system,Attempt:0,} returns sandbox id \"e999467c034fcb50ab7f8fe73d1f5c54f09af607b1f6dc67e611304b4937a552\"" Jul 2 06:56:59.931644 kubelet[2294]: E0702 06:56:59.931002 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:56:59.940700 containerd[1278]: time="2024-07-02T06:56:59.939031161Z" level=info msg="CreateContainer within sandbox \"e999467c034fcb50ab7f8fe73d1f5c54f09af607b1f6dc67e611304b4937a552\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 06:57:00.010342 containerd[1278]: time="2024-07-02T06:57:00.009188497Z" level=info msg="CreateContainer within sandbox \"e999467c034fcb50ab7f8fe73d1f5c54f09af607b1f6dc67e611304b4937a552\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0616fc2a585e100af89eec5be6a50c7392ffb0b03025aeb7456439c47a3be909\"" Jul 2 06:57:00.016048 containerd[1278]: time="2024-07-02T06:57:00.010951605Z" level=info msg="StartContainer for \"0616fc2a585e100af89eec5be6a50c7392ffb0b03025aeb7456439c47a3be909\"" Jul 2 06:57:00.109482 systemd[1]: Started cri-containerd-0616fc2a585e100af89eec5be6a50c7392ffb0b03025aeb7456439c47a3be909.scope - libcontainer container 0616fc2a585e100af89eec5be6a50c7392ffb0b03025aeb7456439c47a3be909. Jul 2 06:57:00.223304 containerd[1278]: time="2024-07-02T06:57:00.223036826Z" level=info msg="StartContainer for \"0616fc2a585e100af89eec5be6a50c7392ffb0b03025aeb7456439c47a3be909\" returns successfully" Jul 2 06:57:00.366820 kubelet[2294]: E0702 06:57:00.366693 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:01.020413 sshd[2352]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:57:01.363860 sshd[2550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:57:02.517504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962329854.mount: Deactivated successfully. 
Jul 2 06:57:02.871108 containerd[1278]: time="2024-07-02T06:57:02.870917569Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:02.885311 containerd[1278]: time="2024-07-02T06:57:02.884142019Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jul 2 06:57:02.890949 containerd[1278]: time="2024-07-02T06:57:02.890803655Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:02.900941 containerd[1278]: time="2024-07-02T06:57:02.900861524Z" level=info msg="ImageUpdate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:02.905972 containerd[1278]: time="2024-07-02T06:57:02.905881007Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:02.909442 containerd[1278]: time="2024-07-02T06:57:02.909355329Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 3.197518938s" Jul 2 06:57:02.909961 containerd[1278]: time="2024-07-02T06:57:02.909916877Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jul 2 06:57:02.921748 containerd[1278]: time="2024-07-02T06:57:02.921674766Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 2 06:57:03.014019 containerd[1278]: time="2024-07-02T06:57:03.013932509Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d\"" Jul 2 06:57:03.016760 containerd[1278]: time="2024-07-02T06:57:03.016701499Z" level=info msg="StartContainer for \"1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d\"" Jul 2 06:57:03.077295 sshd[2352]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:57:03.113551 systemd[1]: Started cri-containerd-1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d.scope - libcontainer container 1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d. Jul 2 06:57:03.208794 systemd[1]: cri-containerd-1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d.scope: Deactivated successfully. 
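Note: install-cni-plugin is the first init container of the flannel DaemonSet; it only copies the flannel CNI binary onto the host through the "cni-plugin" host-path volume registered at 06:56:57 and then exits, which is why its cri-containerd scope is deactivated within a second of the container starting and the "shim disconnected" cleanup just below is expected rather than a failure. A hedged check of the result (the /opt/cni/bin mount path is the conventional one and an assumption; the log never prints it):

    # hypothetical: confirm the copied plugin binary on the host (path assumed)
    ls -l /opt/cni/bin/flannel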
Jul 2 06:57:03.221894 containerd[1278]: time="2024-07-02T06:57:03.220276010Z" level=info msg="StartContainer for \"1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d\" returns successfully" Jul 2 06:57:03.277082 sshd[2352]: Received disconnect from 180.101.88.240 port 12016:11: [preauth] Jul 2 06:57:03.277356 sshd[2352]: Disconnected from authenticating user root 180.101.88.240 port 12016 [preauth] Jul 2 06:57:03.283442 systemd[1]: sshd@7-143.110.155.161:22-180.101.88.240:12016.service: Deactivated successfully. Jul 2 06:57:03.328626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d-rootfs.mount: Deactivated successfully. Jul 2 06:57:03.368915 containerd[1278]: time="2024-07-02T06:57:03.368260838Z" level=info msg="shim disconnected" id=1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d namespace=k8s.io Jul 2 06:57:03.368915 containerd[1278]: time="2024-07-02T06:57:03.368355159Z" level=warning msg="cleaning up after shim disconnected" id=1d8e7c8c909cc9d27c22bbad95cd1e0ff24b7809c2c0e3d4dca4c239f82ae44d namespace=k8s.io Jul 2 06:57:03.368915 containerd[1278]: time="2024-07-02T06:57:03.368372061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:03.405997 kubelet[2294]: E0702 06:57:03.405466 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:03.432883 kubelet[2294]: I0702 06:57:03.432786 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbbms" podStartSLOduration=6.432758764 podStartE2EDuration="6.432758764s" podCreationTimestamp="2024-07-02 06:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:00.428865387 +0000 UTC m=+15.720311168" watchObservedRunningTime="2024-07-02 06:57:03.432758764 +0000 UTC m=+18.724204521" Jul 2 06:57:04.417911 kubelet[2294]: E0702 06:57:04.416805 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:04.431554 containerd[1278]: time="2024-07-02T06:57:04.428414415Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 2 06:57:07.008694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541486746.mount: Deactivated successfully. 
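Note: the 180.101.88.240:12016 connection is an unauthenticated brute-force attempt against root, cycling through pam_unix authentication failures, a pam_faillock temporary lock, three "PAM: Permission denied" rejections, and a preauth disconnect before systemd stops the per-connection sshd@7 unit; the legitimate sessions in this log all authenticate the "core" user with an RSA key. A hedged hardening sketch using standard OpenSSH directives (whether Flatcar's shipped sshd_config already sets them is not shown here):

    # /etc/ssh/sshd_config.d/90-hardening.conf -- hypothetical drop-in
    PermitRootLogin no
    PasswordAuthentication no
    MaxAuthTries 3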
Jul 2 06:57:09.506918 containerd[1278]: time="2024-07-02T06:57:09.506597825Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:09.511226 containerd[1278]: time="2024-07-02T06:57:09.511085846Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jul 2 06:57:09.520552 containerd[1278]: time="2024-07-02T06:57:09.515799299Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:09.528404 containerd[1278]: time="2024-07-02T06:57:09.528340111Z" level=info msg="ImageUpdate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:09.532777 containerd[1278]: time="2024-07-02T06:57:09.532704572Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:09.534360 containerd[1278]: time="2024-07-02T06:57:09.534285691Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.101612451s" Jul 2 06:57:09.534728 containerd[1278]: time="2024-07-02T06:57:09.534667825Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jul 2 06:57:09.548785 containerd[1278]: time="2024-07-02T06:57:09.548025046Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 06:57:09.622556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215655568.mount: Deactivated successfully. Jul 2 06:57:09.637426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265562968.mount: Deactivated successfully. Jul 2 06:57:09.680144 containerd[1278]: time="2024-07-02T06:57:09.665338510Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508\"" Jul 2 06:57:09.703011 containerd[1278]: time="2024-07-02T06:57:09.702921935Z" level=info msg="StartContainer for \"a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508\"" Jul 2 06:57:09.799594 systemd[1]: Started cri-containerd-a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508.scope - libcontainer container a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508. Jul 2 06:57:09.883201 systemd[1]: cri-containerd-a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508.scope: Deactivated successfully. 
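Note: install-cni is the second flannel init container; it writes the CNI network configuration carried by the flannel-cfg ConfigMap (mounted at 06:56:57) into the host's CNI config directory and exits, again deactivating its scope almost immediately. A sketch of that conflist as it conventionally looks for this setup -- the "cbr0" name, cniVersion 0.3.1, hairpinMode and isDefaultGateway values match the delegate config containerd logs at 06:57:22, while the file name and the portmap entry are assumptions taken from the stock kube-flannel manifest:

    # hypothetical /etc/cni/net.d/10-flannel.conflist (path and portmap entry assumed)
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }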
Jul 2 06:57:09.889249 containerd[1278]: time="2024-07-02T06:57:09.885161220Z" level=info msg="StartContainer for \"a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508\" returns successfully" Jul 2 06:57:09.912671 kubelet[2294]: I0702 06:57:09.910220 2294 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 06:57:09.998282 containerd[1278]: time="2024-07-02T06:57:09.998192349Z" level=info msg="shim disconnected" id=a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508 namespace=k8s.io Jul 2 06:57:09.998282 containerd[1278]: time="2024-07-02T06:57:09.998254587Z" level=warning msg="cleaning up after shim disconnected" id=a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508 namespace=k8s.io Jul 2 06:57:09.998282 containerd[1278]: time="2024-07-02T06:57:09.998263660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:10.007184 kubelet[2294]: I0702 06:57:10.006208 2294 topology_manager.go:215] "Topology Admit Handler" podUID="dca2134a-923b-4a02-b1b6-b66e518d8334" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7rnpd" Jul 2 06:57:10.007184 kubelet[2294]: I0702 06:57:10.007066 2294 topology_manager.go:215] "Topology Admit Handler" podUID="f0cd7f83-9871-4c65-bd50-4daae33189d3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-76cqb" Jul 2 06:57:10.037066 systemd[1]: Created slice kubepods-burstable-podf0cd7f83_9871_4c65_bd50_4daae33189d3.slice - libcontainer container kubepods-burstable-podf0cd7f83_9871_4c65_bd50_4daae33189d3.slice. Jul 2 06:57:10.058725 systemd[1]: Created slice kubepods-burstable-poddca2134a_923b_4a02_b1b6_b66e518d8334.slice - libcontainer container kubepods-burstable-poddca2134a_923b_4a02_b1b6_b66e518d8334.slice. Jul 2 06:57:10.193342 kubelet[2294]: I0702 06:57:10.193165 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dca2134a-923b-4a02-b1b6-b66e518d8334-config-volume\") pod \"coredns-7db6d8ff4d-7rnpd\" (UID: \"dca2134a-923b-4a02-b1b6-b66e518d8334\") " pod="kube-system/coredns-7db6d8ff4d-7rnpd" Jul 2 06:57:10.193342 kubelet[2294]: I0702 06:57:10.193275 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7cgq\" (UniqueName: \"kubernetes.io/projected/f0cd7f83-9871-4c65-bd50-4daae33189d3-kube-api-access-c7cgq\") pod \"coredns-7db6d8ff4d-76cqb\" (UID: \"f0cd7f83-9871-4c65-bd50-4daae33189d3\") " pod="kube-system/coredns-7db6d8ff4d-76cqb" Jul 2 06:57:10.193342 kubelet[2294]: I0702 06:57:10.193348 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzhg\" (UniqueName: \"kubernetes.io/projected/dca2134a-923b-4a02-b1b6-b66e518d8334-kube-api-access-ztzhg\") pod \"coredns-7db6d8ff4d-7rnpd\" (UID: \"dca2134a-923b-4a02-b1b6-b66e518d8334\") " pod="kube-system/coredns-7db6d8ff4d-7rnpd" Jul 2 06:57:10.193738 kubelet[2294]: I0702 06:57:10.193390 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0cd7f83-9871-4c65-bd50-4daae33189d3-config-volume\") pod \"coredns-7db6d8ff4d-76cqb\" (UID: \"f0cd7f83-9871-4c65-bd50-4daae33189d3\") " pod="kube-system/coredns-7db6d8ff4d-76cqb" Jul 2 06:57:10.352413 kubelet[2294]: E0702 06:57:10.351878 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:10.365443 containerd[1278]: time="2024-07-02T06:57:10.363918375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-76cqb,Uid:f0cd7f83-9871-4c65-bd50-4daae33189d3,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:10.372565 kubelet[2294]: E0702 06:57:10.372426 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:10.374383 containerd[1278]: time="2024-07-02T06:57:10.373371215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7rnpd,Uid:dca2134a-923b-4a02-b1b6-b66e518d8334,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:10.495443 kubelet[2294]: E0702 06:57:10.492621 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:10.510486 containerd[1278]: time="2024-07-02T06:57:10.510419850Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 2 06:57:10.569415 containerd[1278]: time="2024-07-02T06:57:10.568792747Z" level=info msg="CreateContainer within sandbox \"b2f8d8a1a524a70918229aecce314002678efb277ff3c1c4f52094983bed5c57\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf\"" Jul 2 06:57:10.574160 containerd[1278]: time="2024-07-02T06:57:10.572383691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-76cqb,Uid:f0cd7f83-9871-4c65-bd50-4daae33189d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3ace2a3d8406d50b160a7dc15768542c5a665f5b4a1d4d092285712a943c8b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 06:57:10.574160 containerd[1278]: time="2024-07-02T06:57:10.572899750Z" level=info msg="StartContainer for \"eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf\"" Jul 2 06:57:10.578664 kubelet[2294]: E0702 06:57:10.578391 2294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3ace2a3d8406d50b160a7dc15768542c5a665f5b4a1d4d092285712a943c8b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 06:57:10.578664 kubelet[2294]: E0702 06:57:10.578503 2294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3ace2a3d8406d50b160a7dc15768542c5a665f5b4a1d4d092285712a943c8b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-76cqb" Jul 2 06:57:10.578664 kubelet[2294]: E0702 06:57:10.578536 2294 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3ace2a3d8406d50b160a7dc15768542c5a665f5b4a1d4d092285712a943c8b6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-7db6d8ff4d-76cqb" Jul 2 06:57:10.578664 kubelet[2294]: E0702 06:57:10.578603 2294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-76cqb_kube-system(f0cd7f83-9871-4c65-bd50-4daae33189d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-76cqb_kube-system(f0cd7f83-9871-4c65-bd50-4daae33189d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3ace2a3d8406d50b160a7dc15768542c5a665f5b4a1d4d092285712a943c8b6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-76cqb" podUID="f0cd7f83-9871-4c65-bd50-4daae33189d3" Jul 2 06:57:10.631246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a91f914dd0903729c2f15f50b907e9fd9b9df39424c6d0eb0bfaf6925960f508-rootfs.mount: Deactivated successfully. Jul 2 06:57:10.643447 systemd[1]: run-netns-cni\x2d16c4b90e\x2d23d3\x2d1245\x2d48da\x2d0e804c1a2df3.mount: Deactivated successfully. Jul 2 06:57:10.652302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142-shm.mount: Deactivated successfully. Jul 2 06:57:10.684562 containerd[1278]: time="2024-07-02T06:57:10.684470682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7rnpd,Uid:dca2134a-923b-4a02-b1b6-b66e518d8334,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 06:57:10.685401 kubelet[2294]: E0702 06:57:10.685349 2294 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 06:57:10.685597 kubelet[2294]: E0702 06:57:10.685444 2294 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-7rnpd" Jul 2 06:57:10.685597 kubelet[2294]: E0702 06:57:10.685491 2294 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-7rnpd" Jul 2 06:57:10.685597 kubelet[2294]: E0702 06:57:10.685561 2294 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7rnpd_kube-system(dca2134a-923b-4a02-b1b6-b66e518d8334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7rnpd_kube-system(dca2134a-923b-4a02-b1b6-b66e518d8334)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"78b2d1d152e09b934954a112442bc2f6590b9dffa18a3715f828d4451ea2b142\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-7rnpd" podUID="dca2134a-923b-4a02-b1b6-b66e518d8334" Jul 2 06:57:10.718046 systemd[1]: run-containerd-runc-k8s.io-eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf-runc.MhYiWh.mount: Deactivated successfully. Jul 2 06:57:10.726648 systemd[1]: Started cri-containerd-eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf.scope - libcontainer container eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf. Jul 2 06:57:10.827850 containerd[1278]: time="2024-07-02T06:57:10.827766827Z" level=info msg="StartContainer for \"eb0dfbd7462cd2f0516294fa2d858bff7fca033cfc7ae567bfa82529d3c9e4bf\" returns successfully" Jul 2 06:57:11.504445 kubelet[2294]: E0702 06:57:11.500856 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:11.986159 systemd-networkd[1093]: flannel.1: Link UP Jul 2 06:57:11.986170 systemd-networkd[1093]: flannel.1: Gained carrier Jul 2 06:57:12.505501 kubelet[2294]: E0702 06:57:12.505471 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:14.025553 systemd-networkd[1093]: flannel.1: Gained IPv6LL Jul 2 06:57:19.791831 sshd[2342]: kex_exchange_identification: read: Connection reset by peer Jul 2 06:57:19.791831 sshd[2342]: Connection reset by 60.191.20.210 port 23456 Jul 2 06:57:19.793447 systemd[1]: sshd@6-143.110.155.161:22-60.191.20.210:23456.service: Deactivated successfully. 
Jul 2 06:57:22.159451 kubelet[2294]: E0702 06:57:22.157638 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:22.161400 containerd[1278]: time="2024-07-02T06:57:22.160666426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7rnpd,Uid:dca2134a-923b-4a02-b1b6-b66e518d8334,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:22.315269 systemd-networkd[1093]: cni0: Link UP Jul 2 06:57:22.356004 systemd-networkd[1093]: veth17ba0bd6: Link UP Jul 2 06:57:22.360132 kernel: cni0: port 1(veth17ba0bd6) entered blocking state Jul 2 06:57:22.360319 kernel: cni0: port 1(veth17ba0bd6) entered disabled state Jul 2 06:57:22.371892 kernel: device veth17ba0bd6 entered promiscuous mode Jul 2 06:57:22.379474 kernel: cni0: port 1(veth17ba0bd6) entered blocking state Jul 2 06:57:22.379626 kernel: cni0: port 1(veth17ba0bd6) entered forwarding state Jul 2 06:57:22.385221 kernel: cni0: port 1(veth17ba0bd6) entered disabled state Jul 2 06:57:22.419293 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth17ba0bd6: link becomes ready Jul 2 06:57:22.420258 kernel: cni0: port 1(veth17ba0bd6) entered blocking state Jul 2 06:57:22.420341 kernel: cni0: port 1(veth17ba0bd6) entered forwarding state Jul 2 06:57:22.420376 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cni0: link becomes ready Jul 2 06:57:22.421957 systemd-networkd[1093]: veth17ba0bd6: Gained carrier Jul 2 06:57:22.422471 systemd-networkd[1093]: cni0: Gained carrier Jul 2 06:57:22.433911 containerd[1278]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c928), "name":"cbr0", "type":"bridge"} Jul 2 06:57:22.433911 containerd[1278]: delegateAdd: netconf sent to delegate plugin: Jul 2 06:57:22.479876 containerd[1278]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T06:57:22.476642994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:22.479876 containerd[1278]: time="2024-07-02T06:57:22.476769352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.479876 containerd[1278]: time="2024-07-02T06:57:22.476798700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:22.479876 containerd[1278]: time="2024-07-02T06:57:22.476812933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.522081 systemd[1]: Started cri-containerd-346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b.scope - libcontainer container 346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b. 
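Note: the two failed coredns RunPodSandbox attempts at 06:57:10 were the expected startup race: the flannel CNI plugin cannot compute a pod address until the kube-flannel daemon (container eb0dfbd7...) has written /run/flannel/subnet.env, and that file only exists once flannel is running and the flannel.1 VXLAN device is up (its MTU of 1450 is the usual 1500 minus 50 bytes of VXLAN overhead). The kubelet simply retries, and at 06:57:22 the coredns-7rnpd sandbox goes through: the plugin creates the cni0 bridge and a veth pair (hence the port/promiscuous-mode kernel messages above) and delegates interface and address setup to the bridge and host-local plugins. Two sketches follow. First, the file the plugin had been waiting for; the /17 network, /24 node subnet and 1450 MTU are inferred from the delegate config logged above, and the remaining contents are assumptions:

    # hypothetical /run/flannel/subnet.env
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    # value of the IPMASQ flag is an assumption
    FLANNEL_IPMASQ=true

Second, the delegate netconf containerd logs above, reflowed for readability with no values changed:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [ [ { "subnet": "192.168.0.0/24" } ] ],
        "routes": [ { "dst": "192.168.0.0/17" } ]
      }
    }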
Jul 2 06:57:22.527262 systemd[1]: run-containerd-runc-k8s.io-346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b-runc.IlVk4W.mount: Deactivated successfully. Jul 2 06:57:22.628977 containerd[1278]: time="2024-07-02T06:57:22.628844883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7rnpd,Uid:dca2134a-923b-4a02-b1b6-b66e518d8334,Namespace:kube-system,Attempt:0,} returns sandbox id \"346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b\"" Jul 2 06:57:22.633506 kubelet[2294]: E0702 06:57:22.631286 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:22.639894 containerd[1278]: time="2024-07-02T06:57:22.639303669Z" level=info msg="CreateContainer within sandbox \"346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:57:22.754340 containerd[1278]: time="2024-07-02T06:57:22.753970510Z" level=info msg="CreateContainer within sandbox \"346e48018dc7bf4b28effbb3f48f6ffd333399385c138eebcc5cf7d6e9f9101b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"849b31bcd5552337daf7f3825aedb3b41f1375b3621f2a5b05ace5fc98082ff0\"" Jul 2 06:57:22.766290 containerd[1278]: time="2024-07-02T06:57:22.764194096Z" level=info msg="StartContainer for \"849b31bcd5552337daf7f3825aedb3b41f1375b3621f2a5b05ace5fc98082ff0\"" Jul 2 06:57:22.823439 systemd[1]: Started cri-containerd-849b31bcd5552337daf7f3825aedb3b41f1375b3621f2a5b05ace5fc98082ff0.scope - libcontainer container 849b31bcd5552337daf7f3825aedb3b41f1375b3621f2a5b05ace5fc98082ff0. Jul 2 06:57:22.929847 containerd[1278]: time="2024-07-02T06:57:22.929538237Z" level=info msg="StartContainer for \"849b31bcd5552337daf7f3825aedb3b41f1375b3621f2a5b05ace5fc98082ff0\" returns successfully" Jul 2 06:57:23.165439 kubelet[2294]: E0702 06:57:23.165397 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:23.172859 containerd[1278]: time="2024-07-02T06:57:23.169276355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-76cqb,Uid:f0cd7f83-9871-4c65-bd50-4daae33189d3,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:23.334952 systemd-networkd[1093]: veth20742b2c: Link UP Jul 2 06:57:23.350449 kernel: cni0: port 2(veth20742b2c) entered blocking state Jul 2 06:57:23.350649 kernel: cni0: port 2(veth20742b2c) entered disabled state Jul 2 06:57:23.350692 kernel: device veth20742b2c entered promiscuous mode Jul 2 06:57:23.402620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:57:23.402810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth20742b2c: link becomes ready Jul 2 06:57:23.402856 kernel: cni0: port 2(veth20742b2c) entered blocking state Jul 2 06:57:23.402887 kernel: cni0: port 2(veth20742b2c) entered forwarding state Jul 2 06:57:23.403159 systemd-networkd[1093]: veth20742b2c: Gained carrier Jul 2 06:57:23.408654 containerd[1278]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, 
"type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Jul 2 06:57:23.408654 containerd[1278]: delegateAdd: netconf sent to delegate plugin: Jul 2 06:57:23.475122 containerd[1278]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T06:57:23.474719821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:23.475122 containerd[1278]: time="2024-07-02T06:57:23.474831604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:23.475122 containerd[1278]: time="2024-07-02T06:57:23.474871930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:23.475122 containerd[1278]: time="2024-07-02T06:57:23.474906984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:23.564152 systemd[1]: Started cri-containerd-fa5c3429f629450e75e3ff4a0e14886abb67a2c6c605701b42b90aad327bd1ef.scope - libcontainer container fa5c3429f629450e75e3ff4a0e14886abb67a2c6c605701b42b90aad327bd1ef. Jul 2 06:57:23.576185 kubelet[2294]: E0702 06:57:23.575392 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:23.638443 kubelet[2294]: I0702 06:57:23.638350 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hbtm8" podStartSLOduration=16.800774198 podStartE2EDuration="26.638290826s" podCreationTimestamp="2024-07-02 06:56:57 +0000 UTC" firstStartedPulling="2024-07-02 06:56:59.699000276 +0000 UTC m=+14.990446007" lastFinishedPulling="2024-07-02 06:57:09.53651689 +0000 UTC m=+24.827962635" observedRunningTime="2024-07-02 06:57:11.531316517 +0000 UTC m=+26.822762275" watchObservedRunningTime="2024-07-02 06:57:23.638290826 +0000 UTC m=+38.929736583" Jul 2 06:57:23.708195 containerd[1278]: time="2024-07-02T06:57:23.708131609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-76cqb,Uid:f0cd7f83-9871-4c65-bd50-4daae33189d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa5c3429f629450e75e3ff4a0e14886abb67a2c6c605701b42b90aad327bd1ef\"" Jul 2 06:57:23.754801 kubelet[2294]: I0702 06:57:23.754625 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7rnpd" podStartSLOduration=26.754593435 podStartE2EDuration="26.754593435s" podCreationTimestamp="2024-07-02 06:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:23.646139382 +0000 UTC m=+38.937585138" watchObservedRunningTime="2024-07-02 06:57:23.754593435 +0000 UTC m=+39.046039189" Jul 2 06:57:23.759035 kubelet[2294]: E0702 06:57:23.758973 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:23.791329 containerd[1278]: 
time="2024-07-02T06:57:23.791004952Z" level=info msg="CreateContainer within sandbox \"fa5c3429f629450e75e3ff4a0e14886abb67a2c6c605701b42b90aad327bd1ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:57:23.875376 containerd[1278]: time="2024-07-02T06:57:23.875312353Z" level=info msg="CreateContainer within sandbox \"fa5c3429f629450e75e3ff4a0e14886abb67a2c6c605701b42b90aad327bd1ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd4d82ff7812b03b67b59e3520b954ca5e756a555ebe4f924bc56a1d278e8e64\"" Jul 2 06:57:23.888654 containerd[1278]: time="2024-07-02T06:57:23.888571481Z" level=info msg="StartContainer for \"fd4d82ff7812b03b67b59e3520b954ca5e756a555ebe4f924bc56a1d278e8e64\"" Jul 2 06:57:24.029547 systemd[1]: Started cri-containerd-fd4d82ff7812b03b67b59e3520b954ca5e756a555ebe4f924bc56a1d278e8e64.scope - libcontainer container fd4d82ff7812b03b67b59e3520b954ca5e756a555ebe4f924bc56a1d278e8e64. Jul 2 06:57:24.120236 containerd[1278]: time="2024-07-02T06:57:24.119409478Z" level=info msg="StartContainer for \"fd4d82ff7812b03b67b59e3520b954ca5e756a555ebe4f924bc56a1d278e8e64\" returns successfully" Jul 2 06:57:24.130825 systemd-networkd[1093]: veth17ba0bd6: Gained IPv6LL Jul 2 06:57:24.131321 systemd-networkd[1093]: cni0: Gained IPv6LL Jul 2 06:57:24.612417 kubelet[2294]: E0702 06:57:24.612125 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:24.618459 kubelet[2294]: E0702 06:57:24.618414 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:24.640346 systemd-networkd[1093]: veth20742b2c: Gained IPv6LL Jul 2 06:57:24.706309 kubelet[2294]: I0702 06:57:24.706207 2294 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-76cqb" podStartSLOduration=27.706083051 podStartE2EDuration="27.706083051s" podCreationTimestamp="2024-07-02 06:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:24.649160324 +0000 UTC m=+39.940606085" watchObservedRunningTime="2024-07-02 06:57:24.706083051 +0000 UTC m=+39.997528806" Jul 2 06:57:25.638225 kubelet[2294]: E0702 06:57:25.633670 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:25.638225 kubelet[2294]: E0702 06:57:25.635571 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:26.638064 kubelet[2294]: E0702 06:57:26.637914 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:57:45.446288 systemd[1]: Started sshd@8-143.110.155.161:22-147.75.109.163:34572.service - OpenSSH per-connection server daemon (147.75.109.163:34572). 
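Note: the pod_startup_latency_tracker figures can be checked directly against the logged timestamps: podStartSLOduration is the time from podCreationTimestamp to the observed running time, minus time spent pulling images. For coredns-7db6d8ff4d-76cqb, 06:57:24.706083051 - 06:56:57 = 27.706083051 s, and because both pull timestamps are the zero value (no image pull was recorded) the SLO and E2E durations are identical. For kube-flannel-ds-hbtm8, subtracting the roughly 9.84 s spent pulling its two images (06:56:59.699 to 06:57:09.536) from the 26.638 s E2E duration gives the reported 16.80 s SLO figure to within rounding.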
Jul 2 06:57:45.583346 sshd[3289]: Accepted publickey for core from 147.75.109.163 port 34572 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:57:45.589670 sshd[3289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:45.605703 systemd-logind[1271]: New session 6 of user core. Jul 2 06:57:45.616890 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 06:57:46.032614 sshd[3289]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:46.051526 systemd[1]: sshd@8-143.110.155.161:22-147.75.109.163:34572.service: Deactivated successfully. Jul 2 06:57:46.053194 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 06:57:46.078026 systemd-logind[1271]: Session 6 logged out. Waiting for processes to exit. Jul 2 06:57:46.080222 systemd-logind[1271]: Removed session 6. Jul 2 06:57:50.509092 systemd[1]: Started sshd@9-143.110.155.161:22-180.101.88.240:29423.service - OpenSSH per-connection server daemon (180.101.88.240:29423). Jul 2 06:57:51.086774 systemd[1]: Started sshd@10-143.110.155.161:22-147.75.109.163:34582.service - OpenSSH per-connection server daemon (147.75.109.163:34582). Jul 2 06:57:51.189942 sshd[3325]: Accepted publickey for core from 147.75.109.163 port 34582 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:57:51.197490 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:51.221704 systemd-logind[1271]: New session 7 of user core. Jul 2 06:57:51.234993 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 06:57:51.597323 sshd[3325]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:51.606070 systemd[1]: sshd@10-143.110.155.161:22-147.75.109.163:34582.service: Deactivated successfully. Jul 2 06:57:51.607688 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 06:57:51.611657 systemd-logind[1271]: Session 7 logged out. Waiting for processes to exit. Jul 2 06:57:51.619369 systemd-logind[1271]: Removed session 7. Jul 2 06:57:51.755940 sshd[3336]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:57:53.667174 sshd[3322]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:57:54.004307 sshd[3359]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:57:55.856240 sshd[3322]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:57:56.189664 sshd[3360]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root Jul 2 06:57:56.645307 systemd[1]: Started sshd@11-143.110.155.161:22-147.75.109.163:56996.service - OpenSSH per-connection server daemon (147.75.109.163:56996). Jul 2 06:57:56.827008 sshd[3362]: Accepted publickey for core from 147.75.109.163 port 56996 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:57:56.832175 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:56.857377 systemd-logind[1271]: New session 8 of user core. Jul 2 06:57:56.876005 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 06:57:57.299823 sshd[3362]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:57.306390 systemd[1]: sshd@11-143.110.155.161:22-147.75.109.163:56996.service: Deactivated successfully. Jul 2 06:57:57.307650 systemd[1]: session-8.scope: Deactivated successfully. 
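Note: the rest of the log is dominated by SSH session churn from 147.75.109.163 (sessions 6 through 17): each inbound connection gets its own per-connection sshd@N-... unit instance (typically spawned from sshd.socket on Flatcar), and after publickey authentication of "core" systemd-logind opens a numbered session in a transient session-N.scope; closing the connection tears both down, which is why every login is bracketed by matching "Started"/"Deactivated successfully" pairs. The parallel root brute-force attempt from 180.101.88.240 (sshd@9) ends the same way as the earlier one just below. A hedged way to see the same sessions from the host (loginctl is standard systemd; the log does not show it being used here):

    loginctl list-sessions    # hypothetical check; lists active logind sessions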
Jul 2 06:57:57.314502 systemd-logind[1271]: Session 8 logged out. Waiting for processes to exit. Jul 2 06:57:57.318262 systemd-logind[1271]: Removed session 8. Jul 2 06:57:58.119806 sshd[3322]: PAM: Permission denied for root from 180.101.88.240 Jul 2 06:57:58.290880 sshd[3322]: Received disconnect from 180.101.88.240 port 29423:11: [preauth] Jul 2 06:57:58.290880 sshd[3322]: Disconnected from authenticating user root 180.101.88.240 port 29423 [preauth] Jul 2 06:57:58.292827 systemd[1]: sshd@9-143.110.155.161:22-180.101.88.240:29423.service: Deactivated successfully. Jul 2 06:58:01.161475 kubelet[2294]: E0702 06:58:01.159846 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jul 2 06:58:02.360668 systemd[1]: Started sshd@12-143.110.155.161:22-147.75.109.163:57008.service - OpenSSH per-connection server daemon (147.75.109.163:57008). Jul 2 06:58:02.447614 sshd[3399]: Accepted publickey for core from 147.75.109.163 port 57008 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:58:02.451204 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:02.475065 systemd-logind[1271]: New session 9 of user core. Jul 2 06:58:02.486416 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 06:58:02.843994 sshd[3399]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:02.874026 systemd[1]: sshd@12-143.110.155.161:22-147.75.109.163:57008.service: Deactivated successfully. Jul 2 06:58:02.891847 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 06:58:02.905067 systemd-logind[1271]: Session 9 logged out. Waiting for processes to exit. Jul 2 06:58:02.907617 systemd-logind[1271]: Removed session 9. Jul 2 06:58:02.951977 systemd[1]: Started sshd@13-143.110.155.161:22-147.75.109.163:41464.service - OpenSSH per-connection server daemon (147.75.109.163:41464). Jul 2 06:58:03.065263 sshd[3418]: Accepted publickey for core from 147.75.109.163 port 41464 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:58:03.070283 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:03.088869 systemd-logind[1271]: New session 10 of user core. Jul 2 06:58:03.091531 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 06:58:03.528589 sshd[3418]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:03.560256 systemd[1]: Started sshd@14-143.110.155.161:22-147.75.109.163:41472.service - OpenSSH per-connection server daemon (147.75.109.163:41472). Jul 2 06:58:03.562063 systemd[1]: sshd@13-143.110.155.161:22-147.75.109.163:41464.service: Deactivated successfully. Jul 2 06:58:03.564051 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 06:58:03.574206 systemd-logind[1271]: Session 10 logged out. Waiting for processes to exit. Jul 2 06:58:03.575867 systemd-logind[1271]: Removed session 10. Jul 2 06:58:03.659065 sshd[3441]: Accepted publickey for core from 147.75.109.163 port 41472 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8 Jul 2 06:58:03.662349 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:03.717618 systemd-logind[1271]: New session 11 of user core. Jul 2 06:58:03.722589 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 06:58:04.028335 sshd[3441]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:04.058140 systemd[1]: sshd@14-143.110.155.161:22-147.75.109.163:41472.service: Deactivated successfully.
Jul 2 06:58:04.059370 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 06:58:04.070578 systemd-logind[1271]: Session 11 logged out. Waiting for processes to exit.
Jul 2 06:58:04.074839 systemd-logind[1271]: Removed session 11.
Jul 2 06:58:05.161209 kubelet[2294]: E0702 06:58:05.160602 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:09.051171 systemd[1]: Started sshd@15-143.110.155.161:22-147.75.109.163:41478.service - OpenSSH per-connection server daemon (147.75.109.163:41478).
Jul 2 06:58:09.130760 sshd[3475]: Accepted publickey for core from 147.75.109.163 port 41478 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:09.135601 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:09.163853 systemd-logind[1271]: New session 12 of user core.
Jul 2 06:58:09.165632 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 2 06:58:09.464316 sshd[3475]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:09.474854 systemd[1]: sshd@15-143.110.155.161:22-147.75.109.163:41478.service: Deactivated successfully.
Jul 2 06:58:09.476564 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 06:58:09.490502 systemd-logind[1271]: Session 12 logged out. Waiting for processes to exit.
Jul 2 06:58:09.495716 systemd-logind[1271]: Removed session 12.
Jul 2 06:58:14.486257 systemd[1]: Started sshd@16-143.110.155.161:22-147.75.109.163:48414.service - OpenSSH per-connection server daemon (147.75.109.163:48414).
Jul 2 06:58:14.576678 sshd[3508]: Accepted publickey for core from 147.75.109.163 port 48414 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:14.581733 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:14.596234 systemd-logind[1271]: New session 13 of user core.
Jul 2 06:58:14.605032 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 2 06:58:14.876531 sshd[3508]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:14.885724 systemd[1]: sshd@16-143.110.155.161:22-147.75.109.163:48414.service: Deactivated successfully.
Jul 2 06:58:14.886979 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 06:58:14.890756 systemd-logind[1271]: Session 13 logged out. Waiting for processes to exit.
Jul 2 06:58:14.892524 systemd-logind[1271]: Removed session 13.
Jul 2 06:58:17.168343 kubelet[2294]: E0702 06:58:17.168284 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:19.910156 systemd[1]: Started sshd@17-143.110.155.161:22-147.75.109.163:48424.service - OpenSSH per-connection server daemon (147.75.109.163:48424).
Jul 2 06:58:19.986785 sshd[3541]: Accepted publickey for core from 147.75.109.163 port 48424 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:19.994989 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:20.018624 systemd-logind[1271]: New session 14 of user core.
Jul 2 06:58:20.032702 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 2 06:58:20.158654 kubelet[2294]: E0702 06:58:20.158593 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:20.347679 sshd[3541]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:20.373661 systemd[1]: Started sshd@18-143.110.155.161:22-147.75.109.163:48426.service - OpenSSH per-connection server daemon (147.75.109.163:48426).
Jul 2 06:58:20.380416 systemd[1]: sshd@17-143.110.155.161:22-147.75.109.163:48424.service: Deactivated successfully.
Jul 2 06:58:20.388616 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 06:58:20.393347 systemd-logind[1271]: Session 14 logged out. Waiting for processes to exit.
Jul 2 06:58:20.394865 systemd-logind[1271]: Removed session 14.
Jul 2 06:58:20.479027 sshd[3552]: Accepted publickey for core from 147.75.109.163 port 48426 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:20.483795 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:20.497074 systemd-logind[1271]: New session 15 of user core.
Jul 2 06:58:20.503548 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 2 06:58:21.385863 sshd[3552]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:21.412464 systemd[1]: sshd@18-143.110.155.161:22-147.75.109.163:48426.service: Deactivated successfully.
Jul 2 06:58:21.414976 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 06:58:21.419051 systemd-logind[1271]: Session 15 logged out. Waiting for processes to exit.
Jul 2 06:58:21.427812 systemd[1]: Started sshd@19-143.110.155.161:22-147.75.109.163:48442.service - OpenSSH per-connection server daemon (147.75.109.163:48442).
Jul 2 06:58:21.433261 systemd-logind[1271]: Removed session 15.
Jul 2 06:58:21.590469 sshd[3563]: Accepted publickey for core from 147.75.109.163 port 48442 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:21.576680 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:21.617288 systemd-logind[1271]: New session 16 of user core.
Jul 2 06:58:21.619855 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 2 06:58:24.830354 sshd[3563]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:24.843617 systemd[1]: sshd@19-143.110.155.161:22-147.75.109.163:48442.service: Deactivated successfully.
Jul 2 06:58:24.843677 systemd-logind[1271]: Session 16 logged out. Waiting for processes to exit.
Jul 2 06:58:24.847570 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 06:58:24.860003 systemd[1]: Started sshd@20-143.110.155.161:22-147.75.109.163:33562.service - OpenSSH per-connection server daemon (147.75.109.163:33562).
Jul 2 06:58:24.863266 systemd-logind[1271]: Removed session 16.
Jul 2 06:58:24.964582 sshd[3601]: Accepted publickey for core from 147.75.109.163 port 33562 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:24.973754 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:24.990126 systemd-logind[1271]: New session 17 of user core.
Jul 2 06:58:24.995847 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 06:58:25.684741 sshd[3601]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:25.721244 systemd[1]: Started sshd@21-143.110.155.161:22-147.75.109.163:33570.service - OpenSSH per-connection server daemon (147.75.109.163:33570).
Jul 2 06:58:25.723055 systemd[1]: sshd@20-143.110.155.161:22-147.75.109.163:33562.service: Deactivated successfully.
Jul 2 06:58:25.726211 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 06:58:25.729923 systemd-logind[1271]: Session 17 logged out. Waiting for processes to exit.
Jul 2 06:58:25.743282 systemd-logind[1271]: Removed session 17.
Jul 2 06:58:25.839591 sshd[3610]: Accepted publickey for core from 147.75.109.163 port 33570 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:25.844461 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:25.867632 systemd-logind[1271]: New session 18 of user core.
Jul 2 06:58:25.886570 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 06:58:26.211520 sshd[3610]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:26.222123 systemd[1]: sshd@21-143.110.155.161:22-147.75.109.163:33570.service: Deactivated successfully.
Jul 2 06:58:26.223353 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 06:58:26.227926 systemd-logind[1271]: Session 18 logged out. Waiting for processes to exit.
Jul 2 06:58:26.230553 systemd-logind[1271]: Removed session 18.
Jul 2 06:58:31.230855 systemd[1]: Started sshd@22-143.110.155.161:22-147.75.109.163:33582.service - OpenSSH per-connection server daemon (147.75.109.163:33582).
Jul 2 06:58:31.330720 sshd[3646]: Accepted publickey for core from 147.75.109.163 port 33582 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:31.334698 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:31.350233 systemd-logind[1271]: New session 19 of user core.
Jul 2 06:58:31.368513 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 06:58:31.639486 sshd[3646]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:31.651087 systemd[1]: sshd@22-143.110.155.161:22-147.75.109.163:33582.service: Deactivated successfully.
Jul 2 06:58:31.652721 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 06:58:31.654629 systemd-logind[1271]: Session 19 logged out. Waiting for processes to exit.
Jul 2 06:58:31.667363 systemd-logind[1271]: Removed session 19.
Jul 2 06:58:34.160225 kubelet[2294]: E0702 06:58:34.160087 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:36.657541 systemd[1]: Started sshd@23-143.110.155.161:22-147.75.109.163:41686.service - OpenSSH per-connection server daemon (147.75.109.163:41686).
Jul 2 06:58:36.780799 sshd[3679]: Accepted publickey for core from 147.75.109.163 port 41686 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:36.785841 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:36.803624 systemd-logind[1271]: New session 20 of user core.
Jul 2 06:58:36.816745 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 06:58:37.154122 sshd[3679]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:37.184073 systemd-logind[1271]: Session 20 logged out. Waiting for processes to exit.
Jul 2 06:58:37.184443 systemd[1]: sshd@23-143.110.155.161:22-147.75.109.163:41686.service: Deactivated successfully.
Jul 2 06:58:37.186125 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 06:58:37.188251 systemd-logind[1271]: Removed session 20.
Jul 2 06:58:39.159017 kubelet[2294]: E0702 06:58:39.158741 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:42.185624 systemd[1]: Started sshd@24-143.110.155.161:22-147.75.109.163:41688.service - OpenSSH per-connection server daemon (147.75.109.163:41688).
Jul 2 06:58:42.255273 sshd[3715]: Accepted publickey for core from 147.75.109.163 port 41688 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:42.260729 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:42.278222 systemd-logind[1271]: New session 21 of user core.
Jul 2 06:58:42.287486 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 06:58:42.578122 sshd[3715]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:42.583910 systemd[1]: sshd@24-143.110.155.161:22-147.75.109.163:41688.service: Deactivated successfully.
Jul 2 06:58:42.585612 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 06:58:42.588116 systemd-logind[1271]: Session 21 logged out. Waiting for processes to exit.
Jul 2 06:58:42.590429 systemd-logind[1271]: Removed session 21.
Jul 2 06:58:46.487451 systemd[1]: Started sshd@25-143.110.155.161:22-180.101.88.240:47044.service - OpenSSH per-connection server daemon (180.101.88.240:47044).
Jul 2 06:58:47.598543 systemd[1]: Started sshd@26-143.110.155.161:22-147.75.109.163:50738.service - OpenSSH per-connection server daemon (147.75.109.163:50738).
Jul 2 06:58:47.696078 sshd[3754]: Accepted publickey for core from 147.75.109.163 port 50738 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:47.702075 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:47.722780 systemd-logind[1271]: New session 22 of user core.
Jul 2 06:58:47.730830 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 2 06:58:47.759999 sshd[3753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root
Jul 2 06:58:48.030471 sshd[3754]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:48.043786 systemd[1]: sshd@26-143.110.155.161:22-147.75.109.163:50738.service: Deactivated successfully.
Jul 2 06:58:48.046687 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 06:58:48.051713 systemd-logind[1271]: Session 22 logged out. Waiting for processes to exit.
Jul 2 06:58:48.054951 systemd-logind[1271]: Removed session 22.
Jul 2 06:58:48.163692 kubelet[2294]: E0702 06:58:48.163611 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jul 2 06:58:49.696959 sshd[3750]: PAM: Permission denied for root from 180.101.88.240
Jul 2 06:58:50.029537 sshd[3786]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root
Jul 2 06:58:52.233419 sshd[3750]: PAM: Permission denied for root from 180.101.88.240
Jul 2 06:58:52.573899 sshd[3787]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=180.101.88.240 user=root
Jul 2 06:58:53.058643 systemd[1]: Started sshd@27-143.110.155.161:22-147.75.109.163:41230.service - OpenSSH per-connection server daemon (147.75.109.163:41230).
Jul 2 06:58:53.141417 sshd[3795]: Accepted publickey for core from 147.75.109.163 port 41230 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:53.150695 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:53.177463 systemd-logind[1271]: New session 23 of user core.
Jul 2 06:58:53.184021 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 2 06:58:53.491876 sshd[3795]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:53.502676 systemd-logind[1271]: Session 23 logged out. Waiting for processes to exit.
Jul 2 06:58:53.505333 systemd[1]: sshd@27-143.110.155.161:22-147.75.109.163:41230.service: Deactivated successfully.
Jul 2 06:58:53.516921 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 06:58:53.520734 systemd-logind[1271]: Removed session 23.
Jul 2 06:58:54.523386 sshd[3750]: PAM: Permission denied for root from 180.101.88.240
Jul 2 06:58:55.070019 sshd[3750]: Received disconnect from 180.101.88.240 port 47044:11: [preauth]
Jul 2 06:58:55.070019 sshd[3750]: Disconnected from authenticating user root 180.101.88.240 port 47044 [preauth]
Jul 2 06:58:55.072748 systemd[1]: sshd@25-143.110.155.161:22-180.101.88.240:47044.service: Deactivated successfully.
Jul 2 06:58:58.509873 systemd[1]: Started sshd@28-143.110.155.161:22-147.75.109.163:41246.service - OpenSSH per-connection server daemon (147.75.109.163:41246).
Jul 2 06:58:58.612067 sshd[3829]: Accepted publickey for core from 147.75.109.163 port 41246 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:58:58.617435 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:58:58.634514 systemd-logind[1271]: New session 24 of user core.
Jul 2 06:58:58.640503 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 2 06:58:58.976606 sshd[3829]: pam_unix(sshd:session): session closed for user core
Jul 2 06:58:58.983757 systemd[1]: sshd@28-143.110.155.161:22-147.75.109.163:41246.service: Deactivated successfully.
Jul 2 06:58:58.985674 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 06:58:58.994308 systemd-logind[1271]: Session 24 logged out. Waiting for processes to exit.
Jul 2 06:58:58.997744 systemd-logind[1271]: Removed session 24.
Jul 2 06:59:03.998959 systemd[1]: Started sshd@29-143.110.155.161:22-147.75.109.163:36844.service - OpenSSH per-connection server daemon (147.75.109.163:36844).
Jul 2 06:59:04.088884 sshd[3863]: Accepted publickey for core from 147.75.109.163 port 36844 ssh2: RSA SHA256:9wVRA1FchLLJ6jdCWlQNRM/6zHeLSJi0lH1WDEk/EM8
Jul 2 06:59:04.093649 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:59:04.118270 systemd-logind[1271]: New session 25 of user core.
Jul 2 06:59:04.119424 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 2 06:59:04.366391 systemd-logind[1271]: Session 25 logged out. Waiting for processes to exit.
Jul 2 06:59:04.362490 sshd[3863]: pam_unix(sshd:session): session closed for user core
Jul 2 06:59:04.368170 systemd[1]: sshd@29-143.110.155.161:22-147.75.109.163:36844.service: Deactivated successfully.
Jul 2 06:59:04.369182 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 06:59:04.371009 systemd-logind[1271]: Removed session 25.
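
The interleaved pam_unix(sshd:auth) failures and "PAM: Permission denied for root" lines from 180.101.88.240 earlier in this capture (the connections on ports 29423 and 47044) are a routine password-guessing run against root; it gets nowhere because this host only accepts the core user's public key. A small sketch for pulling repeat offenders out of a saved journal like this one; the threshold and file name are arbitrary illustrative choices:

import re
from collections import Counter

# Match the pam_unix auth-failure records seen above and capture the source host.
FAIL_RE = re.compile(r"pam_unix\(sshd:auth\): authentication failure;.*\brhost=([\d.]+)")

def failed_auth_hosts(log_path, threshold=3):
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            m = FAIL_RE.search(line)
            if m:
                counts[m.group(1)] += 1
    return {host: n for host, n in counts.items() if n >= threshold}

# For this capture it would report 180.101.88.240 with six failures,
# e.g. failed_auth_hosts("journal.txt") -> {'180.101.88.240': 6}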