Jul 2 07:53:20.098236 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:53:20.098264 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:20.098275 kernel: BIOS-provided physical RAM map: Jul 2 07:53:20.098281 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:53:20.098288 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 2 07:53:20.098295 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jul 2 07:53:20.098308 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jul 2 07:53:20.098314 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 2 07:53:20.098320 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 2 07:53:20.098328 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 2 07:53:20.098335 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 2 07:53:20.098341 kernel: printk: bootconsole [earlyser0] enabled Jul 2 07:53:20.098349 kernel: NX (Execute Disable) protection: active Jul 2 07:53:20.098355 kernel: efi: EFI v2.70 by Microsoft Jul 2 07:53:20.098367 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Jul 2 07:53:20.098375 kernel: random: crng init done Jul 2 07:53:20.098383 kernel: SMBIOS 3.1.0 present. 
Jul 2 07:53:20.098403 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jul 2 07:53:20.098409 kernel: Hypervisor detected: Microsoft Hyper-V Jul 2 07:53:20.098417 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jul 2 07:53:20.098425 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Jul 2 07:53:20.098431 kernel: Hyper-V: Nested features: 0x1e0101 Jul 2 07:53:20.098442 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 2 07:53:20.098448 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 2 07:53:20.098455 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 2 07:53:20.098465 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jul 2 07:53:20.098472 kernel: tsc: Detected 2593.906 MHz processor Jul 2 07:53:20.098482 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:53:20.098489 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:53:20.098495 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jul 2 07:53:20.098505 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:53:20.098511 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jul 2 07:53:20.098523 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jul 2 07:53:20.098530 kernel: Using GB pages for direct mapping Jul 2 07:53:20.098536 kernel: Secure boot disabled Jul 2 07:53:20.098547 kernel: ACPI: Early table checksum verification disabled Jul 2 07:53:20.098554 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jul 2 07:53:20.098562 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098570 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098576 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 2 07:53:20.098593 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 2 07:53:20.098600 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098610 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098617 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098624 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098631 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098644 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098651 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:53:20.098661 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 2 07:53:20.098668 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jul 2 07:53:20.098675 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 2 07:53:20.098682 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 2 07:53:20.098691 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 2 07:53:20.098698 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 2 07:53:20.098711 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jul 2 07:53:20.098718 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jul 2 07:53:20.098726 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 2 07:53:20.098737 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jul 2 07:53:20.098751 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:53:20.098765 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:53:20.098779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 2 07:53:20.098793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jul 2 07:53:20.098806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jul 2 07:53:20.098823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 2 07:53:20.098838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 2 07:53:20.098851 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 2 07:53:20.098864 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 2 07:53:20.098879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 2 07:53:20.098893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 2 07:53:20.098906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 2 07:53:20.098919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 2 07:53:20.098934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 2 07:53:20.098952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jul 2 07:53:20.098965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jul 2 07:53:20.098979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jul 2 07:53:20.098992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jul 2 07:53:20.099006 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jul 2 07:53:20.099020 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jul 2 07:53:20.099034 kernel: Zone ranges: Jul 2 07:53:20.099048 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:53:20.099063 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:53:20.099093 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 07:53:20.099107 kernel: Movable zone start for each node Jul 2 07:53:20.099121 kernel: Early memory node ranges Jul 2 07:53:20.099134 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:53:20.099148 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jul 2 07:53:20.099161 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 2 07:53:20.099174 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 07:53:20.099187 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 2 07:53:20.099200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:53:20.099217 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:53:20.099230 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jul 2 07:53:20.099243 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 2 07:53:20.099257 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jul 2 07:53:20.099270 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:53:20.099283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 
2 07:53:20.099297 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:53:20.099310 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 2 07:53:20.099324 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:53:20.099345 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 2 07:53:20.099360 kernel: Booting paravirtualized kernel on Hyper-V Jul 2 07:53:20.099374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:53:20.099387 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:53:20.099403 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:53:20.099417 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:53:20.099430 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:53:20.099444 kernel: Hyper-V: PV spinlocks enabled Jul 2 07:53:20.099456 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:53:20.099475 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jul 2 07:53:20.099488 kernel: Policy zone: Normal Jul 2 07:53:20.099505 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:20.099520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:53:20.099534 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:53:20.099549 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:53:20.099562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:53:20.099576 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 308056K reserved, 0K cma-reserved) Jul 2 07:53:20.099596 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:53:20.099610 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:53:20.099638 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:53:20.099654 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:53:20.099666 kernel: rcu: RCU event tracing is enabled. Jul 2 07:53:20.099677 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:53:20.099688 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:53:20.099700 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:53:20.099713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 07:53:20.099725 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:53:20.099737 kernel: Using NULL legacy PIC Jul 2 07:53:20.099754 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 2 07:53:20.099767 kernel: Console: colour dummy device 80x25 Jul 2 07:53:20.099780 kernel: printk: console [tty1] enabled Jul 2 07:53:20.099792 kernel: printk: console [ttyS0] enabled Jul 2 07:53:20.099804 kernel: printk: bootconsole [earlyser0] disabled Jul 2 07:53:20.099824 kernel: ACPI: Core revision 20210730 Jul 2 07:53:20.099837 kernel: Failed to register legacy timer interrupt Jul 2 07:53:20.099849 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:53:20.099861 kernel: Hyper-V: Using IPI hypercalls Jul 2 07:53:20.099877 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jul 2 07:53:20.099889 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 07:53:20.099902 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 07:53:20.099914 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:53:20.099926 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:53:20.099938 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:53:20.099954 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:53:20.099966 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 2 07:53:20.099979 kernel: RETBleed: Vulnerable Jul 2 07:53:20.099991 kernel: Speculative Store Bypass: Vulnerable Jul 2 07:53:20.100003 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:53:20.100015 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:53:20.100027 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 07:53:20.100039 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:53:20.100051 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:53:20.100063 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:53:20.100089 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 07:53:20.100101 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 07:53:20.100114 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 07:53:20.100126 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:53:20.100139 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 2 07:53:20.100151 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 2 07:53:20.100163 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 2 07:53:20.100175 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jul 2 07:53:20.100187 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:53:20.100199 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:53:20.100212 kernel: LSM: Security Framework initializing Jul 2 07:53:20.100223 kernel: SELinux: Initializing. 
Jul 2 07:53:20.100239 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:53:20.100252 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:53:20.100264 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 07:53:20.100276 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 07:53:20.100289 kernel: signal: max sigframe size: 3632 Jul 2 07:53:20.100301 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:53:20.100313 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:53:20.100326 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:53:20.100338 kernel: x86: Booting SMP configuration: Jul 2 07:53:20.100350 kernel: .... node #0, CPUs: #1 Jul 2 07:53:20.100366 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 2 07:53:20.100379 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 07:53:20.100391 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:53:20.100403 kernel: smpboot: Max logical packages: 1 Jul 2 07:53:20.100415 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jul 2 07:53:20.100428 kernel: devtmpfs: initialized Jul 2 07:53:20.100441 kernel: x86/mm: Memory block size: 128MB Jul 2 07:53:20.100458 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 2 07:53:20.100487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:53:20.100507 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:53:20.100521 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:53:20.100534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:53:20.100544 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:53:20.100555 kernel: audit: type=2000 audit(1719906799.024:1): state=initialized audit_enabled=0 res=1 Jul 2 07:53:20.100566 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:53:20.100578 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:53:20.100589 kernel: cpuidle: using governor menu Jul 2 07:53:20.100604 kernel: ACPI: bus type PCI registered Jul 2 07:53:20.100616 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:53:20.100627 kernel: dca service started, version 1.12.1 Jul 2 07:53:20.100639 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:53:20.100651 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:53:20.100663 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:53:20.100676 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:53:20.100688 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:53:20.100699 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:53:20.100715 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:53:20.100728 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:53:20.100741 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:53:20.100754 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:53:20.100768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:53:20.100781 kernel: ACPI: Interpreter enabled Jul 2 07:53:20.100794 kernel: ACPI: PM: (supports S0 S5) Jul 2 07:53:20.100808 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:53:20.100821 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:53:20.100838 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 2 07:53:20.100851 kernel: iommu: Default domain type: Translated Jul 2 07:53:20.100865 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:53:20.100878 kernel: vgaarb: loaded Jul 2 07:53:20.100891 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:53:20.100905 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jul 2 07:53:20.100918 kernel: PTP clock support registered Jul 2 07:53:20.100931 kernel: Registered efivars operations Jul 2 07:53:20.100944 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:53:20.100957 kernel: PCI: System does not support PCI Jul 2 07:53:20.100973 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jul 2 07:53:20.100986 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:53:20.100998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:53:20.101012 kernel: pnp: PnP ACPI init Jul 2 07:53:20.101025 kernel: pnp: PnP ACPI: found 3 devices Jul 2 07:53:20.101039 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:53:20.101052 kernel: NET: Registered PF_INET protocol family Jul 2 07:53:20.101065 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:53:20.106161 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:53:20.106182 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:53:20.106197 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:53:20.106210 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 07:53:20.106223 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:53:20.106238 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:53:20.106251 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:53:20.106266 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:53:20.106279 kernel: NET: Registered PF_XDP protocol family Jul 2 07:53:20.106297 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:53:20.106310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:53:20.106324 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Jul 2 07:53:20.106337 kernel: RAPL PMU: API unit is 
2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:53:20.106351 kernel: Initialise system trusted keyrings Jul 2 07:53:20.106364 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:53:20.106378 kernel: Key type asymmetric registered Jul 2 07:53:20.106392 kernel: Asymmetric key parser 'x509' registered Jul 2 07:53:20.106404 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:53:20.106423 kernel: io scheduler mq-deadline registered Jul 2 07:53:20.106437 kernel: io scheduler kyber registered Jul 2 07:53:20.106449 kernel: io scheduler bfq registered Jul 2 07:53:20.106463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:53:20.106476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:53:20.106490 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:53:20.106504 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:53:20.106517 kernel: i8042: PNP: No PS/2 controller found. Jul 2 07:53:20.106715 kernel: rtc_cmos 00:02: registered as rtc0 Jul 2 07:53:20.106842 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T07:53:19 UTC (1719906799) Jul 2 07:53:20.106956 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 2 07:53:20.106973 kernel: fail to initialize ptp_kvm Jul 2 07:53:20.106988 kernel: intel_pstate: CPU model not supported Jul 2 07:53:20.107002 kernel: efifb: probing for efifb Jul 2 07:53:20.107015 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 07:53:20.107029 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 07:53:20.107043 kernel: efifb: scrolling: redraw Jul 2 07:53:20.107061 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 07:53:20.107088 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 07:53:20.107101 kernel: fb0: EFI VGA frame buffer device Jul 2 07:53:20.107114 kernel: pstore: Registered efi as persistent store backend Jul 2 07:53:20.107128 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:53:20.107143 kernel: Segment Routing with IPv6 Jul 2 07:53:20.107157 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:53:20.107171 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:53:20.107184 kernel: Key type dns_resolver registered Jul 2 07:53:20.107202 kernel: IPI shorthand broadcast: enabled Jul 2 07:53:20.107215 kernel: sched_clock: Marking stable (901284800, 23074800)->(1138234400, -213874800) Jul 2 07:53:20.107227 kernel: registered taskstats version 1 Jul 2 07:53:20.107240 kernel: Loading compiled-in X.509 certificates Jul 2 07:53:20.107254 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:53:20.107267 kernel: Key type .fscrypt registered Jul 2 07:53:20.107281 kernel: Key type fscrypt-provisioning registered Jul 2 07:53:20.107296 kernel: pstore: Using crash dump compression: deflate Jul 2 07:53:20.107314 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:53:20.107329 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:53:20.107341 kernel: ima: No architecture policies found Jul 2 07:53:20.107355 kernel: clk: Disabling unused clocks Jul 2 07:53:20.107368 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:53:20.107381 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:53:20.107394 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:53:20.107407 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:53:20.107421 kernel: Run /init as init process Jul 2 07:53:20.107435 kernel: with arguments: Jul 2 07:53:20.107452 kernel: /init Jul 2 07:53:20.107466 kernel: with environment: Jul 2 07:53:20.107478 kernel: HOME=/ Jul 2 07:53:20.107490 kernel: TERM=linux Jul 2 07:53:20.107502 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:53:20.107519 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:53:20.107535 systemd[1]: Detected virtualization microsoft. Jul 2 07:53:20.107550 systemd[1]: Detected architecture x86-64. Jul 2 07:53:20.107563 systemd[1]: Running in initrd. Jul 2 07:53:20.107575 systemd[1]: No hostname configured, using default hostname. Jul 2 07:53:20.107587 systemd[1]: Hostname set to <localhost>. Jul 2 07:53:20.107601 systemd[1]: Initializing machine ID from random generator. Jul 2 07:53:20.107615 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:53:20.107628 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:53:20.107642 systemd[1]: Reached target cryptsetup.target. Jul 2 07:53:20.107659 systemd[1]: Reached target paths.target. Jul 2 07:53:20.107671 systemd[1]: Reached target slices.target. Jul 2 07:53:20.107685 systemd[1]: Reached target swap.target. Jul 2 07:53:20.107700 systemd[1]: Reached target timers.target. Jul 2 07:53:20.107716 systemd[1]: Listening on iscsid.socket. Jul 2 07:53:20.107731 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:53:20.107746 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:53:20.107761 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:53:20.107780 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:53:20.107796 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:53:20.107812 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:53:20.107828 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:53:20.107844 systemd[1]: Reached target sockets.target. Jul 2 07:53:20.107861 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:53:20.107877 systemd[1]: Finished network-cleanup.service. Jul 2 07:53:20.107893 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:53:20.107908 systemd[1]: Starting systemd-journald.service... Jul 2 07:53:20.107928 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:53:20.107943 systemd[1]: Starting systemd-resolved.service... Jul 2 07:53:20.107959 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:53:20.107989 systemd-journald[183]: Journal started Jul 2 07:53:20.108085 systemd-journald[183]: Runtime Journal (/run/log/journal/bec29a81cf464bbf832c7ce7a977c829) is 8.0M, max 159.0M, 151.0M free. 
Jul 2 07:53:20.102529 systemd-modules-load[184]: Inserted module 'overlay' Jul 2 07:53:20.120137 systemd[1]: Started systemd-journald.service. Jul 2 07:53:20.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.140324 kernel: audit: type=1130 audit(1719906800.126:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.126712 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:53:20.140617 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:53:20.144767 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:53:20.157020 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:53:20.160747 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:53:20.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.179745 systemd-resolved[185]: Positive Trust Anchors: Jul 2 07:53:20.266753 kernel: audit: type=1130 audit(1719906800.140:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.266787 kernel: audit: type=1130 audit(1719906800.144:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.266807 kernel: audit: type=1130 audit(1719906800.155:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.266820 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:53:20.266830 kernel: audit: type=1130 audit(1719906800.222:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.266842 kernel: audit: type=1130 audit(1719906800.237:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.266852 kernel: audit: type=1130 audit(1719906800.250:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:20.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.179945 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:53:20.179983 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:53:20.182797 systemd-resolved[185]: Defaulting to hostname 'linux'. Jul 2 07:53:20.184136 systemd[1]: Started systemd-resolved.service. Jul 2 07:53:20.223302 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:53:20.237672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:53:20.250343 systemd[1]: Reached target nss-lookup.target. Jul 2 07:53:20.265161 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:53:20.299452 dracut-cmdline[200]: dracut-dracut-053 Jul 2 07:53:20.299452 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 07:53:20.299452 dracut-cmdline[200]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:20.324336 systemd-modules-load[184]: Inserted module 'br_netfilter' Jul 2 07:53:20.326797 kernel: Bridge firewalling registered Jul 2 07:53:20.356096 kernel: SCSI subsystem initialized Jul 2 07:53:20.381653 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:53:20.381756 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:53:20.387193 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:53:20.387228 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:53:20.395065 systemd-modules-load[184]: Inserted module 'dm_multipath' Jul 2 07:53:20.397067 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:53:20.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.403702 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:53:20.421526 kernel: audit: type=1130 audit(1719906800.402:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:20.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.430110 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:53:20.449826 kernel: audit: type=1130 audit(1719906800.432:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.449858 kernel: iscsi: registered transport (tcp) Jul 2 07:53:20.477339 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:53:20.477426 kernel: QLogic iSCSI HBA Driver Jul 2 07:53:20.508442 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:53:20.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:20.510812 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:53:20.564107 kernel: raid6: avx512x4 gen() 24941 MB/s Jul 2 07:53:20.584086 kernel: raid6: avx512x4 xor() 5674 MB/s Jul 2 07:53:20.604089 kernel: raid6: avx512x2 gen() 24234 MB/s Jul 2 07:53:20.625089 kernel: raid6: avx512x2 xor() 29352 MB/s Jul 2 07:53:20.645087 kernel: raid6: avx512x1 gen() 24376 MB/s Jul 2 07:53:20.665086 kernel: raid6: avx512x1 xor() 26605 MB/s Jul 2 07:53:20.686088 kernel: raid6: avx2x4 gen() 21883 MB/s Jul 2 07:53:20.706087 kernel: raid6: avx2x4 xor() 5546 MB/s Jul 2 07:53:20.726085 kernel: raid6: avx2x2 gen() 22806 MB/s Jul 2 07:53:20.747089 kernel: raid6: avx2x2 xor() 21911 MB/s Jul 2 07:53:20.767084 kernel: raid6: avx2x1 gen() 20701 MB/s Jul 2 07:53:20.787086 kernel: raid6: avx2x1 xor() 19092 MB/s Jul 2 07:53:20.808086 kernel: raid6: sse2x4 gen() 9982 MB/s Jul 2 07:53:20.828085 kernel: raid6: sse2x4 xor() 5849 MB/s Jul 2 07:53:20.848085 kernel: raid6: sse2x2 gen() 11090 MB/s Jul 2 07:53:20.868089 kernel: raid6: sse2x2 xor() 7585 MB/s Jul 2 07:53:20.888086 kernel: raid6: sse2x1 gen() 10277 MB/s Jul 2 07:53:20.910991 kernel: raid6: sse2x1 xor() 5790 MB/s Jul 2 07:53:20.911011 kernel: raid6: using algorithm avx512x4 gen() 24941 MB/s Jul 2 07:53:20.911035 kernel: raid6: .... xor() 5674 MB/s, rmw enabled Jul 2 07:53:20.917865 kernel: raid6: using avx512x2 recovery algorithm Jul 2 07:53:20.934101 kernel: xor: automatically using best checksumming function avx Jul 2 07:53:21.032109 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:53:21.040963 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:53:21.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.046000 audit: BPF prog-id=7 op=LOAD Jul 2 07:53:21.046000 audit: BPF prog-id=8 op=LOAD Jul 2 07:53:21.046950 systemd[1]: Starting systemd-udevd.service... Jul 2 07:53:21.061621 systemd-udevd[384]: Using default interface naming scheme 'v252'. Jul 2 07:53:21.066435 systemd[1]: Started systemd-udevd.service. Jul 2 07:53:21.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.076403 systemd[1]: Starting dracut-pre-trigger.service... 
Jul 2 07:53:21.094564 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 2 07:53:21.128292 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:53:21.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.130347 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:21.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.168707 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:21.227128 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:53:21.234091 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 07:53:21.258619 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:53:21.258696 kernel: AES CTR mode by8 optimization enabled Jul 2 07:53:21.277103 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 07:53:21.289508 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 07:53:21.293102 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 07:53:21.295098 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 07:53:21.297104 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 07:53:21.311267 kernel: scsi host1: storvsc_host_t Jul 2 07:53:21.311343 kernel: scsi host0: storvsc_host_t Jul 2 07:53:21.328101 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 07:53:21.328190 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 07:53:21.347523 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 07:53:21.370925 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 07:53:21.370998 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 07:53:21.371214 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 07:53:21.377110 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:53:21.380090 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 07:53:21.394729 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 07:53:21.394986 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 07:53:21.395129 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 07:53:21.402898 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 07:53:21.403172 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 07:53:21.410096 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:21.415091 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 07:53:21.526143 kernel: hv_netvsc 0022489e-7e0b-0022-489e-7e0b0022489e eth0: VF slot 1 added Jul 2 07:53:21.542166 kernel: hv_vmbus: registering driver hv_pci Jul 2 07:53:21.542257 kernel: hv_pci 78501086-5f19-4160-8565-67285d8a6810: PCI VMBus probing: Using version 0x10004 Jul 2 07:53:21.553236 kernel: hv_pci 78501086-5f19-4160-8565-67285d8a6810: PCI host bridge to bus 5f19:00 Jul 2 07:53:21.554524 kernel: pci_bus 5f19:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jul 2 07:53:21.563286 kernel: pci_bus 5f19:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 07:53:21.564500 kernel: pci 5f19:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 07:53:21.580350 kernel: 
pci 5f19:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:53:21.601321 kernel: pci 5f19:00:02.0: enabling Extended Tags Jul 2 07:53:21.617110 kernel: pci 5f19:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5f19:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 07:53:21.629211 kernel: pci_bus 5f19:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 07:53:21.629544 kernel: pci 5f19:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:53:21.732112 kernel: mlx5_core 5f19:00:02.0: firmware version: 14.30.1284 Jul 2 07:53:21.906279 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:53:21.917545 kernel: mlx5_core 5f19:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 07:53:21.956104 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (435) Jul 2 07:53:21.971495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:53:22.017112 kernel: mlx5_core 5f19:00:02.0: Supported tc offload range - chains: 1, prios: 1 Jul 2 07:53:22.017430 kernel: mlx5_core 5f19:00:02.0: mlx5e_tc_post_act_init:40:(pid 475): firmware level support is missing Jul 2 07:53:22.029247 kernel: hv_netvsc 0022489e-7e0b-0022-489e-7e0b0022489e eth0: VF registering: eth1 Jul 2 07:53:22.029504 kernel: mlx5_core 5f19:00:02.0 eth1: joined to eth0 Jul 2 07:53:22.042098 kernel: mlx5_core 5f19:00:02.0 enP24345s1: renamed from eth1 Jul 2 07:53:22.149183 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:53:22.156990 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:53:22.167902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:53:22.172188 systemd[1]: Starting disk-uuid.service... Jul 2 07:53:22.191132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:22.200100 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:23.215112 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:23.215206 disk-uuid[566]: The operation has completed successfully. Jul 2 07:53:23.305181 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:53:23.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.305303 systemd[1]: Finished disk-uuid.service. Jul 2 07:53:23.313618 systemd[1]: Starting verity-setup.service... Jul 2 07:53:23.353361 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:53:23.616765 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:53:23.625385 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:53:23.631335 systemd[1]: Finished verity-setup.service. Jul 2 07:53:23.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.746109 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:53:23.746394 systemd[1]: Mounted sysusr-usr.mount. 
Jul 2 07:53:23.750820 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:53:23.756054 systemd[1]: Starting ignition-setup.service... Jul 2 07:53:23.759880 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:53:23.802201 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:23.802299 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:23.802320 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:23.856455 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:53:23.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.863000 audit: BPF prog-id=9 op=LOAD Jul 2 07:53:23.864651 systemd[1]: Starting systemd-networkd.service... Jul 2 07:53:23.874866 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:53:23.895280 systemd-networkd[809]: lo: Link UP Jul 2 07:53:23.895290 systemd-networkd[809]: lo: Gained carrier Jul 2 07:53:23.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.897477 systemd-networkd[809]: Enumeration completed Jul 2 07:53:23.897919 systemd[1]: Started systemd-networkd.service. Jul 2 07:53:23.900172 systemd[1]: Reached target network.target. Jul 2 07:53:23.903087 systemd-networkd[809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:53:23.905626 systemd[1]: Starting iscsiuio.service... Jul 2 07:53:23.922292 systemd[1]: Started iscsiuio.service. Jul 2 07:53:23.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.927207 systemd[1]: Starting iscsid.service... Jul 2 07:53:23.932521 iscsid[816]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:23.932521 iscsid[816]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:53:23.932521 iscsid[816]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:53:23.932521 iscsid[816]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:53:23.932521 iscsid[816]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:23.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.961453 iscsid[816]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:53:23.934476 systemd[1]: Started iscsid.service. Jul 2 07:53:23.952926 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:53:23.975601 systemd[1]: Finished dracut-initqueue.service. 
Jul 2 07:53:23.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:23.979825 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:53:23.983897 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:53:23.988453 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:23.996086 kernel: mlx5_core 5f19:00:02.0 enP24345s1: Link up Jul 2 07:53:23.996117 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:53:24.009796 systemd[1]: Finished ignition-setup.service. Jul 2 07:53:24.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.015201 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:53:24.019769 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:53:24.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.036093 kernel: hv_netvsc 0022489e-7e0b-0022-489e-7e0b0022489e eth0: Data path switched to VF: enP24345s1 Jul 2 07:53:24.036787 systemd-networkd[809]: enP24345s1: Link UP Jul 2 07:53:24.043359 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:53:24.036913 systemd-networkd[809]: eth0: Link UP Jul 2 07:53:24.043472 systemd-networkd[809]: eth0: Gained carrier Jul 2 07:53:24.049641 systemd-networkd[809]: enP24345s1: Gained carrier Jul 2 07:53:24.085195 systemd-networkd[809]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:53:25.410303 systemd-networkd[809]: eth0: Gained IPv6LL Jul 2 07:53:27.321948 ignition[831]: Ignition 2.14.0 Jul 2 07:53:27.322004 ignition[831]: Stage: fetch-offline Jul 2 07:53:27.322169 ignition[831]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:27.322229 ignition[831]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:27.428128 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:27.428379 ignition[831]: parsed url from cmdline: "" Jul 2 07:53:27.431407 ignition[831]: no config URL provided Jul 2 07:53:27.431428 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:53:27.431450 ignition[831]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:53:27.431460 ignition[831]: failed to fetch config: resource requires networking Jul 2 07:53:27.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.447250 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:53:27.485459 kernel: kauditd_printk_skb: 18 callbacks suppressed Jul 2 07:53:27.485499 kernel: audit: type=1130 audit(1719906807.451:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.434029 ignition[831]: Ignition finished successfully Jul 2 07:53:27.455875 systemd[1]: Starting ignition-fetch.service... 
Jul 2 07:53:27.473313 ignition[837]: Ignition 2.14.0 Jul 2 07:53:27.473320 ignition[837]: Stage: fetch Jul 2 07:53:27.473472 ignition[837]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:27.473497 ignition[837]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:27.478465 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:27.479353 ignition[837]: parsed url from cmdline: "" Jul 2 07:53:27.479370 ignition[837]: no config URL provided Jul 2 07:53:27.479389 ignition[837]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:53:27.479491 ignition[837]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:53:27.479606 ignition[837]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 07:53:27.584248 ignition[837]: GET result: OK Jul 2 07:53:27.584434 ignition[837]: config has been read from IMDS userdata Jul 2 07:53:27.584480 ignition[837]: parsing config with SHA512: c66907752b43c3844a6a0ceb9d1fb0af51358c759062cbb18ef5f70e02d70e37f246e559a4f9de322523457027c9b6a8628331de41974fdc8f53388486fe7d6c Jul 2 07:53:27.592522 unknown[837]: fetched base config from "system" Jul 2 07:53:27.592538 unknown[837]: fetched base config from "system" Jul 2 07:53:27.592548 unknown[837]: fetched user config from "azure" Jul 2 07:53:27.599779 ignition[837]: fetch: fetch complete Jul 2 07:53:27.599789 ignition[837]: fetch: fetch passed Jul 2 07:53:27.599861 ignition[837]: Ignition finished successfully Jul 2 07:53:27.606248 systemd[1]: Finished ignition-fetch.service. Jul 2 07:53:27.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.622100 kernel: audit: type=1130 audit(1719906807.607:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.621239 systemd[1]: Starting ignition-kargs.service... Jul 2 07:53:27.633457 ignition[843]: Ignition 2.14.0 Jul 2 07:53:27.633469 ignition[843]: Stage: kargs Jul 2 07:53:27.633624 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:27.633660 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:27.644544 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:27.645721 ignition[843]: kargs: kargs passed Jul 2 07:53:27.648374 systemd[1]: Finished ignition-kargs.service. Jul 2 07:53:27.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.645779 ignition[843]: Ignition finished successfully Jul 2 07:53:27.670193 kernel: audit: type=1130 audit(1719906807.651:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.652386 systemd[1]: Starting ignition-disks.service... 
Jul 2 07:53:27.674812 ignition[849]: Ignition 2.14.0 Jul 2 07:53:27.674823 ignition[849]: Stage: disks Jul 2 07:53:27.674982 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:27.675017 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:27.678461 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:27.681385 ignition[849]: disks: disks passed Jul 2 07:53:27.681443 ignition[849]: Ignition finished successfully Jul 2 07:53:27.688742 systemd[1]: Finished ignition-disks.service. Jul 2 07:53:27.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.692832 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:53:27.705042 kernel: audit: type=1130 audit(1719906807.692:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.708585 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:53:27.712628 systemd[1]: Reached target local-fs.target. Jul 2 07:53:27.716264 systemd[1]: Reached target sysinit.target. Jul 2 07:53:27.720030 systemd[1]: Reached target basic.target. Jul 2 07:53:27.724755 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:53:27.793836 systemd-fsck[857]: ROOT: clean, 614/7326000 files, 481076/7359488 blocks Jul 2 07:53:27.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.801104 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:53:27.820098 kernel: audit: type=1130 audit(1719906807.804:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.818042 systemd[1]: Mounting sysroot.mount... Jul 2 07:53:27.839099 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:53:27.839237 systemd[1]: Mounted sysroot.mount. Jul 2 07:53:27.840478 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:53:27.875414 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:53:27.881366 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 07:53:27.887147 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:53:27.891603 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:53:27.897182 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:53:27.955156 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:53:27.960642 systemd[1]: Starting initrd-setup-root.service... 
Jul 2 07:53:27.977107 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (868) Jul 2 07:53:27.977171 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:27.982107 initrd-setup-root[873]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:53:27.992745 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:27.992782 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:27.997341 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:53:28.003488 initrd-setup-root[899]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:53:28.021842 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:53:28.028950 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:53:28.503158 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:53:28.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.506551 systemd[1]: Starting ignition-mount.service... Jul 2 07:53:28.529853 kernel: audit: type=1130 audit(1719906808.505:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.526921 systemd[1]: Starting sysroot-boot.service... Jul 2 07:53:28.535085 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:28.535249 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:28.556819 ignition[934]: INFO : Ignition 2.14.0 Jul 2 07:53:28.556819 ignition[934]: INFO : Stage: mount Jul 2 07:53:28.560786 ignition[934]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:28.560786 ignition[934]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:28.570853 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:28.576426 systemd[1]: Finished sysroot-boot.service. Jul 2 07:53:28.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.596092 kernel: audit: type=1130 audit(1719906808.581:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.599171 ignition[934]: INFO : mount: mount passed Jul 2 07:53:28.601385 ignition[934]: INFO : Ignition finished successfully Jul 2 07:53:28.604363 systemd[1]: Finished ignition-mount.service. Jul 2 07:53:28.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.620098 kernel: audit: type=1130 audit(1719906808.606:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:29.634366 coreos-metadata[867]: Jul 02 07:53:29.634 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 07:53:29.654338 coreos-metadata[867]: Jul 02 07:53:29.654 INFO Fetch successful Jul 2 07:53:29.689564 coreos-metadata[867]: Jul 02 07:53:29.689 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 07:53:29.707809 coreos-metadata[867]: Jul 02 07:53:29.707 INFO Fetch successful Jul 2 07:53:29.724011 coreos-metadata[867]: Jul 02 07:53:29.723 INFO wrote hostname ci-3510.3.5-a-37a211789c to /sysroot/etc/hostname Jul 2 07:53:29.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.726500 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 07:53:29.746899 kernel: audit: type=1130 audit(1719906809.730:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.732698 systemd[1]: Starting ignition-files.service... Jul 2 07:53:29.751653 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:53:29.772171 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (946) Jul 2 07:53:29.772232 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:29.772247 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:29.779041 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:29.784942 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:53:29.799864 ignition[965]: INFO : Ignition 2.14.0 Jul 2 07:53:29.799864 ignition[965]: INFO : Stage: files Jul 2 07:53:29.803863 ignition[965]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:29.803863 ignition[965]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:29.817319 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:29.836379 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:53:29.839460 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:53:29.839460 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:53:29.913109 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:53:29.916450 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:53:29.931234 unknown[965]: wrote ssh authorized keys file for user: core Jul 2 07:53:29.933883 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:53:30.108490 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:53:30.113884 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:53:30.195969 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:53:30.303720 ignition[965]: INFO : files: createFilesystemsFiles: 
createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 07:53:30.311482 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:53:30.415911 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (970) Jul 2 07:53:30.392556 systemd[1]: mnt-oem2499263479.mount: Deactivated successfully. 
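
[Note] The flatcar-metadata-hostname entries earlier in this section show coreos-metadata resolving the VM name from IMDS and writing it to /sysroot/etc/hostname. A rough Python equivalent, assuming the same metadata endpoint and with a plain /etc/hostname path standing in for the /sysroot prefix used inside the initrd:

    import urllib.request

    # Endpoint matching the "compute/name" fetch in the log above.
    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    # In the log this value is written to /sysroot/etc/hostname before switch-root.
    with open("/etc/hostname", "w") as f:
        f.write(hostname + "\n")
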
Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2499263479" Jul 2 07:53:30.418788 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2499263479": device or resource busy Jul 2 07:53:30.418788 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2499263479", trying btrfs: device or resource busy Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2499263479" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2499263479" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2499263479" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2499263479" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531654457" Jul 2 07:53:30.418788 ignition[965]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531654457": device or resource busy Jul 2 07:53:30.418788 ignition[965]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3531654457", trying btrfs: device or resource busy Jul 2 07:53:30.418788 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531654457" Jul 2 07:53:30.414141 systemd[1]: mnt-oem3531654457.mount: Deactivated successfully. 
Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3531654457" Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem3531654457" Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem3531654457" Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:53:30.501317 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:53:30.986515 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK Jul 2 07:53:31.402502 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:53:31.402502 ignition[965]: INFO : files: op(13): [started] processing unit "waagent.service" Jul 2 07:53:31.402502 ignition[965]: INFO : files: op(13): [finished] processing unit "waagent.service" Jul 2 07:53:31.402502 ignition[965]: INFO : files: op(14): [started] processing unit "nvidia.service" Jul 2 07:53:31.402502 ignition[965]: INFO : files: op(14): [finished] processing unit "nvidia.service" Jul 2 07:53:31.402502 ignition[965]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:53:31.440551 ignition[965]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:53:31.440551 ignition[965]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:53:31.440551 ignition[965]: INFO : files: files passed Jul 2 07:53:31.440551 ignition[965]: INFO : Ignition finished successfully Jul 2 07:53:31.481852 systemd[1]: Finished ignition-files.service. 
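
[Note] The files stage that finishes above writes regular files, creates the kubernetes.raw symlink, and presets the waagent, nvidia, and prepare-helm units to enabled. Those operations are driven by the config fetched earlier; the following is a hypothetical, much-reduced Ignition-style (spec v3) config that would produce similar operations, expressed as a Python dict purely for illustration (file contents and the prepare-helm unit body are placeholders, not the values actually used on this machine):

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                # One of several files written in the log; contents here are a placeholder data URL.
                {"path": "/home/core/nginx.yaml", "mode": 420,
                 "contents": {"source": "data:,placeholder"}},
            ],
            "links": [
                # Mirrors the kubernetes.raw link created by op(9) above.
                {"path": "/etc/extensions/kubernetes.raw", "hard": False,
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # Units the log shows being preset to enabled; real unit bodies omitted.
                {"name": "nvidia.service", "enabled": True},
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Placeholder for the real prepare-helm unit\n"},
            ],
        },
    }

    print(json.dumps(config, indent=2))
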
Jul 2 07:53:31.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.498418 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:53:31.512887 kernel: audit: type=1130 audit(1719906811.494:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.512866 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:53:31.513953 systemd[1]: Starting ignition-quench.service... Jul 2 07:53:31.523744 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:53:31.524829 systemd[1]: Finished ignition-quench.service. Jul 2 07:53:31.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.532226 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:53:31.536667 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:53:31.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.541961 systemd[1]: Reached target ignition-complete.target. Jul 2 07:53:31.547483 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:53:31.563248 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:53:31.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.563371 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:53:31.568475 systemd[1]: Reached target initrd-fs.target. Jul 2 07:53:31.572677 systemd[1]: Reached target initrd.target. Jul 2 07:53:31.573696 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:53:31.574743 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:53:31.591323 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:53:31.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.593409 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:53:31.604996 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:53:31.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:31.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.606213 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:53:31.606631 systemd[1]: Stopped target timers.target. Jul 2 07:53:31.607036 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:53:31.607189 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:53:31.607580 systemd[1]: Stopped target initrd.target. Jul 2 07:53:31.607875 systemd[1]: Stopped target basic.target. Jul 2 07:53:31.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.608306 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:53:31.608803 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:53:31.609335 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:53:31.609757 systemd[1]: Stopped target remote-fs.target. Jul 2 07:53:31.610689 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:53:31.611139 systemd[1]: Stopped target sysinit.target. Jul 2 07:53:31.611602 systemd[1]: Stopped target local-fs.target. Jul 2 07:53:31.612005 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:53:31.612530 systemd[1]: Stopped target swap.target. Jul 2 07:53:31.613035 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:53:31.613181 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:53:31.613539 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:53:31.648491 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:53:31.653377 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:53:31.661583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:53:31.667545 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:53:31.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.694595 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:53:31.697136 systemd[1]: Stopped ignition-files.service. Jul 2 07:53:31.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.701250 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 07:53:31.704335 systemd[1]: Stopped flatcar-metadata-hostname.service. Jul 2 07:53:31.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.710577 systemd[1]: Stopping ignition-mount.service... Jul 2 07:53:31.715697 systemd[1]: Stopping iscsid.service... Jul 2 07:53:31.725346 iscsid[816]: iscsid shutting down. Jul 2 07:53:31.720897 systemd[1]: Stopping sysroot-boot.service... 
Jul 2 07:53:31.731095 ignition[1004]: INFO : Ignition 2.14.0 Jul 2 07:53:31.731095 ignition[1004]: INFO : Stage: umount Jul 2 07:53:31.731095 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:31.731095 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:31.740282 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:31.742605 ignition[1004]: INFO : umount: umount passed Jul 2 07:53:31.747467 ignition[1004]: INFO : Ignition finished successfully Jul 2 07:53:31.753736 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:53:31.756424 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:53:31.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.761362 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:53:31.761628 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:53:31.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.770557 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:53:31.772944 systemd[1]: Stopped iscsid.service. Jul 2 07:53:31.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.776988 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:53:31.777112 systemd[1]: Stopped ignition-mount.service. Jul 2 07:53:31.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.784978 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:53:31.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.785139 systemd[1]: Stopped ignition-disks.service. Jul 2 07:53:31.788288 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:53:31.788374 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:53:31.793848 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:53:31.793926 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:53:31.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.809398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jul 2 07:53:31.809519 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:53:31.812477 systemd[1]: Stopped target paths.target. Jul 2 07:53:31.814819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:53:31.827161 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:53:31.829987 systemd[1]: Stopped target slices.target. Jul 2 07:53:31.834206 systemd[1]: Stopped target sockets.target. Jul 2 07:53:31.836115 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:53:31.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.836173 systemd[1]: Closed iscsid.socket. Jul 2 07:53:31.839607 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:53:31.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.839677 systemd[1]: Stopped ignition-setup.service. Jul 2 07:53:31.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.846331 systemd[1]: Stopping iscsiuio.service... Jul 2 07:53:31.850856 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:53:31.851459 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:53:31.851600 systemd[1]: Stopped iscsiuio.service. Jul 2 07:53:31.854094 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:53:31.854193 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:53:31.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.858913 systemd[1]: Stopped target network.target. Jul 2 07:53:31.861998 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:53:31.862045 systemd[1]: Closed iscsiuio.socket. Jul 2 07:53:31.866435 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:53:31.870593 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:53:31.873884 systemd-networkd[809]: eth0: DHCPv6 lease lost Jul 2 07:53:31.876790 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:53:31.876885 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:53:31.879422 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:53:31.901000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:53:31.879458 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:53:31.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.882844 systemd[1]: Stopping network-cleanup.service... 
Jul 2 07:53:31.901906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:53:31.904509 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:53:31.906949 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:53:31.907009 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:53:31.911216 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:53:31.911297 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:53:31.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.930165 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:53:31.936628 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:53:31.943523 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:53:31.946447 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:53:31.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.953719 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:53:31.955504 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:53:31.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.962138 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:53:31.961000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:53:31.963318 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:53:31.967135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:53:31.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.967182 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:53:31.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.970439 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:53:31.972010 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:53:31.975013 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:53:32.018386 kernel: hv_netvsc 0022489e-7e0b-0022-489e-7e0b0022489e eth0: Data path switched from VF: enP24345s1 Jul 2 07:53:32.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:32.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.975849 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:53:31.980372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:53:31.980422 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:53:31.985666 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:53:31.990345 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:53:31.990464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:53:32.000261 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:53:32.000328 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:53:32.002700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:53:32.002766 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:53:32.024297 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:53:32.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.048567 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:53:32.052003 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:53:32.064021 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:53:32.066599 systemd[1]: Stopped network-cleanup.service. Jul 2 07:53:32.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.388196 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:53:32.388393 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:53:32.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.393648 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:53:32.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:32.397049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:53:32.397165 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:53:32.402814 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:53:32.418311 systemd[1]: Switching root. Jul 2 07:53:32.444697 systemd-journald[183]: Journal stopped Jul 2 07:53:48.126903 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jul 2 07:53:48.126932 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:53:48.126944 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 07:53:48.126954 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:53:48.126963 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:53:48.126974 kernel: SELinux: policy capability open_perms=1 Jul 2 07:53:48.126987 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:53:48.126996 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:53:48.127006 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:53:48.127017 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:53:48.127025 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:53:48.127037 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:53:48.127052 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 2 07:53:48.127083 kernel: audit: type=1403 audit(1719906815.299:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:53:48.127107 systemd[1]: Successfully loaded SELinux policy in 271.044ms. Jul 2 07:53:48.127125 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.072ms. Jul 2 07:53:48.127146 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:53:48.127165 systemd[1]: Detected virtualization microsoft. Jul 2 07:53:48.127187 systemd[1]: Detected architecture x86-64. Jul 2 07:53:48.127205 systemd[1]: Detected first boot. Jul 2 07:53:48.127224 systemd[1]: Hostname set to <ci-3510.3.5-a-37a211789c>. Jul 2 07:53:48.127243 systemd[1]: Initializing machine ID from random generator. Jul 2 07:53:48.127260 kernel: audit: type=1400 audit(1719906816.019:83): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:53:48.127281 kernel: audit: type=1400 audit(1719906816.049:84): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:48.127301 kernel: audit: type=1400 audit(1719906816.049:85): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:48.127320 kernel: audit: type=1334 audit(1719906816.065:86): prog-id=10 op=LOAD Jul 2 07:53:48.127344 kernel: audit: type=1334 audit(1719906816.065:87): prog-id=10 op=UNLOAD Jul 2 07:53:48.127361 kernel: audit: type=1334 audit(1719906816.081:88): prog-id=11 op=LOAD Jul 2 07:53:48.127378 kernel: audit: type=1334 audit(1719906816.081:89): prog-id=11 op=UNLOAD Jul 2 07:53:48.127397 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 2 07:53:48.127415 kernel: audit: type=1400 audit(1719906817.555:90): avc: denied { associate } for pid=1038 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:53:48.127435 kernel: audit: type=1300 audit(1719906817.555:90): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1021 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:48.127457 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:53:48.127475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:53:48.127495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:53:48.127514 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:53:48.127530 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 07:53:48.127547 kernel: audit: type=1334 audit(1719906827.615:92): prog-id=12 op=LOAD Jul 2 07:53:48.127564 kernel: audit: type=1334 audit(1719906827.615:93): prog-id=3 op=UNLOAD Jul 2 07:53:48.127584 kernel: audit: type=1334 audit(1719906827.620:94): prog-id=13 op=LOAD Jul 2 07:53:48.127611 kernel: audit: type=1334 audit(1719906827.625:95): prog-id=14 op=LOAD Jul 2 07:53:48.127982 kernel: audit: type=1334 audit(1719906827.625:96): prog-id=4 op=UNLOAD Jul 2 07:53:48.127998 kernel: audit: type=1334 audit(1719906827.625:97): prog-id=5 op=UNLOAD Jul 2 07:53:48.128007 kernel: audit: type=1334 audit(1719906827.630:98): prog-id=15 op=LOAD Jul 2 07:53:48.128020 kernel: audit: type=1334 audit(1719906827.630:99): prog-id=12 op=UNLOAD Jul 2 07:53:48.128030 kernel: audit: type=1334 audit(1719906827.650:100): prog-id=16 op=LOAD Jul 2 07:53:48.128042 kernel: audit: type=1334 audit(1719906827.655:101): prog-id=17 op=LOAD Jul 2 07:53:48.128053 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:53:48.128068 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:53:48.128094 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:53:48.128104 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:53:48.128117 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:53:48.128127 systemd[1]: Created slice system-getty.slice. Jul 2 07:53:48.128139 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:53:48.128149 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:53:48.128162 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:53:48.128174 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:53:48.128186 systemd[1]: Created slice user.slice. Jul 2 07:53:48.128196 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:53:48.128208 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:53:48.128218 systemd[1]: Set up automount boot.automount. 
Jul 2 07:53:48.128230 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:53:48.128240 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:53:48.128250 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:53:48.128262 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:53:48.128275 systemd[1]: Reached target integritysetup.target. Jul 2 07:53:48.128286 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:53:48.128296 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:48.128309 systemd[1]: Reached target slices.target. Jul 2 07:53:48.128319 systemd[1]: Reached target swap.target. Jul 2 07:53:48.128331 systemd[1]: Reached target torcx.target. Jul 2 07:53:48.128340 systemd[1]: Reached target veritysetup.target. Jul 2 07:53:48.128355 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:53:48.128365 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:53:48.128377 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:53:48.128388 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:53:48.132779 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:53:48.132808 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:53:48.132821 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:53:48.132832 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:53:48.132844 systemd[1]: Mounting media.mount... Jul 2 07:53:48.132855 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:48.132868 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:53:48.132878 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:53:48.132891 systemd[1]: Mounting tmp.mount... Jul 2 07:53:48.132903 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:53:48.132916 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:48.132926 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:53:48.132936 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:53:48.132949 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:48.132961 systemd[1]: Starting modprobe@drm.service... Jul 2 07:53:48.132973 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:48.132984 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:53:48.132997 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:48.133008 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:53:48.133020 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:53:48.133035 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:53:48.133048 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:53:48.133057 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:53:48.133068 systemd[1]: Stopped systemd-journald.service. Jul 2 07:53:48.133091 systemd[1]: Starting systemd-journald.service... Jul 2 07:53:48.133102 kernel: loop: module loaded Jul 2 07:53:48.133114 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:53:48.133125 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:53:48.133139 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:53:48.133151 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:48.133162 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:53:48.133176 systemd[1]: Stopped verity-setup.service. 
Jul 2 07:53:48.133187 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:48.133199 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:53:48.133212 kernel: fuse: init (API version 7.34) Jul 2 07:53:48.133222 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:53:48.133242 systemd[1]: Mounted media.mount. Jul 2 07:53:48.133265 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:53:48.133283 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:53:48.133308 systemd-journald[1148]: Journal started Jul 2 07:53:48.133390 systemd-journald[1148]: Runtime Journal (/run/log/journal/9b376903c85f49f8a27836281da82d17) is 8.0M, max 159.0M, 151.0M free. Jul 2 07:53:35.299000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:53:36.019000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:53:36.049000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:36.049000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:36.065000 audit: BPF prog-id=10 op=LOAD Jul 2 07:53:36.065000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:53:36.081000 audit: BPF prog-id=11 op=LOAD Jul 2 07:53:36.081000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:53:37.555000 audit[1038]: AVC avc: denied { associate } for pid=1038 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:53:37.555000 audit[1038]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1021 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:37.555000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:37.566000 audit[1038]: AVC avc: denied { associate } for pid=1038 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:53:37.566000 audit[1038]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1021 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:37.566000 audit: CWD cwd="/" Jul 2 07:53:37.566000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 
07:53:37.566000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:37.566000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:47.615000 audit: BPF prog-id=12 op=LOAD Jul 2 07:53:47.615000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:53:47.620000 audit: BPF prog-id=13 op=LOAD Jul 2 07:53:47.625000 audit: BPF prog-id=14 op=LOAD Jul 2 07:53:47.625000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:53:47.625000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:53:47.630000 audit: BPF prog-id=15 op=LOAD Jul 2 07:53:47.630000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:53:47.650000 audit: BPF prog-id=16 op=LOAD Jul 2 07:53:47.655000 audit: BPF prog-id=17 op=LOAD Jul 2 07:53:47.655000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:53:47.655000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:53:47.660000 audit: BPF prog-id=18 op=LOAD Jul 2 07:53:47.660000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:53:47.665000 audit: BPF prog-id=19 op=LOAD Jul 2 07:53:47.670000 audit: BPF prog-id=20 op=LOAD Jul 2 07:53:47.670000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:53:47.670000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:53:47.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.684000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:53:48.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:48.032000 audit: BPF prog-id=21 op=LOAD Jul 2 07:53:48.032000 audit: BPF prog-id=22 op=LOAD Jul 2 07:53:48.032000 audit: BPF prog-id=23 op=LOAD Jul 2 07:53:48.032000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:53:48.032000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:53:48.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.124000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:53:48.124000 audit[1148]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe5ed03590 a2=4000 a3=7ffe5ed0362c items=0 ppid=1 pid=1148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:48.124000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:53:37.515692 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:53:47.613698 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:53:37.516191 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:53:47.671436 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 2 07:53:37.516209 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:53:37.516247 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:53:37.516257 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:53:37.516307 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:53:37.516319 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:53:37.516551 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:53:37.516590 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:53:37.516601 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:53:37.535549 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:53:37.535624 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:53:37.535652 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:53:37.535667 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:53:37.535692 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:53:37.535708 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:53:46.213952 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:46.214312 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:46.214520 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:46.214765 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:46.214824 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:53:46.214887 /usr/lib/systemd/system-generators/torcx-generator[1038]: time="2024-07-02T07:53:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:53:48.142812 systemd[1]: Started systemd-journald.service. Jul 2 07:53:48.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.143795 systemd[1]: Mounted tmp.mount. Jul 2 07:53:48.145921 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:53:48.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.148514 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:53:48.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.151398 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:53:48.151555 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:53:48.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.154244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:48.154394 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:48.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:48.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.156694 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:53:48.156841 systemd[1]: Finished modprobe@drm.service. Jul 2 07:53:48.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.159155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:48.159311 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:48.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.161733 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:53:48.161880 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:53:48.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.164414 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:48.164565 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:48.166977 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:53:48.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.172563 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:53:48.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.175494 systemd[1]: Finished systemd-remount-fs.service. 
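[Editor's aside] The torcx generator earlier in this log reports sealing its state to /run/metadata/torcx with keys such as TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH and TORCX_UNPACKDIR. Purely as an illustration (not part of the boot flow), a minimal Python sketch for reading that file back into a dict, assuming the one-KEY="value"-per-line environment-file layout implied by the logged content string:

import re
from pathlib import Path

METADATA = Path("/run/metadata/torcx")  # path taken from the torcx-generator log above

def read_torcx_metadata(path: Path = METADATA) -> dict:
    """Parse KEY="value" lines into a dict; layout assumed from the log, not verified."""
    values = {}
    for line in path.read_text().splitlines():
        match = re.fullmatch(r'\s*([A-Z0-9_]+)="?(.*?)"?\s*', line)
        if match:
            values[match.group(1)] = match.group(2)
    return values

if __name__ == "__main__":
    for key, value in read_torcx_metadata().items():
        print(f"{key}={value}")

Units started later in the boot (docker.service and friends) consume these variables to find the unpacked binaries under TORCX_BINDIR.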
Jul 2 07:53:48.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.179033 systemd[1]: Reached target network-pre.target. Jul 2 07:53:48.183521 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:53:48.187638 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:53:48.190547 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:53:48.195821 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:53:48.199736 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:53:48.203056 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:48.204817 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:53:48.207207 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:48.209002 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:53:48.213955 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:53:48.219943 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:53:48.223288 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:53:48.253751 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:53:48.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.256502 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:53:48.259384 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:48.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.263557 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:53:48.271213 systemd-journald[1148]: Time spent on flushing to /var/log/journal/9b376903c85f49f8a27836281da82d17 is 26.028ms for 1171 entries. Jul 2 07:53:48.271213 systemd-journald[1148]: System Journal (/var/log/journal/9b376903c85f49f8a27836281da82d17) is 8.0M, max 2.6G, 2.6G free. Jul 2 07:53:48.345230 systemd-journald[1148]: Received client request to flush runtime journal. Jul 2 07:53:48.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.314371 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:53:48.346361 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:53:48.346533 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:53:48.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:48.876852 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:53:48.882934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:53:49.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.252834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:53:49.501098 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:53:49.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.504000 audit: BPF prog-id=24 op=LOAD Jul 2 07:53:49.504000 audit: BPF prog-id=25 op=LOAD Jul 2 07:53:49.505000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:53:49.505000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:53:49.506112 systemd[1]: Starting systemd-udevd.service... Jul 2 07:53:49.531015 systemd-udevd[1167]: Using default interface naming scheme 'v252'. Jul 2 07:53:49.746327 systemd[1]: Started systemd-udevd.service. Jul 2 07:53:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.754000 audit: BPF prog-id=26 op=LOAD Jul 2 07:53:49.758295 systemd[1]: Starting systemd-networkd.service... Jul 2 07:53:49.801218 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:53:49.861140 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:53:49.883000 audit: BPF prog-id=27 op=LOAD Jul 2 07:53:49.883000 audit: BPF prog-id=28 op=LOAD Jul 2 07:53:49.883000 audit: BPF prog-id=29 op=LOAD Jul 2 07:53:49.885249 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:53:49.919104 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 07:53:49.909000 audit[1186]: AVC avc: denied { confidentiality } for pid=1186 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:53:49.930746 kernel: hv_vmbus: registering driver hv_balloon Jul 2 07:53:49.947206 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 07:53:49.947283 kernel: hv_vmbus: registering driver hv_utils Jul 2 07:53:49.950959 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 07:53:49.951015 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 07:53:49.959925 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 07:53:49.966870 kernel: Console: switching to colour dummy device 80x25 Jul 2 07:53:49.968228 systemd[1]: Started systemd-userdbd.service. Jul 2 07:53:49.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:49.972092 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 07:53:49.909000 audit[1186]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55682290e530 a1=f884 a2=7f992a3d8bc5 a3=5 items=12 ppid=1167 pid=1186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:49.909000 audit: CWD cwd="/" Jul 2 07:53:49.909000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=1 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=2 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=3 name=(null) inode=15205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=4 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=5 name=(null) inode=15206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=6 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=7 name=(null) inode=15207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=8 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=9 name=(null) inode=15208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=10 name=(null) inode=15204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PATH item=11 name=(null) inode=15209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.909000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:53:50.269626 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 07:53:50.269726 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 07:53:50.269756 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 07:53:50.453448 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1181) Jul 2 07:53:50.501446 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Jul 2 
07:53:50.514236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:53:50.584878 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:53:50.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.589098 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:53:50.794260 systemd-networkd[1183]: lo: Link UP Jul 2 07:53:50.794273 systemd-networkd[1183]: lo: Gained carrier Jul 2 07:53:50.794924 systemd-networkd[1183]: Enumeration completed Jul 2 07:53:50.795066 systemd[1]: Started systemd-networkd.service. Jul 2 07:53:50.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.799124 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:53:50.837063 systemd-networkd[1183]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:53:50.891445 kernel: mlx5_core 5f19:00:02.0 enP24345s1: Link up Jul 2 07:53:50.916711 kernel: hv_netvsc 0022489e-7e0b-0022-489e-7e0b0022489e eth0: Data path switched to VF: enP24345s1 Jul 2 07:53:50.917354 systemd-networkd[1183]: enP24345s1: Link UP Jul 2 07:53:50.917565 systemd-networkd[1183]: eth0: Link UP Jul 2 07:53:50.917573 systemd-networkd[1183]: eth0: Gained carrier Jul 2 07:53:50.922767 systemd-networkd[1183]: enP24345s1: Gained carrier Jul 2 07:53:50.952604 systemd-networkd[1183]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:53:50.957740 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:53:50.989020 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:53:50.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.992051 systemd[1]: Reached target cryptsetup.target. Jul 2 07:53:50.995796 systemd[1]: Starting lvm2-activation.service... Jul 2 07:53:51.001241 lvm[1245]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:53:51.021752 systemd[1]: Finished lvm2-activation.service. Jul 2 07:53:51.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.024569 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:53:51.027051 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:53:51.027096 systemd[1]: Reached target local-fs.target. Jul 2 07:53:51.029204 systemd[1]: Reached target machines.target. Jul 2 07:53:51.032808 systemd[1]: Starting ldconfig.service... Jul 2 07:53:51.035433 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.035536 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:51.036942 systemd[1]: Starting systemd-boot-update.service... 
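[Editor's aside] systemd-networkd reports eth0 gaining carrier and acquiring 10.200.8.10/24 with gateway 10.200.8.1 via DHCP from 168.63.129.16, the well-known Azure platform address. As an illustration only of what that lease looks like from inside the guest, a small Linux-only Python sketch that reads an interface's IPv4 address with the SIOCGIFADDR ioctl (interface name taken from the log):

import fcntl
import socket
import struct

SIOCGIFADDR = 0x8915  # Linux ioctl to read an interface's IPv4 address

def ipv4_address(ifname: str) -> str:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        packed = fcntl.ioctl(
            s.fileno(),
            SIOCGIFADDR,
            struct.pack("256s", ifname[:15].encode()),
        )
    # The address sits at offset 20..24 of the returned ifreq structure.
    return socket.inet_ntoa(packed[20:24])

if __name__ == "__main__":
    print(ipv4_address("eth0"))  # expected to print 10.200.8.10 on this host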
Jul 2 07:53:51.040276 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:53:51.044757 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:53:51.048559 systemd[1]: Starting systemd-sysext.service... Jul 2 07:53:51.153390 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1247 (bootctl) Jul 2 07:53:51.155965 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:53:51.162205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:53:51.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.193730 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:53:51.194632 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:53:51.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.201966 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:53:51.225285 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:51.225540 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:53:51.272445 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 07:53:51.329522 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:53:51.348447 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 07:53:51.356461 (sd-sysext)[1259]: Using extensions 'kubernetes'. Jul 2 07:53:51.357383 (sd-sysext)[1259]: Merged extensions into '/usr'. Jul 2 07:53:51.376514 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.378216 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:53:51.380971 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.385202 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:51.389464 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:51.394725 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:51.397208 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.397490 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:51.397728 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.401029 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:53:51.403887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:51.404054 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:51.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:51.407655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:51.407811 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:51.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.411466 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:51.411618 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:51.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.414699 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:51.414836 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.416182 systemd[1]: Finished systemd-sysext.service. Jul 2 07:53:51.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.420792 systemd[1]: Starting ensure-sysext.service... Jul 2 07:53:51.428976 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:53:51.437565 systemd[1]: Reloading. Jul 2 07:53:51.452205 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:53:51.454337 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:53:51.471101 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:53:51.521583 /usr/lib/systemd/system-generators/torcx-generator[1285]: time="2024-07-02T07:53:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:53:51.523182 /usr/lib/systemd/system-generators/torcx-generator[1285]: time="2024-07-02T07:53:51Z" level=info msg="torcx already run" Jul 2 07:53:51.635891 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:53:51.635914 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:53:51.657759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
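[Editor's aside] During the reload, systemd warns that locksmithd.service still uses CPUShares= and MemoryLimit=, which are slated for removal in favor of CPUWeight= and MemoryMax=. As a hypothetical helper (nothing the image ships), a short Python sketch that scans unit files for the same deprecated directives systemd flags here:

from pathlib import Path

# Deprecated cgroup directives and their replacements, mirroring the
# warnings systemd prints above for locksmithd.service.
DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

def scan(unit_dir: str = "/usr/lib/systemd/system") -> None:
    for unit in sorted(Path(unit_dir).glob("*.service")):
        lines = unit.read_text(errors="replace").splitlines()
        for lineno, line in enumerate(lines, 1):
            for old, new in DEPRECATED.items():
                if line.lstrip().startswith(old):
                    print(f"{unit}:{lineno}: uses {old[:-1]}=; prefer {new[:-1]}=")

if __name__ == "__main__":
    scan()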
Jul 2 07:53:51.737000 audit: BPF prog-id=30 op=LOAD Jul 2 07:53:51.737000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:53:51.737000 audit: BPF prog-id=31 op=LOAD Jul 2 07:53:51.737000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:53:51.738000 audit: BPF prog-id=32 op=LOAD Jul 2 07:53:51.738000 audit: BPF prog-id=33 op=LOAD Jul 2 07:53:51.738000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:53:51.738000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:53:51.738000 audit: BPF prog-id=34 op=LOAD Jul 2 07:53:51.738000 audit: BPF prog-id=35 op=LOAD Jul 2 07:53:51.738000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:53:51.738000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:53:51.740000 audit: BPF prog-id=36 op=LOAD Jul 2 07:53:51.740000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:53:51.740000 audit: BPF prog-id=37 op=LOAD Jul 2 07:53:51.740000 audit: BPF prog-id=38 op=LOAD Jul 2 07:53:51.740000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:53:51.740000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:53:51.759064 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.759345 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.760843 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:51.766368 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:51.772775 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:51.780483 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.780747 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:51.780990 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.782238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:51.782443 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:51.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.786284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:51.786472 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:51.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.790376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:51.790539 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:51.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:51.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.799827 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.800138 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.802268 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:51.809745 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:51.819439 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:51.824196 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.824523 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:51.824747 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.826811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:51.827118 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:51.837028 systemd-fsck[1254]: fsck.fat 4.2 (2021-01-31) Jul 2 07:53:51.837028 systemd-fsck[1254]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:53:51.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.838597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:51.839061 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:51.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.842885 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:51.843326 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:51.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.852584 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.853125 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.856874 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 07:53:51.860895 systemd[1]: Starting modprobe@drm.service... Jul 2 07:53:51.865918 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:51.869708 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:51.871790 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.872035 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:51.872292 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:51.874024 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:53:51.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.877773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:51.877949 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.881147 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:53:51.881308 systemd[1]: Finished modprobe@drm.service. Jul 2 07:53:51.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.884093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:51.884262 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:51.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.887263 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:51.887425 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:51.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:51.893363 systemd[1]: Finished ensure-sysext.service. Jul 2 07:53:51.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:51.898067 systemd[1]: Mounting boot.mount... Jul 2 07:53:51.900040 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:51.900111 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:51.908200 systemd[1]: Mounted boot.mount. Jul 2 07:53:51.923455 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:53:51.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.110562 systemd-networkd[1183]: eth0: Gained IPv6LL Jul 2 07:53:52.114576 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:53:52.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.120246 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:53:52.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.124324 systemd[1]: Starting audit-rules.service... Jul 2 07:53:52.127813 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:53:52.131947 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:53:52.133000 audit: BPF prog-id=39 op=LOAD Jul 2 07:53:52.138488 systemd[1]: Starting systemd-resolved.service... Jul 2 07:53:52.139000 audit: BPF prog-id=40 op=LOAD Jul 2 07:53:52.142247 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:53:52.145835 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:53:52.155000 audit[1369]: SYSTEM_BOOT pid=1369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.159441 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:53:52.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.187093 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:53:52.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.189887 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:53:52.265546 systemd[1]: Started systemd-timesyncd.service. 
Jul 2 07:53:52.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.268219 systemd[1]: Reached target time-set.target. Jul 2 07:53:52.317964 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:53:52.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.342594 systemd-resolved[1367]: Positive Trust Anchors: Jul 2 07:53:52.342625 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:53:52.342676 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:53:52.413048 systemd-resolved[1367]: Using system hostname 'ci-3510.3.5-a-37a211789c'. Jul 2 07:53:52.415317 systemd[1]: Started systemd-resolved.service. Jul 2 07:53:52.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:52.418333 systemd[1]: Reached target network.target. Jul 2 07:53:52.423592 systemd[1]: Reached target network-online.target. Jul 2 07:53:52.426392 systemd[1]: Reached target nss-lookup.target. Jul 2 07:53:52.445000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:53:52.445000 audit[1384]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4bd63630 a2=420 a3=0 items=0 ppid=1363 pid=1384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:52.445000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:53:52.447250 augenrules[1384]: No rules Jul 2 07:53:52.447941 systemd[1]: Finished audit-rules.service. Jul 2 07:53:52.508430 systemd-timesyncd[1368]: Contacted time server 193.1.8.106:123 (0.flatcar.pool.ntp.org). Jul 2 07:53:52.508515 systemd-timesyncd[1368]: Initial clock synchronization to Tue 2024-07-02 07:53:52.508077 UTC. Jul 2 07:53:58.533295 ldconfig[1246]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:53:58.545011 systemd[1]: Finished ldconfig.service. Jul 2 07:53:58.550102 systemd[1]: Starting systemd-update-done.service... Jul 2 07:53:58.561334 systemd[1]: Finished systemd-update-done.service. Jul 2 07:53:58.564443 systemd[1]: Reached target sysinit.target. Jul 2 07:53:58.567141 systemd[1]: Started motdgen.path. Jul 2 07:53:58.569475 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:53:58.573068 systemd[1]: Started logrotate.timer. 
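[Editor's aside] systemd-timesyncd reports contacting 193.1.8.106:123 (0.flatcar.pool.ntp.org) and performing the initial clock synchronization. For illustration only, a minimal SNTP query in Python that performs the same kind of exchange, reading the transmit timestamp from the 48-byte response; the server name is taken from the log, and this is a sketch of the protocol, not how timesyncd is implemented:

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def sntp_time(server: str = "0.flatcar.pool.ntp.org", port: int = 123, timeout: float = 5.0) -> float:
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    seconds, fraction = struct.unpack("!II", data[40:48])  # transmit timestamp
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

if __name__ == "__main__":
    print(time.ctime(sntp_time()))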
Jul 2 07:53:58.575382 systemd[1]: Started mdadm.timer. Jul 2 07:53:58.577528 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:53:58.580198 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:53:58.580237 systemd[1]: Reached target paths.target. Jul 2 07:53:58.583101 systemd[1]: Reached target timers.target. Jul 2 07:53:58.586079 systemd[1]: Listening on dbus.socket. Jul 2 07:53:58.589531 systemd[1]: Starting docker.socket... Jul 2 07:53:58.609787 systemd[1]: Listening on sshd.socket. Jul 2 07:53:58.615851 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:58.616648 systemd[1]: Listening on docker.socket. Jul 2 07:53:58.619017 systemd[1]: Reached target sockets.target. Jul 2 07:53:58.621120 systemd[1]: Reached target basic.target. Jul 2 07:53:58.623354 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:53:58.623393 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:53:58.624817 systemd[1]: Starting containerd.service... Jul 2 07:53:58.629005 systemd[1]: Starting dbus.service... Jul 2 07:53:58.632294 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:53:58.635871 systemd[1]: Starting extend-filesystems.service... Jul 2 07:53:58.638101 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:53:58.655007 systemd[1]: Starting kubelet.service... Jul 2 07:53:58.658381 systemd[1]: Starting motdgen.service... Jul 2 07:53:58.661880 systemd[1]: Started nvidia.service. Jul 2 07:53:58.665328 systemd[1]: Starting prepare-helm.service... Jul 2 07:53:58.668528 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:53:58.671817 systemd[1]: Starting sshd-keygen.service... Jul 2 07:53:58.676927 systemd[1]: Starting systemd-logind.service... Jul 2 07:53:58.687437 jq[1394]: false Jul 2 07:53:58.679205 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:58.679310 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:53:58.679885 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:53:58.680886 systemd[1]: Starting update-engine.service... Jul 2 07:53:58.687220 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:53:58.693681 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:53:58.693986 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:53:58.704906 jq[1406]: true Jul 2 07:53:58.721052 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:53:58.721315 systemd[1]: Finished ssh-key-proc-cmdline.service. 
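[Editor's aside] docker.socket and sshd.socket above are socket units: systemd binds the listening sockets itself and hands them to the service when it is activated. A minimal sketch of the receiving side of that convention in Python (real daemons normally call sd_listen_fds() from libsystemd; this only mirrors the documented LISTEN_PID/LISTEN_FDS environment protocol):

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor systemd passes to the activated service

def inherited_sockets() -> list:
    """Return sockets handed over by systemd socket activation, if any."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []  # not socket-activated, or the fds were meant for another process
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

if __name__ == "__main__":
    for sock in inherited_sockets():
        print("inherited", sock.getsockname())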
Jul 2 07:53:58.754724 jq[1412]: true Jul 2 07:53:58.765090 extend-filesystems[1395]: Found loop1 Jul 2 07:53:58.768322 extend-filesystems[1395]: Found sda Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda1 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda2 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda3 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found usr Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda4 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda6 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda7 Jul 2 07:53:58.770675 extend-filesystems[1395]: Found sda9 Jul 2 07:53:58.770675 extend-filesystems[1395]: Checking size of /dev/sda9 Jul 2 07:53:58.807805 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:53:58.808010 systemd[1]: Finished motdgen.service. Jul 2 07:53:58.815839 systemd-logind[1404]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:53:58.827331 systemd-logind[1404]: New seat seat0. Jul 2 07:53:58.853477 extend-filesystems[1395]: Old size kept for /dev/sda9 Jul 2 07:53:58.858188 extend-filesystems[1395]: Found sr0 Jul 2 07:53:58.854118 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:53:58.854311 systemd[1]: Finished extend-filesystems.service. Jul 2 07:53:58.884434 env[1434]: time="2024-07-02T07:53:58.882790486Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:53:58.899795 tar[1410]: linux-amd64/helm Jul 2 07:53:58.924159 dbus-daemon[1393]: [system] SELinux support is enabled Jul 2 07:53:58.924398 systemd[1]: Started dbus.service. Jul 2 07:53:58.929698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:53:58.929732 systemd[1]: Reached target system-config.target. Jul 2 07:53:58.932439 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:53:58.932466 systemd[1]: Reached target user-config.target. Jul 2 07:53:58.938002 dbus-daemon[1393]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:53:58.938211 systemd[1]: Started systemd-logind.service. Jul 2 07:53:58.958827 bash[1432]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:53:58.959788 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:53:59.001409 env[1434]: time="2024-07-02T07:53:59.001340026Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:53:59.001619 env[1434]: time="2024-07-02T07:53:59.001588424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.005939 env[1434]: time="2024-07-02T07:53:59.005891184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:59.005939 env[1434]: time="2024-07-02T07:53:59.005933884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006253 env[1434]: time="2024-07-02T07:53:59.006223481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006307 env[1434]: time="2024-07-02T07:53:59.006254381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006307 env[1434]: time="2024-07-02T07:53:59.006273281Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:53:59.006307 env[1434]: time="2024-07-02T07:53:59.006291480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006446 env[1434]: time="2024-07-02T07:53:59.006399679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006733 env[1434]: time="2024-07-02T07:53:59.006708077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:59.006960 env[1434]: time="2024-07-02T07:53:59.006932375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:59.007016 env[1434]: time="2024-07-02T07:53:59.006962074Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:53:59.007114 env[1434]: time="2024-07-02T07:53:59.007030774Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:53:59.007114 env[1434]: time="2024-07-02T07:53:59.007046774Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:53:59.031516 env[1434]: time="2024-07-02T07:53:59.031461850Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031529049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031547349Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031596248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031616148Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031632348Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031648348Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031684 env[1434]: time="2024-07-02T07:53:59.031665148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031921 env[1434]: time="2024-07-02T07:53:59.031685447Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 2 07:53:59.031921 env[1434]: time="2024-07-02T07:53:59.031705147Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031921 env[1434]: time="2024-07-02T07:53:59.031723747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.031921 env[1434]: time="2024-07-02T07:53:59.031742347Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:53:59.031921 env[1434]: time="2024-07-02T07:53:59.031897346Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:53:59.032094 env[1434]: time="2024-07-02T07:53:59.032002345Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:53:59.032447 env[1434]: time="2024-07-02T07:53:59.032349141Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:53:59.032532 env[1434]: time="2024-07-02T07:53:59.032478740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.032532 env[1434]: time="2024-07-02T07:53:59.032506240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.032571239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036096807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036124807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036144507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036163006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036182506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036200706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036219106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036243906Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036435404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036458504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036478703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036497603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:53:59.037969 env[1434]: time="2024-07-02T07:53:59.036522703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:53:59.041893 env[1434]: time="2024-07-02T07:53:59.036542603Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:53:59.041893 env[1434]: time="2024-07-02T07:53:59.036569003Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:53:59.041893 env[1434]: time="2024-07-02T07:53:59.036613402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:53:59.038328 systemd[1]: Started containerd.service. Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.036891600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.036972099Z" level=info msg="Connect containerd service" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.037033198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.037778392Z" level=error msg="failed to load cni during 
init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.038118988Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.038177188Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.038246887Z" level=info msg="containerd successfully booted in 0.158221s" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.040930763Z" level=info msg="Start subscribing containerd event" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.040991662Z" level=info msg="Start recovering state" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.041069361Z" level=info msg="Start event monitor" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.041083361Z" level=info msg="Start snapshots syncer" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.041096461Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:53:59.042096 env[1434]: time="2024-07-02T07:53:59.041106361Z" level=info msg="Start streaming server" Jul 2 07:53:59.146885 systemd[1]: nvidia.service: Deactivated successfully. Jul 2 07:53:59.706906 tar[1410]: linux-amd64/LICENSE Jul 2 07:53:59.706906 tar[1410]: linux-amd64/README.md Jul 2 07:53:59.715490 systemd[1]: Finished prepare-helm.service. Jul 2 07:53:59.722369 update_engine[1405]: I0702 07:53:59.721284 1405 main.cc:92] Flatcar Update Engine starting Jul 2 07:53:59.773259 systemd[1]: Started update-engine.service. Jul 2 07:53:59.778533 systemd[1]: Started locksmithd.service. Jul 2 07:53:59.781639 update_engine[1405]: I0702 07:53:59.781505 1405 update_check_scheduler.cc:74] Next update check in 10m41s Jul 2 07:54:00.140982 systemd[1]: Started kubelet.service. Jul 2 07:54:00.954812 kubelet[1495]: E0702 07:54:00.954718 1495 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:00.957206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:00.957373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:00.957706 systemd[1]: kubelet.service: Consumed 1.163s CPU time. Jul 2 07:54:01.350323 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:54:01.919578 sshd_keygen[1416]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:54:01.941026 systemd[1]: Finished sshd-keygen.service. Jul 2 07:54:01.945445 systemd[1]: Starting issuegen.service... Jul 2 07:54:01.949242 systemd[1]: Started waagent.service. Jul 2 07:54:01.956626 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:54:01.956788 systemd[1]: Finished issuegen.service. Jul 2 07:54:01.960780 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:54:01.968571 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:54:01.973028 systemd[1]: Started getty@tty1.service. Jul 2 07:54:01.976822 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:54:01.979145 systemd[1]: Reached target getty.target. 
Jul 2 07:54:01.981651 systemd[1]: Reached target multi-user.target. Jul 2 07:54:01.985646 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:54:01.994926 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:54:01.995133 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:54:01.998235 systemd[1]: Startup finished in 1.041s (firmware) + 30.142s (loader) + 1.097s (kernel) + 15.042s (initrd) + 26.941s (userspace) = 1min 14.264s. Jul 2 07:54:02.345243 login[1518]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 07:54:02.346863 login[1519]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 07:54:02.374776 systemd[1]: Created slice user-500.slice. Jul 2 07:54:02.376541 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:54:02.380754 systemd-logind[1404]: New session 1 of user core. Jul 2 07:54:02.385795 systemd-logind[1404]: New session 2 of user core. Jul 2 07:54:02.390974 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:54:02.393093 systemd[1]: Starting user@500.service... Jul 2 07:54:02.413900 (systemd)[1522]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:02.547708 systemd[1522]: Queued start job for default target default.target. Jul 2 07:54:02.548370 systemd[1522]: Reached target paths.target. Jul 2 07:54:02.548402 systemd[1522]: Reached target sockets.target. Jul 2 07:54:02.548440 systemd[1522]: Reached target timers.target. Jul 2 07:54:02.548456 systemd[1522]: Reached target basic.target. Jul 2 07:54:02.548598 systemd[1]: Started user@500.service. Jul 2 07:54:02.550025 systemd[1]: Started session-1.scope. Jul 2 07:54:02.550892 systemd[1]: Started session-2.scope. Jul 2 07:54:02.551863 systemd[1522]: Reached target default.target. Jul 2 07:54:02.552056 systemd[1522]: Startup finished in 126ms. Jul 2 07:54:09.021729 waagent[1513]: 2024-07-02T07:54:09.021586Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Jul 2 07:54:09.040070 waagent[1513]: 2024-07-02T07:54:09.024842Z INFO Daemon Daemon OS: flatcar 3510.3.5 Jul 2 07:54:09.040070 waagent[1513]: 2024-07-02T07:54:09.026239Z INFO Daemon Daemon Python: 3.9.16 Jul 2 07:54:09.040070 waagent[1513]: 2024-07-02T07:54:09.027677Z INFO Daemon Daemon Run daemon Jul 2 07:54:09.040070 waagent[1513]: 2024-07-02T07:54:09.029488Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5' Jul 2 07:54:09.046820 waagent[1513]: 2024-07-02T07:54:09.046612Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Jul 2 07:54:09.064374 waagent[1513]: 2024-07-02T07:54:09.064198Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.066490Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.067444Z INFO Daemon Daemon Using waagent for provisioning Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.069382Z INFO Daemon Daemon Activate resource disk Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.070985Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.086157Z INFO Daemon Daemon Found device: None Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.086868Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.087705Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.089593Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 07:54:09.102097 waagent[1513]: 2024-07-02T07:54:09.090442Z INFO Daemon Daemon Running default provisioning handler Jul 2 07:54:09.118845 waagent[1513]: 2024-07-02T07:54:09.103515Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 2 07:54:09.118845 waagent[1513]: 2024-07-02T07:54:09.106787Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 07:54:09.118845 waagent[1513]: 2024-07-02T07:54:09.108077Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 07:54:09.118845 waagent[1513]: 2024-07-02T07:54:09.108899Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 07:54:09.426758 waagent[1513]: 2024-07-02T07:54:09.426475Z INFO Daemon Daemon Successfully mounted dvd Jul 2 07:54:09.529930 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 07:54:09.551397 waagent[1513]: 2024-07-02T07:54:09.551240Z INFO Daemon Daemon Detect protocol endpoint Jul 2 07:54:09.567360 waagent[1513]: 2024-07-02T07:54:09.552216Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 07:54:09.567360 waagent[1513]: 2024-07-02T07:54:09.553923Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 2 07:54:09.567360 waagent[1513]: 2024-07-02T07:54:09.554731Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 07:54:09.567360 waagent[1513]: 2024-07-02T07:54:09.555934Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 07:54:09.567360 waagent[1513]: 2024-07-02T07:54:09.556734Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 07:54:09.686853 waagent[1513]: 2024-07-02T07:54:09.686653Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 07:54:09.692560 waagent[1513]: 2024-07-02T07:54:09.687741Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 07:54:09.692560 waagent[1513]: 2024-07-02T07:54:09.688655Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 07:54:10.108048 waagent[1513]: 2024-07-02T07:54:10.107878Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 07:54:10.119737 waagent[1513]: 2024-07-02T07:54:10.119650Z INFO Daemon Daemon Forcing an update of the goal state.. Jul 2 07:54:10.123266 waagent[1513]: 2024-07-02T07:54:10.123193Z INFO Daemon Daemon Fetching goal state [incarnation 1] Jul 2 07:54:10.211728 waagent[1513]: 2024-07-02T07:54:10.211566Z INFO Daemon Daemon Found private key matching thumbprint A010F98FD1A4A905DCB9B11EEFF610C4D1AC15DE Jul 2 07:54:10.222344 waagent[1513]: 2024-07-02T07:54:10.212126Z INFO Daemon Daemon Certificate with thumbprint 4117A871B54293C9B432ADFDBF555624F5AE4426 has no matching private key. Jul 2 07:54:10.222344 waagent[1513]: 2024-07-02T07:54:10.213115Z INFO Daemon Daemon Fetch goal state completed Jul 2 07:54:10.259560 waagent[1513]: 2024-07-02T07:54:10.259473Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: ac866605-a467-458b-a39e-4c47faf97fb7 New eTag: 9339972215838988524] Jul 2 07:54:10.265564 waagent[1513]: 2024-07-02T07:54:10.265482Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 07:54:10.278922 waagent[1513]: 2024-07-02T07:54:10.278846Z INFO Daemon Daemon Starting provisioning Jul 2 07:54:10.281762 waagent[1513]: 2024-07-02T07:54:10.281682Z INFO Daemon Daemon Handle ovf-env.xml. Jul 2 07:54:10.284922 waagent[1513]: 2024-07-02T07:54:10.284857Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-37a211789c] Jul 2 07:54:10.306479 waagent[1513]: 2024-07-02T07:54:10.306291Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-37a211789c] Jul 2 07:54:10.314797 waagent[1513]: 2024-07-02T07:54:10.307524Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 07:54:10.314797 waagent[1513]: 2024-07-02T07:54:10.308591Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 07:54:10.324324 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Jul 2 07:54:10.324632 systemd[1]: Stopped systemd-networkd-wait-online.service. Jul 2 07:54:10.324717 systemd[1]: Stopping systemd-networkd-wait-online.service... Jul 2 07:54:10.325107 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:54:10.331476 systemd-networkd[1183]: eth0: DHCPv6 lease lost Jul 2 07:54:10.333128 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:54:10.333361 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:54:10.336218 systemd[1]: Starting systemd-networkd.service... 
Jul 2 07:54:10.369325 systemd-networkd[1566]: enP24345s1: Link UP Jul 2 07:54:10.369338 systemd-networkd[1566]: enP24345s1: Gained carrier Jul 2 07:54:10.370755 systemd-networkd[1566]: eth0: Link UP Jul 2 07:54:10.370765 systemd-networkd[1566]: eth0: Gained carrier Jul 2 07:54:10.371209 systemd-networkd[1566]: lo: Link UP Jul 2 07:54:10.371218 systemd-networkd[1566]: lo: Gained carrier Jul 2 07:54:10.371560 systemd-networkd[1566]: eth0: Gained IPv6LL Jul 2 07:54:10.371861 systemd-networkd[1566]: Enumeration completed Jul 2 07:54:10.371998 systemd[1]: Started systemd-networkd.service. Jul 2 07:54:10.374266 waagent[1513]: 2024-07-02T07:54:10.373787Z INFO Daemon Daemon Create user account if not exists Jul 2 07:54:10.377125 waagent[1513]: 2024-07-02T07:54:10.375213Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 07:54:10.377125 waagent[1513]: 2024-07-02T07:54:10.376049Z INFO Daemon Daemon Configure sudoer Jul 2 07:54:10.377520 waagent[1513]: 2024-07-02T07:54:10.377461Z INFO Daemon Daemon Configure sshd Jul 2 07:54:10.378393 waagent[1513]: 2024-07-02T07:54:10.378337Z INFO Daemon Daemon Deploy ssh public key. Jul 2 07:54:10.384442 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:54:10.385662 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 07:54:10.422547 systemd-networkd[1566]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:54:10.426148 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:54:10.968168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:54:10.968553 systemd[1]: Stopped kubelet.service. Jul 2 07:54:10.968629 systemd[1]: kubelet.service: Consumed 1.163s CPU time. Jul 2 07:54:10.970993 systemd[1]: Starting kubelet.service... Jul 2 07:54:11.078653 systemd[1]: Started kubelet.service. Jul 2 07:54:11.665181 kubelet[1579]: E0702 07:54:11.665112 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:11.668994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:11.669182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:11.768441 waagent[1513]: 2024-07-02T07:54:11.768311Z INFO Daemon Daemon Provisioning complete Jul 2 07:54:11.783607 waagent[1513]: 2024-07-02T07:54:11.783472Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 07:54:11.791027 waagent[1513]: 2024-07-02T07:54:11.784082Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 2 07:54:11.791027 waagent[1513]: 2024-07-02T07:54:11.786015Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Jul 2 07:54:12.115144 waagent[1585]: 2024-07-02T07:54:12.115007Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Jul 2 07:54:12.116092 waagent[1585]: 2024-07-02T07:54:12.116014Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:12.116272 waagent[1585]: 2024-07-02T07:54:12.116216Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:12.128885 waagent[1585]: 2024-07-02T07:54:12.128786Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Jul 2 07:54:12.129148 waagent[1585]: 2024-07-02T07:54:12.129078Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Jul 2 07:54:12.208555 waagent[1585]: 2024-07-02T07:54:12.208374Z INFO ExtHandler ExtHandler Found private key matching thumbprint A010F98FD1A4A905DCB9B11EEFF610C4D1AC15DE Jul 2 07:54:12.208832 waagent[1585]: 2024-07-02T07:54:12.208769Z INFO ExtHandler ExtHandler Certificate with thumbprint 4117A871B54293C9B432ADFDBF555624F5AE4426 has no matching private key. Jul 2 07:54:12.209087 waagent[1585]: 2024-07-02T07:54:12.209033Z INFO ExtHandler ExtHandler Fetch goal state completed Jul 2 07:54:12.227045 waagent[1585]: 2024-07-02T07:54:12.226962Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f4992156-0f41-43e6-be41-7f8794c354e0 New eTag: 9339972215838988524] Jul 2 07:54:12.227688 waagent[1585]: 2024-07-02T07:54:12.227619Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 07:54:12.384322 waagent[1585]: 2024-07-02T07:54:12.384019Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 07:54:12.419929 waagent[1585]: 2024-07-02T07:54:12.419806Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1585 Jul 2 07:54:12.424221 waagent[1585]: 2024-07-02T07:54:12.424129Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 07:54:12.425694 waagent[1585]: 2024-07-02T07:54:12.425594Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 07:54:12.570563 waagent[1585]: 2024-07-02T07:54:12.570373Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 07:54:12.571144 waagent[1585]: 2024-07-02T07:54:12.571068Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 07:54:12.581217 waagent[1585]: 2024-07-02T07:54:12.581146Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 07:54:12.581878 waagent[1585]: 2024-07-02T07:54:12.581788Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 07:54:12.583181 waagent[1585]: 2024-07-02T07:54:12.583109Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Jul 2 07:54:12.584638 waagent[1585]: 2024-07-02T07:54:12.584578Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jul 2 07:54:12.585316 waagent[1585]: 2024-07-02T07:54:12.585261Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:12.585802 waagent[1585]: 2024-07-02T07:54:12.585728Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 07:54:12.586239 waagent[1585]: 2024-07-02T07:54:12.586181Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:12.586642 waagent[1585]: 2024-07-02T07:54:12.586580Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 07:54:12.586916 waagent[1585]: 2024-07-02T07:54:12.586858Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:12.587040 waagent[1585]: 2024-07-02T07:54:12.586969Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 07:54:12.588112 waagent[1585]: 2024-07-02T07:54:12.588041Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 07:54:12.588718 waagent[1585]: 2024-07-02T07:54:12.588630Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 07:54:12.589124 waagent[1585]: 2024-07-02T07:54:12.589070Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:12.589436 waagent[1585]: 2024-07-02T07:54:12.589376Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 07:54:12.589656 waagent[1585]: 2024-07-02T07:54:12.589602Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 07:54:12.591191 waagent[1585]: 2024-07-02T07:54:12.591129Z INFO EnvHandler ExtHandler Configure routes Jul 2 07:54:12.591347 waagent[1585]: 2024-07-02T07:54:12.591301Z INFO EnvHandler ExtHandler Gateway:None Jul 2 07:54:12.591654 waagent[1585]: 2024-07-02T07:54:12.591597Z INFO EnvHandler ExtHandler Routes:None Jul 2 07:54:12.592763 waagent[1585]: 2024-07-02T07:54:12.592664Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 07:54:12.592763 waagent[1585]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 07:54:12.592763 waagent[1585]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 07:54:12.592763 waagent[1585]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 07:54:12.592763 waagent[1585]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:12.592763 waagent[1585]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:12.592763 waagent[1585]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:12.618084 waagent[1585]: 2024-07-02T07:54:12.618016Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 2 07:54:12.618899 waagent[1585]: 2024-07-02T07:54:12.618841Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 07:54:12.620266 waagent[1585]: 2024-07-02T07:54:12.620191Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Jul 2 07:54:12.651100 waagent[1585]: 2024-07-02T07:54:12.650853Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1566' Jul 2 07:54:12.660092 waagent[1585]: 2024-07-02T07:54:12.660002Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Jul 2 07:54:12.768049 waagent[1585]: 2024-07-02T07:54:12.767897Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 07:54:12.768049 waagent[1585]: Executing ['ip', '-a', '-o', 'link']: Jul 2 07:54:12.768049 waagent[1585]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 07:54:12.768049 waagent[1585]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:7e:0b brd ff:ff:ff:ff:ff:ff Jul 2 07:54:12.768049 waagent[1585]: 3: enP24345s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:7e:0b brd ff:ff:ff:ff:ff:ff\ altname enP24345p0s2 Jul 2 07:54:12.768049 waagent[1585]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 07:54:12.768049 waagent[1585]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 07:54:12.768049 waagent[1585]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 07:54:12.768049 waagent[1585]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 07:54:12.768049 waagent[1585]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 07:54:12.768049 waagent[1585]: 2: eth0 inet6 fe80::222:48ff:fe9e:7e0b/64 scope link \ valid_lft forever preferred_lft forever Jul 2 07:54:12.989718 waagent[1585]: 2024-07-02T07:54:12.989632Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting Jul 2 07:54:13.791070 waagent[1513]: 2024-07-02T07:54:13.790857Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Jul 2 07:54:13.797526 waagent[1513]: 2024-07-02T07:54:13.797433Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent Jul 2 07:54:14.855397 waagent[1624]: 2024-07-02T07:54:14.855272Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4) Jul 2 07:54:14.856162 waagent[1624]: 2024-07-02T07:54:14.856089Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5 Jul 2 07:54:14.856314 waagent[1624]: 2024-07-02T07:54:14.856258Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 2 07:54:14.856482 waagent[1624]: 2024-07-02T07:54:14.856431Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 2 07:54:14.866405 waagent[1624]: 2024-07-02T07:54:14.866304Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 07:54:14.866811 waagent[1624]: 2024-07-02T07:54:14.866750Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:14.866972 waagent[1624]: 2024-07-02T07:54:14.866922Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:14.878750 waagent[1624]: 2024-07-02T07:54:14.878671Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 07:54:14.887320 waagent[1624]: 2024-07-02T07:54:14.887246Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 07:54:14.888280 waagent[1624]: 2024-07-02T07:54:14.888217Z INFO ExtHandler Jul 2 07:54:14.888441 waagent[1624]: 2024-07-02T07:54:14.888375Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 
062180e6-3179-4913-83d3-643a9fc0acab eTag: 9339972215838988524 source: Fabric] Jul 2 07:54:14.889132 waagent[1624]: 2024-07-02T07:54:14.889073Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 2 07:54:14.890238 waagent[1624]: 2024-07-02T07:54:14.890175Z INFO ExtHandler Jul 2 07:54:14.890376 waagent[1624]: 2024-07-02T07:54:14.890324Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 07:54:14.897520 waagent[1624]: 2024-07-02T07:54:14.897467Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 07:54:14.897949 waagent[1624]: 2024-07-02T07:54:14.897898Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 07:54:14.917548 waagent[1624]: 2024-07-02T07:54:14.917482Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Jul 2 07:54:14.984589 waagent[1624]: 2024-07-02T07:54:14.984440Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A010F98FD1A4A905DCB9B11EEFF610C4D1AC15DE', 'hasPrivateKey': True} Jul 2 07:54:14.985649 waagent[1624]: 2024-07-02T07:54:14.985573Z INFO ExtHandler Downloaded certificate {'thumbprint': '4117A871B54293C9B432ADFDBF555624F5AE4426', 'hasPrivateKey': False} Jul 2 07:54:14.986664 waagent[1624]: 2024-07-02T07:54:14.986602Z INFO ExtHandler Fetch goal state completed Jul 2 07:54:15.009484 waagent[1624]: 2024-07-02T07:54:15.009332Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Jul 2 07:54:15.022351 waagent[1624]: 2024-07-02T07:54:15.022226Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1624 Jul 2 07:54:15.025974 waagent[1624]: 2024-07-02T07:54:15.025890Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 07:54:15.027397 waagent[1624]: 2024-07-02T07:54:15.027332Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 07:54:15.032754 waagent[1624]: 2024-07-02T07:54:15.032695Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 07:54:15.033214 waagent[1624]: 2024-07-02T07:54:15.033148Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 07:54:15.043367 waagent[1624]: 2024-07-02T07:54:15.043262Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 07:54:15.043987 waagent[1624]: 2024-07-02T07:54:15.043917Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 07:54:15.051256 waagent[1624]: 2024-07-02T07:54:15.051150Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 07:54:15.052309 waagent[1624]: 2024-07-02T07:54:15.052239Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 07:54:15.054406 waagent[1624]: 2024-07-02T07:54:15.054341Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jul 2 07:54:15.054911 waagent[1624]: 2024-07-02T07:54:15.054851Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:15.055069 waagent[1624]: 2024-07-02T07:54:15.055017Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:15.055708 waagent[1624]: 2024-07-02T07:54:15.055647Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 07:54:15.056221 waagent[1624]: 2024-07-02T07:54:15.056160Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 07:54:15.056923 waagent[1624]: 2024-07-02T07:54:15.056869Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:15.058124 waagent[1624]: 2024-07-02T07:54:15.058064Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 07:54:15.058349 waagent[1624]: 2024-07-02T07:54:15.058279Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 07:54:15.058349 waagent[1624]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 07:54:15.058349 waagent[1624]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 07:54:15.058349 waagent[1624]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 07:54:15.058349 waagent[1624]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:15.058349 waagent[1624]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:15.058349 waagent[1624]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:15.058636 waagent[1624]: 2024-07-02T07:54:15.058584Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 07:54:15.058766 waagent[1624]: 2024-07-02T07:54:15.058695Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:15.059705 waagent[1624]: 2024-07-02T07:54:15.059647Z INFO EnvHandler ExtHandler Configure routes Jul 2 07:54:15.065311 waagent[1624]: 2024-07-02T07:54:15.065102Z INFO EnvHandler ExtHandler Gateway:None Jul 2 07:54:15.065691 waagent[1624]: 2024-07-02T07:54:15.065600Z INFO EnvHandler ExtHandler Routes:None Jul 2 07:54:15.067987 waagent[1624]: 2024-07-02T07:54:15.067923Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 07:54:15.070324 waagent[1624]: 2024-07-02T07:54:15.070222Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 07:54:15.081852 waagent[1624]: 2024-07-02T07:54:15.081789Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 2 07:54:15.098366 waagent[1624]: 2024-07-02T07:54:15.098286Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 07:54:15.098366 waagent[1624]: Executing ['ip', '-a', '-o', 'link']: Jul 2 07:54:15.098366 waagent[1624]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 07:54:15.098366 waagent[1624]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:7e:0b brd ff:ff:ff:ff:ff:ff Jul 2 07:54:15.098366 waagent[1624]: 3: enP24345s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:7e:0b brd ff:ff:ff:ff:ff:ff\ altname enP24345p0s2 Jul 2 07:54:15.098366 waagent[1624]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 07:54:15.098366 waagent[1624]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 07:54:15.098366 waagent[1624]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 07:54:15.098366 waagent[1624]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 07:54:15.098366 waagent[1624]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 07:54:15.098366 waagent[1624]: 2: eth0 inet6 fe80::222:48ff:fe9e:7e0b/64 scope link \ valid_lft forever preferred_lft forever Jul 2 07:54:15.115159 waagent[1624]: 2024-07-02T07:54:15.114996Z INFO ExtHandler ExtHandler Downloading agent manifest Jul 2 07:54:15.133677 waagent[1624]: 2024-07-02T07:54:15.133600Z INFO ExtHandler ExtHandler Jul 2 07:54:15.134013 waagent[1624]: 2024-07-02T07:54:15.133966Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e43488e9-3934-4dd0-aa13-ffa3e9dd7694 correlation 26765501-1214-4a7c-a3b3-6736636af2bd created: 2024-07-02T07:52:35.936707Z] Jul 2 07:54:15.135466 waagent[1624]: 2024-07-02T07:54:15.135369Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 2 07:54:15.137237 waagent[1624]: 2024-07-02T07:54:15.137178Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jul 2 07:54:15.162884 waagent[1624]: 2024-07-02T07:54:15.162809Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Jul 2 07:54:15.175703 waagent[1624]: 2024-07-02T07:54:15.175631Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3757EAA6-7430-47D8-8B68-12F57AD951D1;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Jul 2 07:54:15.250453 waagent[1624]: 2024-07-02T07:54:15.250281Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 2 07:54:15.250453 waagent[1624]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:15.250453 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.250453 waagent[1624]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:15.250453 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.250453 waagent[1624]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes) Jul 2 07:54:15.250453 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.250453 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:54:15.250453 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:54:15.250453 waagent[1624]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 07:54:15.258054 waagent[1624]: 2024-07-02T07:54:15.257932Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 07:54:15.258054 waagent[1624]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:15.258054 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.258054 waagent[1624]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:15.258054 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.258054 waagent[1624]: Chain OUTPUT (policy ACCEPT 5 packets, 453 bytes) Jul 2 07:54:15.258054 waagent[1624]: pkts bytes target prot opt in out source destination Jul 2 07:54:15.258054 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:54:15.258054 waagent[1624]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:54:15.258054 waagent[1624]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 07:54:15.258670 waagent[1624]: 2024-07-02T07:54:15.258611Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 07:54:21.718067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:54:21.718520 systemd[1]: Stopped kubelet.service. Jul 2 07:54:21.720847 systemd[1]: Starting kubelet.service... Jul 2 07:54:21.815077 systemd[1]: Started kubelet.service. Jul 2 07:54:22.331085 kubelet[1676]: E0702 07:54:22.331016 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:22.333514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:22.333687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:32.468055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:54:32.468487 systemd[1]: Stopped kubelet.service. Jul 2 07:54:32.471050 systemd[1]: Starting kubelet.service... Jul 2 07:54:32.567476 systemd[1]: Started kubelet.service. 
Jul 2 07:54:33.079301 kubelet[1689]: E0702 07:54:33.079235 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:33.081643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:33.081811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:38.366038 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 2 07:54:42.606043 systemd[1]: Created slice system-sshd.slice. Jul 2 07:54:42.608146 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.16.10:59560.service. Jul 2 07:54:43.218155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 07:54:43.218640 systemd[1]: Stopped kubelet.service. Jul 2 07:54:43.221054 systemd[1]: Starting kubelet.service... Jul 2 07:54:43.823300 systemd[1]: Started kubelet.service. Jul 2 07:54:43.879477 kubelet[1703]: E0702 07:54:43.879384 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:43.881851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:43.882025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:43.916847 sshd[1697]: Accepted publickey for core from 10.200.16.10 port 59560 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:43.918722 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:43.924762 systemd[1]: Started session-3.scope. Jul 2 07:54:43.925272 systemd-logind[1404]: New session 3 of user core. Jul 2 07:54:44.478992 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.16.10:59568.service. Jul 2 07:54:44.919366 update_engine[1405]: I0702 07:54:44.919013 1405 update_attempter.cc:509] Updating boot flags... Jul 2 07:54:45.126724 sshd[1712]: Accepted publickey for core from 10.200.16.10 port 59568 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:45.128226 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:45.132466 systemd[1]: Started session-4.scope. Jul 2 07:54:45.132960 systemd-logind[1404]: New session 4 of user core. Jul 2 07:54:45.584308 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:45.587804 systemd[1]: sshd@1-10.200.8.10:22-10.200.16.10:59568.service: Deactivated successfully. Jul 2 07:54:45.588772 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:54:45.589519 systemd-logind[1404]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:54:45.590369 systemd-logind[1404]: Removed session 4. Jul 2 07:54:45.694464 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.16.10:59578.service. Jul 2 07:54:46.343670 sshd[1784]: Accepted publickey for core from 10.200.16.10 port 59578 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:46.345501 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:46.351498 systemd-logind[1404]: New session 5 of user core. 
Jul 2 07:54:46.351998 systemd[1]: Started session-5.scope. Jul 2 07:54:46.799046 sshd[1784]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:46.802545 systemd[1]: sshd@2-10.200.8.10:22-10.200.16.10:59578.service: Deactivated successfully. Jul 2 07:54:46.803978 systemd-logind[1404]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:54:46.804092 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:54:46.805597 systemd-logind[1404]: Removed session 5. Jul 2 07:54:46.908217 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.16.10:59586.service. Jul 2 07:54:47.553948 sshd[1790]: Accepted publickey for core from 10.200.16.10 port 59586 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:47.555925 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:47.561784 systemd[1]: Started session-6.scope. Jul 2 07:54:47.562299 systemd-logind[1404]: New session 6 of user core. Jul 2 07:54:48.011174 sshd[1790]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:48.014286 systemd[1]: sshd@3-10.200.8.10:22-10.200.16.10:59586.service: Deactivated successfully. Jul 2 07:54:48.015181 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:54:48.015839 systemd-logind[1404]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:54:48.016772 systemd-logind[1404]: Removed session 6. Jul 2 07:54:48.119757 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.16.10:59596.service. Jul 2 07:54:48.770213 sshd[1796]: Accepted publickey for core from 10.200.16.10 port 59596 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:48.772045 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:48.779134 systemd[1]: Started session-7.scope. Jul 2 07:54:48.780070 systemd-logind[1404]: New session 7 of user core. Jul 2 07:54:49.429340 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:54:49.430153 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:54:49.457582 systemd[1]: Starting docker.service... 
Jul 2 07:54:49.517170 env[1809]: time="2024-07-02T07:54:49.517118995Z" level=info msg="Starting up" Jul 2 07:54:49.518869 env[1809]: time="2024-07-02T07:54:49.518828295Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:54:49.518869 env[1809]: time="2024-07-02T07:54:49.518851795Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:54:49.519040 env[1809]: time="2024-07-02T07:54:49.518882795Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Jul 2 07:54:49.519040 env[1809]: time="2024-07-02T07:54:49.518896795Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:54:49.522717 env[1809]: time="2024-07-02T07:54:49.522696293Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:54:49.522842 env[1809]: time="2024-07-02T07:54:49.522828593Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:54:49.522933 env[1809]: time="2024-07-02T07:54:49.522916393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Jul 2 07:54:49.522999 env[1809]: time="2024-07-02T07:54:49.522989593Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:54:49.529503 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2612452478-merged.mount: Deactivated successfully. Jul 2 07:54:49.588929 env[1809]: time="2024-07-02T07:54:49.588880869Z" level=info msg="Loading containers: start." Jul 2 07:54:49.777674 kernel: Initializing XFRM netlink socket Jul 2 07:54:49.805281 env[1809]: time="2024-07-02T07:54:49.805229191Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:54:49.952738 systemd-networkd[1566]: docker0: Link UP Jul 2 07:54:49.978645 env[1809]: time="2024-07-02T07:54:49.978586527Z" level=info msg="Loading containers: done." Jul 2 07:54:49.993256 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1550912854-merged.mount: Deactivated successfully. Jul 2 07:54:50.012024 env[1809]: time="2024-07-02T07:54:50.011967516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:54:50.012276 env[1809]: time="2024-07-02T07:54:50.012248315Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:54:50.012409 env[1809]: time="2024-07-02T07:54:50.012384815Z" level=info msg="Daemon has completed initialization" Jul 2 07:54:50.046036 systemd[1]: Started docker.service. Jul 2 07:54:50.050828 env[1809]: time="2024-07-02T07:54:50.050774902Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:54:53.968079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 07:54:53.968547 systemd[1]: Stopped kubelet.service. Jul 2 07:54:53.971628 systemd[1]: Starting kubelet.service... Jul 2 07:54:54.106910 systemd[1]: Started kubelet.service. 
Jul 2 07:54:54.706540 kubelet[1934]: E0702 07:54:54.706475 1934 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:54.708999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:54.709175 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:55.670769 env[1434]: time="2024-07-02T07:54:55.670692947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 07:54:56.411874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344362216.mount: Deactivated successfully. Jul 2 07:54:58.573479 env[1434]: time="2024-07-02T07:54:58.573401572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.580385 env[1434]: time="2024-07-02T07:54:58.580340711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.584584 env[1434]: time="2024-07-02T07:54:58.584550755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.589872 env[1434]: time="2024-07-02T07:54:58.589842237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.590523 env[1434]: time="2024-07-02T07:54:58.590491059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 07:54:58.601942 env[1434]: time="2024-07-02T07:54:58.601903251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 07:55:00.713001 env[1434]: time="2024-07-02T07:55:00.712924532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.717985 env[1434]: time="2024-07-02T07:55:00.717941195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.722397 env[1434]: time="2024-07-02T07:55:00.722365039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.725835 env[1434]: time="2024-07-02T07:55:00.725798251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.726496 env[1434]: time="2024-07-02T07:55:00.726464572Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 07:55:00.737671 env[1434]: time="2024-07-02T07:55:00.737638135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 07:55:02.236888 env[1434]: time="2024-07-02T07:55:02.236753439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.241766 env[1434]: time="2024-07-02T07:55:02.241713491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.246334 env[1434]: time="2024-07-02T07:55:02.246296932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.251278 env[1434]: time="2024-07-02T07:55:02.251248484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.251973 env[1434]: time="2024-07-02T07:55:02.251936705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 07:55:02.263400 env[1434]: time="2024-07-02T07:55:02.263354756Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 07:55:03.486117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292941947.mount: Deactivated successfully. Jul 2 07:55:04.052586 env[1434]: time="2024-07-02T07:55:04.052515822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:04.058396 env[1434]: time="2024-07-02T07:55:04.058335391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:04.062024 env[1434]: time="2024-07-02T07:55:04.061980497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:04.065717 env[1434]: time="2024-07-02T07:55:04.065675905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:04.066055 env[1434]: time="2024-07-02T07:55:04.066016014Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 07:55:04.076723 env[1434]: time="2024-07-02T07:55:04.076683525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:55:04.718122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 07:55:04.718488 systemd[1]: Stopped kubelet.service. Jul 2 07:55:04.720582 systemd[1]: Starting kubelet.service... 
Jul 2 07:55:04.996817 systemd[1]: Started kubelet.service. Jul 2 07:55:05.401443 kubelet[1966]: E0702 07:55:05.368793 1966 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:55:05.370728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:55:05.370924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:55:05.480627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665848955.mount: Deactivated successfully. Jul 2 07:55:07.437792 env[1434]: time="2024-07-02T07:55:07.437723148Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.443678 env[1434]: time="2024-07-02T07:55:07.443632306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.448703 env[1434]: time="2024-07-02T07:55:07.448661441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.453198 env[1434]: time="2024-07-02T07:55:07.453165862Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.453978 env[1434]: time="2024-07-02T07:55:07.453940482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:55:07.464751 env[1434]: time="2024-07-02T07:55:07.464705771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:55:07.897280 env[1434]: time="2024-07-02T07:55:07.897146860Z" level=error msg="PullImage \"registry.k8s.io/pause:3.9\" failed" error="failed to pull and unpack image \"registry.k8s.io/pause:3.9\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Jul 2 07:55:07.930189 env[1434]: time="2024-07-02T07:55:07.930133444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:55:08.428071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394507661.mount: Deactivated successfully. 
Jul 2 07:55:08.449790 env[1434]: time="2024-07-02T07:55:08.449724257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:08.461828 env[1434]: time="2024-07-02T07:55:08.461762671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:08.466198 env[1434]: time="2024-07-02T07:55:08.466151786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:08.471096 env[1434]: time="2024-07-02T07:55:08.471056914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:08.471545 env[1434]: time="2024-07-02T07:55:08.471510526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:55:08.482142 env[1434]: time="2024-07-02T07:55:08.482101302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:55:08.976958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185255653.mount: Deactivated successfully. Jul 2 07:55:14.114399 env[1434]: time="2024-07-02T07:55:14.114308523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:14.120813 env[1434]: time="2024-07-02T07:55:14.120758467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:14.125023 env[1434]: time="2024-07-02T07:55:14.124969961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:14.132904 env[1434]: time="2024-07-02T07:55:14.132860136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:14.133947 env[1434]: time="2024-07-02T07:55:14.133868359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:55:15.468077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 2 07:55:15.468361 systemd[1]: Stopped kubelet.service. Jul 2 07:55:15.470713 systemd[1]: Starting kubelet.service... Jul 2 07:55:15.651790 systemd[1]: Started kubelet.service. 
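
The PullImage / "returns image reference" pairs above make it easy to measure how long each pull took; for etcd:3.5.10-0 the gap is about 5.65 seconds. A small sketch of that arithmetic, with the two timestamps copied verbatim from the entries above (nanoseconds trimmed so the stdlib parser accepts them):

    from datetime import datetime, timezone

    def parse_ts(ts: str) -> datetime:
        # containerd prints nanosecond timestamps (e.g. 2024-07-02T07:55:08.482101302Z);
        # trim the fraction to microseconds for strptime's %f
        base, frac = ts.rstrip("Z").split(".")
        return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

    pull_started  = parse_ts("2024-07-02T07:55:08.482101302Z")  # PullImage "registry.k8s.io/etcd:3.5.10-0"
    pull_finished = parse_ts("2024-07-02T07:55:14.133868359Z")  # returns image reference sha256:a0eed1...
    print(f"etcd:3.5.10-0 pull took {(pull_finished - pull_started).total_seconds():.2f}s")  # ~5.65s
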
Jul 2 07:55:16.081191 kubelet[2052]: E0702 07:55:16.081127 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:55:16.084161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:55:16.084323 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:55:16.673981 systemd[1]: Stopped kubelet.service. Jul 2 07:55:16.676788 systemd[1]: Starting kubelet.service... Jul 2 07:55:16.702361 systemd[1]: Reloading. Jul 2 07:55:16.777205 /usr/lib/systemd/system-generators/torcx-generator[2084]: time="2024-07-02T07:55:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:55:16.777751 /usr/lib/systemd/system-generators/torcx-generator[2084]: time="2024-07-02T07:55:16Z" level=info msg="torcx already run" Jul 2 07:55:16.907167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:55:16.907195 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:55:16.926309 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:55:17.038879 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 07:55:17.038977 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 07:55:17.039263 systemd[1]: Stopped kubelet.service. Jul 2 07:55:17.041541 systemd[1]: Starting kubelet.service... Jul 2 07:55:17.271224 systemd[1]: Started kubelet.service. Jul 2 07:55:17.318685 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:17.319098 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:55:17.319144 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
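
The two failed starts above (restart counters 6 and 7) have the same cause: kubelet.service is launched before kubeadm has written /var/lib/kubelet/config.yaml, so the process exits and systemd keeps rescheduling it; the loop ends once the configuration exists, as the successful start at 07:55:17 shows. A hypothetical pre-flight check along those lines, assuming the usual kubeadm file layout (only config.yaml and the client-CA path actually appear in this log; kubelet.conf is an assumed kubeadm default):

    import os

    # Paths kubelet needs before it can start cleanly on a kubeadm-managed node.
    expected = [
        "/var/lib/kubelet/config.yaml",   # missing in the errors above
        "/etc/kubernetes/pki/ca.crt",     # client-ca-bundle referenced once kubelet runs
        "/etc/kubernetes/kubelet.conf",   # assumed kubeadm kubeconfig, for illustration only
    ]
    for path in expected:
        print(("present" if os.path.exists(path) else "missing"), path)
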
Jul 2 07:55:17.900530 kubelet[2150]: I0702 07:55:17.900431 2150 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:55:18.141252 kubelet[2150]: I0702 07:55:18.141204 2150 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:55:18.141252 kubelet[2150]: I0702 07:55:18.141240 2150 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:55:18.141651 kubelet[2150]: I0702 07:55:18.141569 2150 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:55:18.174686 kubelet[2150]: E0702 07:55:18.174286 2150 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.174901 kubelet[2150]: I0702 07:55:18.174302 2150 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:18.184482 kubelet[2150]: I0702 07:55:18.184451 2150 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:55:18.184746 kubelet[2150]: I0702 07:55:18.184722 2150 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:55:18.184953 kubelet[2150]: I0702 07:55:18.184930 2150 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:55:18.185102 kubelet[2150]: I0702 07:55:18.184958 2150 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:55:18.185102 kubelet[2150]: I0702 07:55:18.184974 2150 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:55:18.185197 kubelet[2150]: I0702 07:55:18.185107 2150 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:18.185240 kubelet[2150]: I0702 07:55:18.185229 2150 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:55:18.185280 kubelet[2150]: I0702 
07:55:18.185248 2150 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:55:18.185317 kubelet[2150]: I0702 07:55:18.185283 2150 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:55:18.185317 kubelet[2150]: I0702 07:55:18.185301 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:55:18.191081 kubelet[2150]: W0702 07:55:18.190813 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.191081 kubelet[2150]: E0702 07:55:18.190872 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.191081 kubelet[2150]: W0702 07:55:18.190963 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-37a211789c&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.191081 kubelet[2150]: E0702 07:55:18.191007 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-37a211789c&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.191370 kubelet[2150]: I0702 07:55:18.191112 2150 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:55:18.195739 kubelet[2150]: I0702 07:55:18.195710 2150 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:55:18.197260 kubelet[2150]: W0702 07:55:18.197235 2150 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
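
The nodeConfig structure logged above by container_manager_linux.go is plain JSON, and the hard-eviction thresholds it carries are easier to read when pulled out. A small sketch that parses an abridged copy of that array (values copied verbatim from the entry above):

    import json

    # Abridged from the HardEvictionThresholds array in the nodeConfig=... entry above.
    thresholds = json.loads("""
    [{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]
    """)
    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
        print(f"evict when {t['Signal']} < {limit}")
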
Jul 2 07:55:18.201774 kubelet[2150]: I0702 07:55:18.201756 2150 server.go:1256] "Started kubelet" Jul 2 07:55:18.218452 kubelet[2150]: E0702 07:55:18.215860 2150 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-a-37a211789c.17de563aca269519 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-a-37a211789c,UID:ci-3510.3.5-a-37a211789c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-37a211789c,},FirstTimestamp:2024-07-02 07:55:18.201726233 +0000 UTC m=+0.923939009,LastTimestamp:2024-07-02 07:55:18.201726233 +0000 UTC m=+0.923939009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-37a211789c,}" Jul 2 07:55:18.218452 kubelet[2150]: I0702 07:55:18.216128 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:55:18.218452 kubelet[2150]: I0702 07:55:18.216360 2150 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:55:18.218452 kubelet[2150]: I0702 07:55:18.216401 2150 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:55:18.218452 kubelet[2150]: I0702 07:55:18.217063 2150 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:55:18.219548 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:55:18.220164 kubelet[2150]: I0702 07:55:18.220139 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:55:18.222829 kubelet[2150]: E0702 07:55:18.222810 2150 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-37a211789c\" not found" Jul 2 07:55:18.222999 kubelet[2150]: I0702 07:55:18.222986 2150 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:55:18.223212 kubelet[2150]: I0702 07:55:18.223198 2150 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:55:18.223406 kubelet[2150]: I0702 07:55:18.223375 2150 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:55:18.223994 kubelet[2150]: W0702 07:55:18.223947 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.224116 kubelet[2150]: E0702 07:55:18.224103 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.224364 kubelet[2150]: E0702 07:55:18.224346 2150 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:55:18.224985 kubelet[2150]: E0702 07:55:18.224967 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-37a211789c?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms" Jul 2 07:55:18.225801 kubelet[2150]: I0702 07:55:18.225787 2150 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:55:18.226012 kubelet[2150]: I0702 07:55:18.225993 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:55:18.227375 kubelet[2150]: I0702 07:55:18.227360 2150 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:55:18.334916 kubelet[2150]: I0702 07:55:18.334868 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:55:18.337210 kubelet[2150]: I0702 07:55:18.337181 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:55:18.337361 kubelet[2150]: I0702 07:55:18.337236 2150 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:55:18.337361 kubelet[2150]: I0702 07:55:18.337264 2150 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:55:18.338683 kubelet[2150]: W0702 07:55:18.338467 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.338683 kubelet[2150]: E0702 07:55:18.338532 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:18.338683 kubelet[2150]: E0702 07:55:18.338606 2150 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:55:18.416467 kubelet[2150]: I0702 07:55:18.416342 2150 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.417200 kubelet[2150]: E0702 07:55:18.417173 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.417647 kubelet[2150]: I0702 07:55:18.417627 2150 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:55:18.417792 kubelet[2150]: I0702 07:55:18.417769 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:55:18.417875 kubelet[2150]: I0702 07:55:18.417799 2150 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:18.422818 kubelet[2150]: I0702 07:55:18.422789 2150 policy_none.go:49] "None policy: Start" Jul 2 07:55:18.423661 kubelet[2150]: I0702 07:55:18.423637 2150 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:55:18.423768 kubelet[2150]: I0702 07:55:18.423702 2150 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:55:18.427063 
kubelet[2150]: E0702 07:55:18.426023 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-37a211789c?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms" Jul 2 07:55:18.434016 systemd[1]: Created slice kubepods.slice. Jul 2 07:55:18.438742 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:55:18.439336 kubelet[2150]: E0702 07:55:18.438941 2150 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:55:18.441850 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 07:55:18.448097 kubelet[2150]: I0702 07:55:18.448064 2150 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:55:18.448323 kubelet[2150]: I0702 07:55:18.448303 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:55:18.451215 kubelet[2150]: E0702 07:55:18.451196 2150 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-37a211789c\" not found" Jul 2 07:55:18.620092 kubelet[2150]: I0702 07:55:18.620058 2150 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.620853 kubelet[2150]: E0702 07:55:18.620828 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.639356 kubelet[2150]: I0702 07:55:18.639307 2150 topology_manager.go:215] "Topology Admit Handler" podUID="dd088e04f1ef4e46f990f5d547dd2f84" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.641538 kubelet[2150]: I0702 07:55:18.641459 2150 topology_manager.go:215] "Topology Admit Handler" podUID="9a1e64ec481256cbe42e7b5bf3f928a1" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.643556 kubelet[2150]: I0702 07:55:18.643528 2150 topology_manager.go:215] "Topology Admit Handler" podUID="09501f7a2b4ecf8d39726bd567fa8675" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.651963 systemd[1]: Created slice kubepods-burstable-poddd088e04f1ef4e46f990f5d547dd2f84.slice. Jul 2 07:55:18.662014 systemd[1]: Created slice kubepods-burstable-pod9a1e64ec481256cbe42e7b5bf3f928a1.slice. Jul 2 07:55:18.665844 systemd[1]: Created slice kubepods-burstable-pod09501f7a2b4ecf8d39726bd567fa8675.slice. 
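
The lease controller's "will retry" interval doubles while the API server remains unreachable: 200ms and 400ms above, then 800ms and 1.6s further down the log. A toy reproduction of that observed sequence (the real backoff policy and its cap are internal to kubelet's node-lease controller):

    # Reproduces the retry intervals observed in this log: 0.2s, 0.4s, 0.8s, 1.6s.
    interval = 0.2
    for attempt in range(1, 5):
        print(f"attempt {attempt}: retry lease creation in {interval:g}s")
        interval *= 2
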
Jul 2 07:55:18.726331 kubelet[2150]: I0702 07:55:18.726285 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726331 kubelet[2150]: I0702 07:55:18.726349 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726621 kubelet[2150]: I0702 07:55:18.726386 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09501f7a2b4ecf8d39726bd567fa8675-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-37a211789c\" (UID: \"09501f7a2b4ecf8d39726bd567fa8675\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726621 kubelet[2150]: I0702 07:55:18.726482 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726621 kubelet[2150]: I0702 07:55:18.726519 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726621 kubelet[2150]: I0702 07:55:18.726551 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726621 kubelet[2150]: I0702 07:55:18.726577 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726832 kubelet[2150]: I0702 07:55:18.726604 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.726832 kubelet[2150]: I0702 07:55:18.726633 2150 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:18.827558 kubelet[2150]: E0702 07:55:18.827505 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-37a211789c?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms" Jul 2 07:55:18.963297 env[1434]: time="2024-07-02T07:55:18.963222406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-37a211789c,Uid:dd088e04f1ef4e46f990f5d547dd2f84,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:18.965841 env[1434]: time="2024-07-02T07:55:18.965798658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-37a211789c,Uid:9a1e64ec481256cbe42e7b5bf3f928a1,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:18.969581 env[1434]: time="2024-07-02T07:55:18.969526733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-37a211789c,Uid:09501f7a2b4ecf8d39726bd567fa8675,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:19.023162 kubelet[2150]: I0702 07:55:19.023037 2150 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:19.023881 kubelet[2150]: E0702 07:55:19.023856 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:19.041872 kubelet[2150]: W0702 07:55:19.041816 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.042111 kubelet[2150]: E0702 07:55:19.042098 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.093147 kubelet[2150]: W0702 07:55:19.093089 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.093147 kubelet[2150]: E0702 07:55:19.093149 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.436732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958840067.mount: Deactivated successfully. 
Jul 2 07:55:19.471140 env[1434]: time="2024-07-02T07:55:19.471064855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.474670 env[1434]: time="2024-07-02T07:55:19.474610025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.484860 env[1434]: time="2024-07-02T07:55:19.484768223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.488581 env[1434]: time="2024-07-02T07:55:19.488541697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.492300 env[1434]: time="2024-07-02T07:55:19.492262970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.495905 env[1434]: time="2024-07-02T07:55:19.495866240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.501901 env[1434]: time="2024-07-02T07:55:19.501867558Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.505901 env[1434]: time="2024-07-02T07:55:19.505866336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.513697 env[1434]: time="2024-07-02T07:55:19.513658788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.518437 env[1434]: time="2024-07-02T07:55:19.518389581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.522887 env[1434]: time="2024-07-02T07:55:19.522855868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.527511 env[1434]: time="2024-07-02T07:55:19.527477558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:19.532625 kubelet[2150]: W0702 07:55:19.532593 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.532927 kubelet[2150]: E0702 07:55:19.532637 2150 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.599020 env[1434]: time="2024-07-02T07:55:19.598940456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:19.605186 env[1434]: time="2024-07-02T07:55:19.605135077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:19.605394 env[1434]: time="2024-07-02T07:55:19.605368081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:19.605709 env[1434]: time="2024-07-02T07:55:19.605669087Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23c2c544c019fc5d11a05a932b8004c2967180e01c5f331cf0337b00c02505a3 pid=2190 runtime=io.containerd.runc.v2 Jul 2 07:55:19.615600 env[1434]: time="2024-07-02T07:55:19.615523280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:19.615794 env[1434]: time="2024-07-02T07:55:19.615610282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:19.615794 env[1434]: time="2024-07-02T07:55:19.615647582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:19.615906 env[1434]: time="2024-07-02T07:55:19.615817786Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100 pid=2207 runtime=io.containerd.runc.v2 Jul 2 07:55:19.628772 kubelet[2150]: E0702 07:55:19.628733 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-37a211789c?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s" Jul 2 07:55:19.634277 systemd[1]: Started cri-containerd-23c2c544c019fc5d11a05a932b8004c2967180e01c5f331cf0337b00c02505a3.scope. Jul 2 07:55:19.639956 systemd[1]: Started cri-containerd-aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100.scope. Jul 2 07:55:19.642172 kubelet[2150]: W0702 07:55:19.641447 2150 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-37a211789c&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.642172 kubelet[2150]: E0702 07:55:19.641515 2150 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-37a211789c&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:55:19.649577 env[1434]: time="2024-07-02T07:55:19.642810813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:19.649577 env[1434]: time="2024-07-02T07:55:19.643465826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:19.649577 env[1434]: time="2024-07-02T07:55:19.643482927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:19.649577 env[1434]: time="2024-07-02T07:55:19.643690931Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e pid=2227 runtime=io.containerd.runc.v2 Jul 2 07:55:19.693144 systemd[1]: Started cri-containerd-9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e.scope. Jul 2 07:55:19.737713 env[1434]: time="2024-07-02T07:55:19.737650068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-37a211789c,Uid:dd088e04f1ef4e46f990f5d547dd2f84,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c2c544c019fc5d11a05a932b8004c2967180e01c5f331cf0337b00c02505a3\"" Jul 2 07:55:19.746268 env[1434]: time="2024-07-02T07:55:19.746220135Z" level=info msg="CreateContainer within sandbox \"23c2c544c019fc5d11a05a932b8004c2967180e01c5f331cf0337b00c02505a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:55:19.752548 env[1434]: time="2024-07-02T07:55:19.752497258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-37a211789c,Uid:9a1e64ec481256cbe42e7b5bf3f928a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100\"" Jul 2 07:55:19.759030 env[1434]: time="2024-07-02T07:55:19.758982085Z" level=info msg="CreateContainer within sandbox \"aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:55:19.774600 env[1434]: time="2024-07-02T07:55:19.774558190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-37a211789c,Uid:09501f7a2b4ecf8d39726bd567fa8675,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e\"" Jul 2 07:55:19.778759 env[1434]: time="2024-07-02T07:55:19.778715671Z" level=info msg="CreateContainer within sandbox \"9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:55:19.813985 env[1434]: time="2024-07-02T07:55:19.813919159Z" level=info msg="CreateContainer within sandbox \"23c2c544c019fc5d11a05a932b8004c2967180e01c5f331cf0337b00c02505a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62e3dc5e1598e54a07cbeeb804c0128b5bb5f2eb09f82efe43b7b86263ec9706\"" Jul 2 07:55:19.814986 env[1434]: time="2024-07-02T07:55:19.814958779Z" level=info msg="StartContainer for \"62e3dc5e1598e54a07cbeeb804c0128b5bb5f2eb09f82efe43b7b86263ec9706\"" Jul 2 07:55:19.826483 kubelet[2150]: I0702 07:55:19.826046 2150 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:19.826483 kubelet[2150]: E0702 07:55:19.826457 2150 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" 
node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:19.827946 env[1434]: time="2024-07-02T07:55:19.827904133Z" level=info msg="CreateContainer within sandbox \"aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e\"" Jul 2 07:55:19.828809 env[1434]: time="2024-07-02T07:55:19.828760949Z" level=info msg="StartContainer for \"25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e\"" Jul 2 07:55:19.844452 systemd[1]: Started cri-containerd-62e3dc5e1598e54a07cbeeb804c0128b5bb5f2eb09f82efe43b7b86263ec9706.scope. Jul 2 07:55:19.860024 env[1434]: time="2024-07-02T07:55:19.859972560Z" level=info msg="CreateContainer within sandbox \"9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691\"" Jul 2 07:55:19.860825 env[1434]: time="2024-07-02T07:55:19.860790676Z" level=info msg="StartContainer for \"b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691\"" Jul 2 07:55:19.863434 systemd[1]: Started cri-containerd-25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e.scope. Jul 2 07:55:19.897733 systemd[1]: Started cri-containerd-b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691.scope. Jul 2 07:55:19.940513 env[1434]: time="2024-07-02T07:55:19.940459833Z" level=info msg="StartContainer for \"62e3dc5e1598e54a07cbeeb804c0128b5bb5f2eb09f82efe43b7b86263ec9706\" returns successfully" Jul 2 07:55:19.968169 env[1434]: time="2024-07-02T07:55:19.967426361Z" level=info msg="StartContainer for \"25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e\" returns successfully" Jul 2 07:55:20.091555 env[1434]: time="2024-07-02T07:55:20.091503443Z" level=info msg="StartContainer for \"b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691\" returns successfully" Jul 2 07:55:21.429163 kubelet[2150]: I0702 07:55:21.429125 2150 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:22.493091 kubelet[2150]: E0702 07:55:22.493038 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-37a211789c\" not found" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:22.529393 kubelet[2150]: I0702 07:55:22.529315 2150 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:23.191182 kubelet[2150]: I0702 07:55:23.191126 2150 apiserver.go:52] "Watching apiserver" Jul 2 07:55:23.224001 kubelet[2150]: I0702 07:55:23.223935 2150 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:55:23.993051 kubelet[2150]: W0702 07:55:23.993017 2150 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:25.024304 systemd[1]: Reloading. 
Jul 2 07:55:25.097866 /usr/lib/systemd/system-generators/torcx-generator[2444]: time="2024-07-02T07:55:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:55:25.097909 /usr/lib/systemd/system-generators/torcx-generator[2444]: time="2024-07-02T07:55:25Z" level=info msg="torcx already run" Jul 2 07:55:25.211141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:55:25.211166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:55:25.229606 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:55:25.345923 systemd[1]: Stopping kubelet.service... Jul 2 07:55:25.347391 kubelet[2150]: I0702 07:55:25.346737 2150 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:25.370048 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:55:25.370306 systemd[1]: Stopped kubelet.service. Jul 2 07:55:25.373686 systemd[1]: Starting kubelet.service... Jul 2 07:55:25.557376 systemd[1]: Started kubelet.service. Jul 2 07:55:25.627906 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:25.628382 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:55:25.628463 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:25.628608 kubelet[2510]: I0702 07:55:25.628574 2510 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:55:25.638961 kubelet[2510]: I0702 07:55:25.638928 2510 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:55:25.639201 kubelet[2510]: I0702 07:55:25.639187 2510 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:55:25.639648 kubelet[2510]: I0702 07:55:25.639626 2510 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:55:25.641968 kubelet[2510]: I0702 07:55:25.641946 2510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:55:25.648274 kubelet[2510]: I0702 07:55:25.645665 2510 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:25.655639 kubelet[2510]: I0702 07:55:25.655605 2510 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:55:25.655903 kubelet[2510]: I0702 07:55:25.655884 2510 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:55:25.656107 kubelet[2510]: I0702 07:55:25.656074 2510 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:55:25.656259 kubelet[2510]: I0702 07:55:25.656120 2510 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:55:25.656259 kubelet[2510]: I0702 07:55:25.656134 2510 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:55:25.656259 kubelet[2510]: I0702 07:55:25.656183 2510 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:25.656395 kubelet[2510]: I0702 07:55:25.656315 2510 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:55:25.656395 kubelet[2510]: I0702 07:55:25.656334 2510 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:55:25.658519 kubelet[2510]: I0702 07:55:25.658499 2510 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:55:25.659521 kubelet[2510]: I0702 07:55:25.659498 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:55:25.664934 kubelet[2510]: I0702 07:55:25.664916 2510 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:55:25.665343 kubelet[2510]: I0702 07:55:25.665327 2510 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:55:25.665901 kubelet[2510]: I0702 07:55:25.665885 2510 server.go:1256] "Started kubelet" Jul 2 07:55:25.668484 kubelet[2510]: I0702 07:55:25.668465 2510 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:55:25.671568 kubelet[2510]: I0702 07:55:25.671546 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:55:25.672309 kubelet[2510]: I0702 07:55:25.672176 2510 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:55:25.672440 kubelet[2510]: I0702 
07:55:25.671551 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:55:25.677978 kubelet[2510]: I0702 07:55:25.677956 2510 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:55:25.681178 kubelet[2510]: I0702 07:55:25.681159 2510 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:55:25.684572 kubelet[2510]: I0702 07:55:25.684552 2510 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:55:25.684852 kubelet[2510]: I0702 07:55:25.684835 2510 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:55:25.687673 kubelet[2510]: I0702 07:55:25.687647 2510 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:55:25.687760 kubelet[2510]: I0702 07:55:25.687745 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:55:25.691217 kubelet[2510]: I0702 07:55:25.691198 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:55:25.692604 kubelet[2510]: I0702 07:55:25.692581 2510 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:55:25.692765 kubelet[2510]: I0702 07:55:25.692752 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:55:25.692861 kubelet[2510]: I0702 07:55:25.692850 2510 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:55:25.692953 kubelet[2510]: I0702 07:55:25.692943 2510 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:55:25.693093 kubelet[2510]: E0702 07:55:25.693080 2510 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:55:25.739048 kubelet[2510]: I0702 07:55:25.739021 2510 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:55:25.739273 kubelet[2510]: I0702 07:55:25.739265 2510 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:55:25.739333 kubelet[2510]: I0702 07:55:25.739327 2510 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:25.739654 kubelet[2510]: I0702 07:55:25.739637 2510 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:55:25.739789 kubelet[2510]: I0702 07:55:25.739782 2510 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:55:25.739839 kubelet[2510]: I0702 07:55:25.739834 2510 policy_none.go:49] "None policy: Start" Jul 2 07:55:25.740532 kubelet[2510]: I0702 07:55:25.740512 2510 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:55:25.740673 kubelet[2510]: I0702 07:55:25.740538 2510 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:55:25.740758 kubelet[2510]: I0702 07:55:25.740739 2510 state_mem.go:75] "Updated machine memory state" Jul 2 07:55:25.744844 kubelet[2510]: I0702 07:55:25.744821 2510 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:55:25.745087 kubelet[2510]: I0702 07:55:25.745066 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:55:25.786881 kubelet[2510]: I0702 07:55:25.786851 2510 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.793647 kubelet[2510]: I0702 07:55:25.793614 2510 topology_manager.go:215] "Topology Admit Handler" 
podUID="09501f7a2b4ecf8d39726bd567fa8675" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.793990 kubelet[2510]: I0702 07:55:25.793956 2510 topology_manager.go:215] "Topology Admit Handler" podUID="dd088e04f1ef4e46f990f5d547dd2f84" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.794708 kubelet[2510]: I0702 07:55:25.794679 2510 topology_manager.go:215] "Topology Admit Handler" podUID="9a1e64ec481256cbe42e7b5bf3f928a1" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.799373 kubelet[2510]: I0702 07:55:25.799346 2510 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.799977 kubelet[2510]: I0702 07:55:25.799959 2510 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.803029 kubelet[2510]: W0702 07:55:25.803007 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:25.810442 kubelet[2510]: W0702 07:55:25.810072 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:25.813715 kubelet[2510]: W0702 07:55:25.813686 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:25.813847 kubelet[2510]: E0702 07:55:25.813762 2510 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.886750 kubelet[2510]: I0702 07:55:25.886602 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.886750 kubelet[2510]: I0702 07:55:25.886660 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.886750 kubelet[2510]: I0702 07:55:25.886702 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.886750 kubelet[2510]: I0702 07:55:25.886735 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 
07:55:25.887158 kubelet[2510]: I0702 07:55:25.886775 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.887158 kubelet[2510]: I0702 07:55:25.886806 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09501f7a2b4ecf8d39726bd567fa8675-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-37a211789c\" (UID: \"09501f7a2b4ecf8d39726bd567fa8675\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.887158 kubelet[2510]: I0702 07:55:25.886834 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.887158 kubelet[2510]: I0702 07:55:25.886864 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd088e04f1ef4e46f990f5d547dd2f84-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-37a211789c\" (UID: \"dd088e04f1ef4e46f990f5d547dd2f84\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:25.887158 kubelet[2510]: I0702 07:55:25.886899 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1e64ec481256cbe42e7b5bf3f928a1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-37a211789c\" (UID: \"9a1e64ec481256cbe42e7b5bf3f928a1\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" Jul 2 07:55:26.666490 kubelet[2510]: I0702 07:55:26.666433 2510 apiserver.go:52] "Watching apiserver" Jul 2 07:55:26.685844 kubelet[2510]: I0702 07:55:26.685728 2510 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:55:26.740507 kubelet[2510]: W0702 07:55:26.740479 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:26.740793 kubelet[2510]: E0702 07:55:26.740773 2510 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-37a211789c\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.5-a-37a211789c" Jul 2 07:55:26.742657 kubelet[2510]: W0702 07:55:26.742636 2510 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:26.742873 kubelet[2510]: E0702 07:55:26.742856 2510 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-37a211789c\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" Jul 2 07:55:26.783017 kubelet[2510]: I0702 07:55:26.782972 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" 
podStartSLOduration=1.782913253 podStartE2EDuration="1.782913253s" podCreationTimestamp="2024-07-02 07:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:26.780830819 +0000 UTC m=+1.217535293" watchObservedRunningTime="2024-07-02 07:55:26.782913253 +0000 UTC m=+1.219617827" Jul 2 07:55:26.783309 kubelet[2510]: I0702 07:55:26.783117 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-37a211789c" podStartSLOduration=1.783094256 podStartE2EDuration="1.783094256s" podCreationTimestamp="2024-07-02 07:55:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:26.757327833 +0000 UTC m=+1.194032307" watchObservedRunningTime="2024-07-02 07:55:26.783094256 +0000 UTC m=+1.219798830" Jul 2 07:55:26.790797 kubelet[2510]: I0702 07:55:26.790760 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-37a211789c" podStartSLOduration=3.790714281 podStartE2EDuration="3.790714281s" podCreationTimestamp="2024-07-02 07:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:26.790708581 +0000 UTC m=+1.227413155" watchObservedRunningTime="2024-07-02 07:55:26.790714281 +0000 UTC m=+1.227418855" Jul 2 07:55:27.337754 sudo[1799]: pam_unix(sudo:session): session closed for user root Jul 2 07:55:27.457763 sshd[1796]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:27.461394 systemd[1]: sshd@4-10.200.8.10:22-10.200.16.10:59596.service: Deactivated successfully. Jul 2 07:55:27.462397 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:55:27.462610 systemd[1]: session-7.scope: Consumed 3.276s CPU time. Jul 2 07:55:27.463236 systemd-logind[1404]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:55:27.464154 systemd-logind[1404]: Removed session 7. Jul 2 07:55:38.520197 kubelet[2510]: I0702 07:55:38.520151 2510 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:55:38.521013 env[1434]: time="2024-07-02T07:55:38.520862059Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:55:38.521348 kubelet[2510]: I0702 07:55:38.521203 2510 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:55:39.480311 kubelet[2510]: I0702 07:55:39.480264 2510 topology_manager.go:215] "Topology Admit Handler" podUID="c0408448-8d28-4eca-afdb-6e2dd4079cc2" podNamespace="kube-system" podName="kube-proxy-fjtdh" Jul 2 07:55:39.483166 kubelet[2510]: I0702 07:55:39.483124 2510 topology_manager.go:215] "Topology Admit Handler" podUID="6fd5e696-5d7c-4404-898e-a3d1141739e1" podNamespace="kube-flannel" podName="kube-flannel-ds-9q2xz" Jul 2 07:55:39.488752 systemd[1]: Created slice kubepods-besteffort-podc0408448_8d28_4eca_afdb_6e2dd4079cc2.slice. Jul 2 07:55:39.504956 systemd[1]: Created slice kubepods-burstable-pod6fd5e696_5d7c_4404_898e_a3d1141739e1.slice. 
Jul 2 07:55:39.582721 kubelet[2510]: I0702 07:55:39.582624 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6fd5e696-5d7c-4404-898e-a3d1141739e1-flannel-cfg\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.583276 kubelet[2510]: I0702 07:55:39.582786 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd5e696-5d7c-4404-898e-a3d1141739e1-xtables-lock\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.583276 kubelet[2510]: I0702 07:55:39.582875 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6fd5e696-5d7c-4404-898e-a3d1141739e1-run\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.583276 kubelet[2510]: I0702 07:55:39.582949 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0408448-8d28-4eca-afdb-6e2dd4079cc2-kube-proxy\") pod \"kube-proxy-fjtdh\" (UID: \"c0408448-8d28-4eca-afdb-6e2dd4079cc2\") " pod="kube-system/kube-proxy-fjtdh" Jul 2 07:55:39.583276 kubelet[2510]: I0702 07:55:39.582976 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0408448-8d28-4eca-afdb-6e2dd4079cc2-xtables-lock\") pod \"kube-proxy-fjtdh\" (UID: \"c0408448-8d28-4eca-afdb-6e2dd4079cc2\") " pod="kube-system/kube-proxy-fjtdh" Jul 2 07:55:39.583276 kubelet[2510]: I0702 07:55:39.583063 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0408448-8d28-4eca-afdb-6e2dd4079cc2-lib-modules\") pod \"kube-proxy-fjtdh\" (UID: \"c0408448-8d28-4eca-afdb-6e2dd4079cc2\") " pod="kube-system/kube-proxy-fjtdh" Jul 2 07:55:39.583556 kubelet[2510]: I0702 07:55:39.583143 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgth\" (UniqueName: \"kubernetes.io/projected/c0408448-8d28-4eca-afdb-6e2dd4079cc2-kube-api-access-gpgth\") pod \"kube-proxy-fjtdh\" (UID: \"c0408448-8d28-4eca-afdb-6e2dd4079cc2\") " pod="kube-system/kube-proxy-fjtdh" Jul 2 07:55:39.583556 kubelet[2510]: I0702 07:55:39.583174 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6fd5e696-5d7c-4404-898e-a3d1141739e1-cni\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.583556 kubelet[2510]: I0702 07:55:39.583245 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbss\" (UniqueName: \"kubernetes.io/projected/6fd5e696-5d7c-4404-898e-a3d1141739e1-kube-api-access-vvbss\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.583556 kubelet[2510]: I0702 07:55:39.583274 2510 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6fd5e696-5d7c-4404-898e-a3d1141739e1-cni-plugin\") pod \"kube-flannel-ds-9q2xz\" (UID: \"6fd5e696-5d7c-4404-898e-a3d1141739e1\") " pod="kube-flannel/kube-flannel-ds-9q2xz" Jul 2 07:55:39.801233 env[1434]: time="2024-07-02T07:55:39.801078016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjtdh,Uid:c0408448-8d28-4eca-afdb-6e2dd4079cc2,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:39.812409 env[1434]: time="2024-07-02T07:55:39.812360854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9q2xz,Uid:6fd5e696-5d7c-4404-898e-a3d1141739e1,Namespace:kube-flannel,Attempt:0,}" Jul 2 07:55:39.856538 env[1434]: time="2024-07-02T07:55:39.854684171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:39.856538 env[1434]: time="2024-07-02T07:55:39.854727571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:39.856538 env[1434]: time="2024-07-02T07:55:39.854743571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:39.856538 env[1434]: time="2024-07-02T07:55:39.855005375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b389ad652e43315b3c0824fd9d7bd78fe507fb308231950540c9f306233980f pid=2576 runtime=io.containerd.runc.v2 Jul 2 07:55:39.885319 env[1434]: time="2024-07-02T07:55:39.885224143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:39.885937 env[1434]: time="2024-07-02T07:55:39.885888852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:39.886111 env[1434]: time="2024-07-02T07:55:39.886089754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:39.886611 env[1434]: time="2024-07-02T07:55:39.886548360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18 pid=2597 runtime=io.containerd.runc.v2 Jul 2 07:55:39.901295 systemd[1]: Started cri-containerd-2b389ad652e43315b3c0824fd9d7bd78fe507fb308231950540c9f306233980f.scope. Jul 2 07:55:39.918891 systemd[1]: Started cri-containerd-d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18.scope. 
Jul 2 07:55:39.960519 env[1434]: time="2024-07-02T07:55:39.960364561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjtdh,Uid:c0408448-8d28-4eca-afdb-6e2dd4079cc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b389ad652e43315b3c0824fd9d7bd78fe507fb308231950540c9f306233980f\"" Jul 2 07:55:39.967644 env[1434]: time="2024-07-02T07:55:39.967595049Z" level=info msg="CreateContainer within sandbox \"2b389ad652e43315b3c0824fd9d7bd78fe507fb308231950540c9f306233980f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:55:40.000549 env[1434]: time="2024-07-02T07:55:40.000478651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9q2xz,Uid:6fd5e696-5d7c-4404-898e-a3d1141739e1,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\"" Jul 2 07:55:40.004677 env[1434]: time="2024-07-02T07:55:40.004636200Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 2 07:55:40.017554 env[1434]: time="2024-07-02T07:55:40.017503654Z" level=info msg="CreateContainer within sandbox \"2b389ad652e43315b3c0824fd9d7bd78fe507fb308231950540c9f306233980f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b242d0cc60038567f5eb851f1f9b715ecaafc6fc9cd178b77753be3901a07477\"" Jul 2 07:55:40.021720 env[1434]: time="2024-07-02T07:55:40.018448465Z" level=info msg="StartContainer for \"b242d0cc60038567f5eb851f1f9b715ecaafc6fc9cd178b77753be3901a07477\"" Jul 2 07:55:40.042657 systemd[1]: Started cri-containerd-b242d0cc60038567f5eb851f1f9b715ecaafc6fc9cd178b77753be3901a07477.scope. Jul 2 07:55:40.094749 env[1434]: time="2024-07-02T07:55:40.094597875Z" level=info msg="StartContainer for \"b242d0cc60038567f5eb851f1f9b715ecaafc6fc9cd178b77753be3901a07477\" returns successfully" Jul 2 07:55:40.762453 kubelet[2510]: I0702 07:55:40.761939 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fjtdh" podStartSLOduration=1.761875949 podStartE2EDuration="1.761875949s" podCreationTimestamp="2024-07-02 07:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:40.76027693 +0000 UTC m=+15.196981404" watchObservedRunningTime="2024-07-02 07:55:40.761875949 +0000 UTC m=+15.198580423" Jul 2 07:55:41.970281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906291218.mount: Deactivated successfully. 
Jul 2 07:55:42.066078 env[1434]: time="2024-07-02T07:55:42.066010450Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:42.072764 env[1434]: time="2024-07-02T07:55:42.072709026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:42.076307 env[1434]: time="2024-07-02T07:55:42.076259167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:42.080690 env[1434]: time="2024-07-02T07:55:42.080644817Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:42.081113 env[1434]: time="2024-07-02T07:55:42.081078522Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jul 2 07:55:42.084226 env[1434]: time="2024-07-02T07:55:42.084185658Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 2 07:55:42.117168 env[1434]: time="2024-07-02T07:55:42.117111935Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded\"" Jul 2 07:55:42.119455 env[1434]: time="2024-07-02T07:55:42.119401661Z" level=info msg="StartContainer for \"5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded\"" Jul 2 07:55:42.139952 systemd[1]: Started cri-containerd-5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded.scope. Jul 2 07:55:42.174031 systemd[1]: cri-containerd-5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded.scope: Deactivated successfully. 
Jul 2 07:55:42.177022 env[1434]: time="2024-07-02T07:55:42.176974421Z" level=info msg="StartContainer for \"5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded\" returns successfully" Jul 2 07:55:42.257887 env[1434]: time="2024-07-02T07:55:42.256541032Z" level=info msg="shim disconnected" id=5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded Jul 2 07:55:42.257887 env[1434]: time="2024-07-02T07:55:42.256593533Z" level=warning msg="cleaning up after shim disconnected" id=5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded namespace=k8s.io Jul 2 07:55:42.257887 env[1434]: time="2024-07-02T07:55:42.256604933Z" level=info msg="cleaning up dead shim" Jul 2 07:55:42.267672 env[1434]: time="2024-07-02T07:55:42.267610059Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:55:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2850 runtime=io.containerd.runc.v2\n" Jul 2 07:55:42.760322 env[1434]: time="2024-07-02T07:55:42.760236103Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 2 07:55:42.879515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a2824f099fbc786ad43379496aabfe7e60f9dadd995cdf73d81fea00a5bcded-rootfs.mount: Deactivated successfully. Jul 2 07:55:44.965062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595293418.mount: Deactivated successfully. Jul 2 07:55:46.134018 env[1434]: time="2024-07-02T07:55:46.133942249Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:46.142888 env[1434]: time="2024-07-02T07:55:46.142835143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:46.148129 env[1434]: time="2024-07-02T07:55:46.148082699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:46.152255 env[1434]: time="2024-07-02T07:55:46.152213642Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:46.153047 env[1434]: time="2024-07-02T07:55:46.153005851Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jul 2 07:55:46.155507 env[1434]: time="2024-07-02T07:55:46.155471977Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 07:55:46.190260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3638381539.mount: Deactivated successfully. 
Jul 2 07:55:46.213849 env[1434]: time="2024-07-02T07:55:46.213779192Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35\"" Jul 2 07:55:46.216695 env[1434]: time="2024-07-02T07:55:46.214963305Z" level=info msg="StartContainer for \"040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35\"" Jul 2 07:55:46.251049 systemd[1]: Started cri-containerd-040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35.scope. Jul 2 07:55:46.284839 systemd[1]: cri-containerd-040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35.scope: Deactivated successfully. Jul 2 07:55:46.288007 env[1434]: time="2024-07-02T07:55:46.287953475Z" level=info msg="StartContainer for \"040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35\" returns successfully" Jul 2 07:55:46.323532 kubelet[2510]: I0702 07:55:46.323490 2510 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:55:46.408040 kubelet[2510]: I0702 07:55:46.347292 2510 topology_manager.go:215] "Topology Admit Handler" podUID="3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78" podNamespace="kube-system" podName="coredns-76f75df574-5vh6v" Jul 2 07:55:46.408040 kubelet[2510]: I0702 07:55:46.350971 2510 topology_manager.go:215] "Topology Admit Handler" podUID="f99fcb84-c437-451a-a33a-3d389b07141a" podNamespace="kube-system" podName="coredns-76f75df574-zln2z" Jul 2 07:55:46.357792 systemd[1]: Created slice kubepods-burstable-pod3421e3b8_0052_4f3a_bba5_ebc2ed3a1b78.slice. Jul 2 07:55:46.363809 systemd[1]: Created slice kubepods-burstable-podf99fcb84_c437_451a_a33a_3d389b07141a.slice. 
Jul 2 07:55:46.429201 kubelet[2510]: I0702 07:55:46.429154 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78-config-volume\") pod \"coredns-76f75df574-5vh6v\" (UID: \"3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78\") " pod="kube-system/coredns-76f75df574-5vh6v" Jul 2 07:55:46.429201 kubelet[2510]: I0702 07:55:46.429212 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsvl\" (UniqueName: \"kubernetes.io/projected/3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78-kube-api-access-rzsvl\") pod \"coredns-76f75df574-5vh6v\" (UID: \"3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78\") " pod="kube-system/coredns-76f75df574-5vh6v" Jul 2 07:55:46.429530 kubelet[2510]: I0702 07:55:46.429244 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48cf\" (UniqueName: \"kubernetes.io/projected/f99fcb84-c437-451a-a33a-3d389b07141a-kube-api-access-d48cf\") pod \"coredns-76f75df574-zln2z\" (UID: \"f99fcb84-c437-451a-a33a-3d389b07141a\") " pod="kube-system/coredns-76f75df574-zln2z" Jul 2 07:55:46.429530 kubelet[2510]: I0702 07:55:46.429275 2510 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f99fcb84-c437-451a-a33a-3d389b07141a-config-volume\") pod \"coredns-76f75df574-zln2z\" (UID: \"f99fcb84-c437-451a-a33a-3d389b07141a\") " pod="kube-system/coredns-76f75df574-zln2z" Jul 2 07:55:46.887709 env[1434]: time="2024-07-02T07:55:46.887628807Z" level=info msg="shim disconnected" id=040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35 Jul 2 07:55:46.887709 env[1434]: time="2024-07-02T07:55:46.887705008Z" level=warning msg="cleaning up after shim disconnected" id=040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35 namespace=k8s.io Jul 2 07:55:46.887709 env[1434]: time="2024-07-02T07:55:46.887722208Z" level=info msg="cleaning up dead shim" Jul 2 07:55:46.897322 env[1434]: time="2024-07-02T07:55:46.897262909Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:55:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2914 runtime=io.containerd.runc.v2\n" Jul 2 07:55:47.012443 env[1434]: time="2024-07-02T07:55:47.012350621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zln2z,Uid:f99fcb84-c437-451a-a33a-3d389b07141a,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:47.012754 env[1434]: time="2024-07-02T07:55:47.012372221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5vh6v,Uid:3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:47.109973 env[1434]: time="2024-07-02T07:55:47.109891031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zln2z,Uid:f99fcb84-c437-451a-a33a-3d389b07141a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7493bdff70eb6642f3f97464fe23122352c9e0b35fb1ebc6001acf7ef46d1541\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:55:47.110517 kubelet[2510]: E0702 07:55:47.110465 2510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7493bdff70eb6642f3f97464fe23122352c9e0b35fb1ebc6001acf7ef46d1541\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:55:47.110736 kubelet[2510]: E0702 07:55:47.110549 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7493bdff70eb6642f3f97464fe23122352c9e0b35fb1ebc6001acf7ef46d1541\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-zln2z" Jul 2 07:55:47.110736 kubelet[2510]: E0702 07:55:47.110577 2510 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7493bdff70eb6642f3f97464fe23122352c9e0b35fb1ebc6001acf7ef46d1541\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-zln2z" Jul 2 07:55:47.110957 kubelet[2510]: E0702 07:55:47.110739 2510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zln2z_kube-system(f99fcb84-c437-451a-a33a-3d389b07141a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zln2z_kube-system(f99fcb84-c437-451a-a33a-3d389b07141a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7493bdff70eb6642f3f97464fe23122352c9e0b35fb1ebc6001acf7ef46d1541\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-zln2z" podUID="f99fcb84-c437-451a-a33a-3d389b07141a" Jul 2 07:55:47.115012 env[1434]: time="2024-07-02T07:55:47.114273576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5vh6v,Uid:3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c8333065565032d35b454f6d96231b93b46f93f746bd5ca66767c22a0e19e2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:55:47.115273 kubelet[2510]: E0702 07:55:47.114683 2510 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8333065565032d35b454f6d96231b93b46f93f746bd5ca66767c22a0e19e2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:55:47.115273 kubelet[2510]: E0702 07:55:47.114736 2510 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8333065565032d35b454f6d96231b93b46f93f746bd5ca66767c22a0e19e2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-5vh6v" Jul 2 07:55:47.115273 kubelet[2510]: E0702 07:55:47.114757 2510 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8333065565032d35b454f6d96231b93b46f93f746bd5ca66767c22a0e19e2e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-76f75df574-5vh6v" Jul 2 07:55:47.115273 kubelet[2510]: E0702 07:55:47.114831 2510 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5vh6v_kube-system(3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5vh6v_kube-system(3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c8333065565032d35b454f6d96231b93b46f93f746bd5ca66767c22a0e19e2e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-5vh6v" podUID="3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78" Jul 2 07:55:47.188130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-040f955637dad843268e8b66c1d9f0b803b3a4709a699e231ea85b940a8fef35-rootfs.mount: Deactivated successfully. Jul 2 07:55:47.798765 env[1434]: time="2024-07-02T07:55:47.798407758Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 2 07:55:47.833144 env[1434]: time="2024-07-02T07:55:47.833087517Z" level=info msg="CreateContainer within sandbox \"d0ade59cff2ded9fd0eedab7aa3f429a86a06b19170081f00b026102c7f55b18\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"3ad116f86b68c9e61c49db21040c9713cf8ab897a40aaf258d9c7135aad637be\"" Jul 2 07:55:47.834009 env[1434]: time="2024-07-02T07:55:47.833973126Z" level=info msg="StartContainer for \"3ad116f86b68c9e61c49db21040c9713cf8ab897a40aaf258d9c7135aad637be\"" Jul 2 07:55:47.861330 systemd[1]: Started cri-containerd-3ad116f86b68c9e61c49db21040c9713cf8ab897a40aaf258d9c7135aad637be.scope. 
Jul 2 07:55:47.894954 env[1434]: time="2024-07-02T07:55:47.894902656Z" level=info msg="StartContainer for \"3ad116f86b68c9e61c49db21040c9713cf8ab897a40aaf258d9c7135aad637be\" returns successfully" Jul 2 07:55:49.073851 systemd-networkd[1566]: flannel.1: Link UP Jul 2 07:55:49.074662 systemd-networkd[1566]: flannel.1: Gained carrier Jul 2 07:55:50.766611 systemd-networkd[1566]: flannel.1: Gained IPv6LL Jul 2 07:55:58.694365 env[1434]: time="2024-07-02T07:55:58.694309460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zln2z,Uid:f99fcb84-c437-451a-a33a-3d389b07141a,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:58.754993 systemd-networkd[1566]: cni0: Link UP Jul 2 07:55:58.755006 systemd-networkd[1566]: cni0: Gained carrier Jul 2 07:55:58.759647 systemd-networkd[1566]: cni0: Lost carrier Jul 2 07:55:58.797092 systemd-networkd[1566]: vethfe577d49: Link UP Jul 2 07:55:58.804448 kernel: cni0: port 1(vethfe577d49) entered blocking state Jul 2 07:55:58.804624 kernel: cni0: port 1(vethfe577d49) entered disabled state Jul 2 07:55:58.808398 kernel: device vethfe577d49 entered promiscuous mode Jul 2 07:55:58.808493 kernel: cni0: port 1(vethfe577d49) entered blocking state Jul 2 07:55:58.814689 kernel: cni0: port 1(vethfe577d49) entered forwarding state Jul 2 07:55:58.814802 kernel: cni0: port 1(vethfe577d49) entered disabled state Jul 2 07:55:58.829703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfe577d49: link becomes ready Jul 2 07:55:58.829787 kernel: cni0: port 1(vethfe577d49) entered blocking state Jul 2 07:55:58.829814 kernel: cni0: port 1(vethfe577d49) entered forwarding state Jul 2 07:55:58.833010 systemd-networkd[1566]: vethfe577d49: Gained carrier Jul 2 07:55:58.833652 systemd-networkd[1566]: cni0: Gained carrier Jul 2 07:55:58.836501 env[1434]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016928), "name":"cbr0", "type":"bridge"} Jul 2 07:55:58.836501 env[1434]: delegateAdd: netconf sent to delegate plugin: Jul 2 07:55:58.852699 env[1434]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T07:55:58.852586698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:58.852699 env[1434]: time="2024-07-02T07:55:58.852655399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:58.853015 env[1434]: time="2024-07-02T07:55:58.852674399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:58.853235 env[1434]: time="2024-07-02T07:55:58.853127903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e51fd9f450e6f8a91e3085dfb3f81523202131c65bf13fabea191db0b872628f pid=3160 runtime=io.containerd.runc.v2 Jul 2 07:55:58.881540 systemd[1]: Started cri-containerd-e51fd9f450e6f8a91e3085dfb3f81523202131c65bf13fabea191db0b872628f.scope. Jul 2 07:55:58.929786 env[1434]: time="2024-07-02T07:55:58.929313947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zln2z,Uid:f99fcb84-c437-451a-a33a-3d389b07141a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51fd9f450e6f8a91e3085dfb3f81523202131c65bf13fabea191db0b872628f\"" Jul 2 07:55:58.934328 env[1434]: time="2024-07-02T07:55:58.934153588Z" level=info msg="CreateContainer within sandbox \"e51fd9f450e6f8a91e3085dfb3f81523202131c65bf13fabea191db0b872628f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:55:58.968041 env[1434]: time="2024-07-02T07:55:58.967388269Z" level=info msg="CreateContainer within sandbox \"e51fd9f450e6f8a91e3085dfb3f81523202131c65bf13fabea191db0b872628f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93870acacbd3be4dce95ccf3e06c39496048a5bf223bc64f81cf7f7adad861ac\"" Jul 2 07:55:58.968713 env[1434]: time="2024-07-02T07:55:58.968642180Z" level=info msg="StartContainer for \"93870acacbd3be4dce95ccf3e06c39496048a5bf223bc64f81cf7f7adad861ac\"" Jul 2 07:55:58.992044 systemd[1]: Started cri-containerd-93870acacbd3be4dce95ccf3e06c39496048a5bf223bc64f81cf7f7adad861ac.scope. Jul 2 07:55:59.037483 env[1434]: time="2024-07-02T07:55:59.037397356Z" level=info msg="StartContainer for \"93870acacbd3be4dce95ccf3e06c39496048a5bf223bc64f81cf7f7adad861ac\" returns successfully" Jul 2 07:55:59.729782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256263377.mount: Deactivated successfully. 
Jul 2 07:55:59.829733 kubelet[2510]: I0702 07:55:59.829694 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9q2xz" podStartSLOduration=14.678128457 podStartE2EDuration="20.829637044s" podCreationTimestamp="2024-07-02 07:55:39 +0000 UTC" firstStartedPulling="2024-07-02 07:55:40.001920868 +0000 UTC m=+14.438625442" lastFinishedPulling="2024-07-02 07:55:46.153429555 +0000 UTC m=+20.590134029" observedRunningTime="2024-07-02 07:55:48.805865824 +0000 UTC m=+23.242570298" watchObservedRunningTime="2024-07-02 07:55:59.829637044 +0000 UTC m=+34.266341618" Jul 2 07:55:59.830732 kubelet[2510]: I0702 07:55:59.830700 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zln2z" podStartSLOduration=20.830654853 podStartE2EDuration="20.830654853s" podCreationTimestamp="2024-07-02 07:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:59.82906754 +0000 UTC m=+34.265772014" watchObservedRunningTime="2024-07-02 07:55:59.830654853 +0000 UTC m=+34.267359427" Jul 2 07:56:00.174616 systemd-networkd[1566]: vethfe577d49: Gained IPv6LL Jul 2 07:56:00.430784 systemd-networkd[1566]: cni0: Gained IPv6LL Jul 2 07:56:00.694302 env[1434]: time="2024-07-02T07:56:00.694124438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5vh6v,Uid:3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78,Namespace:kube-system,Attempt:0,}" Jul 2 07:56:00.763302 systemd-networkd[1566]: vethd0262f0e: Link UP Jul 2 07:56:00.767253 kernel: cni0: port 2(vethd0262f0e) entered blocking state Jul 2 07:56:00.767364 kernel: cni0: port 2(vethd0262f0e) entered disabled state Jul 2 07:56:00.771465 kernel: device vethd0262f0e entered promiscuous mode Jul 2 07:56:00.782295 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:56:00.782394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd0262f0e: link becomes ready Jul 2 07:56:00.790188 kernel: cni0: port 2(vethd0262f0e) entered blocking state Jul 2 07:56:00.790284 kernel: cni0: port 2(vethd0262f0e) entered forwarding state Jul 2 07:56:00.790826 systemd-networkd[1566]: vethd0262f0e: Gained carrier Jul 2 07:56:00.792500 env[1434]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c928), "name":"cbr0", "type":"bridge"} Jul 2 07:56:00.792500 env[1434]: delegateAdd: netconf sent to delegate plugin: Jul 2 07:56:00.809189 env[1434]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T07:56:00.809114979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:56:00.809451 env[1434]: time="2024-07-02T07:56:00.809153079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:56:00.809451 env[1434]: time="2024-07-02T07:56:00.809167079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:56:00.809710 env[1434]: time="2024-07-02T07:56:00.809484682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bb5218827af165edd26aad819987ba290289a11d5d71db2fa1767f11c994414 pid=3293 runtime=io.containerd.runc.v2 Jul 2 07:56:00.830068 systemd[1]: Started cri-containerd-3bb5218827af165edd26aad819987ba290289a11d5d71db2fa1767f11c994414.scope. Jul 2 07:56:00.881077 env[1434]: time="2024-07-02T07:56:00.880324461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5vh6v,Uid:3421e3b8-0052-4f3a-bba5-ebc2ed3a1b78,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb5218827af165edd26aad819987ba290289a11d5d71db2fa1767f11c994414\"" Jul 2 07:56:00.889393 env[1434]: time="2024-07-02T07:56:00.889316335Z" level=info msg="CreateContainer within sandbox \"3bb5218827af165edd26aad819987ba290289a11d5d71db2fa1767f11c994414\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:56:00.919633 env[1434]: time="2024-07-02T07:56:00.919575482Z" level=info msg="CreateContainer within sandbox \"3bb5218827af165edd26aad819987ba290289a11d5d71db2fa1767f11c994414\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55bd643624219288a03715c0c8850795304f78d1cea99a5049f662990db09d26\"" Jul 2 07:56:00.921803 env[1434]: time="2024-07-02T07:56:00.920341788Z" level=info msg="StartContainer for \"55bd643624219288a03715c0c8850795304f78d1cea99a5049f662990db09d26\"" Jul 2 07:56:00.938082 systemd[1]: Started cri-containerd-55bd643624219288a03715c0c8850795304f78d1cea99a5049f662990db09d26.scope. Jul 2 07:56:00.972740 env[1434]: time="2024-07-02T07:56:00.972676316Z" level=info msg="StartContainer for \"55bd643624219288a03715c0c8850795304f78d1cea99a5049f662990db09d26\" returns successfully" Jul 2 07:56:01.835431 kubelet[2510]: I0702 07:56:01.835353 2510 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5vh6v" podStartSLOduration=22.835292261 podStartE2EDuration="22.835292261s" podCreationTimestamp="2024-07-02 07:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:01.834887057 +0000 UTC m=+36.271591631" watchObservedRunningTime="2024-07-02 07:56:01.835292261 +0000 UTC m=+36.271996835" Jul 2 07:56:02.286705 systemd-networkd[1566]: vethd0262f0e: Gained IPv6LL Jul 2 07:57:24.894659 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.16.10:34060.service. Jul 2 07:57:25.543194 sshd[3737]: Accepted publickey for core from 10.200.16.10 port 34060 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:25.545835 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:25.550205 systemd-logind[1404]: New session 8 of user core. Jul 2 07:57:25.552485 systemd[1]: Started session-8.scope. Jul 2 07:57:26.104883 sshd[3737]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:26.108259 systemd[1]: sshd@5-10.200.8.10:22-10.200.16.10:34060.service: Deactivated successfully. Jul 2 07:57:26.109371 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:57:26.110137 systemd-logind[1404]: Session 8 logged out. 
Waiting for processes to exit. Jul 2 07:57:26.111065 systemd-logind[1404]: Removed session 8. Jul 2 07:57:31.215887 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.16.10:37368.service. Jul 2 07:57:31.868067 sshd[3773]: Accepted publickey for core from 10.200.16.10 port 37368 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:31.869927 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:31.876215 systemd[1]: Started session-9.scope. Jul 2 07:57:31.876866 systemd-logind[1404]: New session 9 of user core. Jul 2 07:57:32.399960 sshd[3773]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:32.403785 systemd[1]: sshd@6-10.200.8.10:22-10.200.16.10:37368.service: Deactivated successfully. Jul 2 07:57:32.404760 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:57:32.405457 systemd-logind[1404]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:57:32.406344 systemd-logind[1404]: Removed session 9. Jul 2 07:57:37.508729 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.16.10:37378.service. Jul 2 07:57:38.178057 sshd[3806]: Accepted publickey for core from 10.200.16.10 port 37378 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:38.179855 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:38.185026 systemd[1]: Started session-10.scope. Jul 2 07:57:38.185695 systemd-logind[1404]: New session 10 of user core. Jul 2 07:57:38.709049 sshd[3806]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:38.712285 systemd-logind[1404]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:57:38.712739 systemd[1]: sshd@7-10.200.8.10:22-10.200.16.10:37378.service: Deactivated successfully. Jul 2 07:57:38.713800 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:57:38.714781 systemd-logind[1404]: Removed session 10. Jul 2 07:57:43.818897 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.16.10:34412.service. Jul 2 07:57:44.463733 sshd[3841]: Accepted publickey for core from 10.200.16.10 port 34412 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:44.465602 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:44.471889 systemd-logind[1404]: New session 11 of user core. Jul 2 07:57:44.472519 systemd[1]: Started session-11.scope. Jul 2 07:57:44.988918 sshd[3841]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:44.992604 systemd[1]: sshd@8-10.200.8.10:22-10.200.16.10:34412.service: Deactivated successfully. Jul 2 07:57:44.993680 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:57:44.994515 systemd-logind[1404]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:57:44.995502 systemd-logind[1404]: Removed session 11. Jul 2 07:57:50.098617 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.16.10:60564.service. Jul 2 07:57:50.749695 sshd[3898]: Accepted publickey for core from 10.200.16.10 port 60564 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:50.751991 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:50.757727 systemd[1]: Started session-12.scope. Jul 2 07:57:50.758770 systemd-logind[1404]: New session 12 of user core. Jul 2 07:57:51.271728 sshd[3898]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:51.275272 systemd[1]: sshd@9-10.200.8.10:22-10.200.16.10:60564.service: Deactivated successfully. 
Jul 2 07:57:51.276508 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:57:51.277525 systemd-logind[1404]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:57:51.278630 systemd-logind[1404]: Removed session 12. Jul 2 07:57:51.381867 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.16.10:60568.service. Jul 2 07:57:52.031148 sshd[3910]: Accepted publickey for core from 10.200.16.10 port 60568 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:52.033051 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:52.039499 systemd-logind[1404]: New session 13 of user core. Jul 2 07:57:52.039517 systemd[1]: Started session-13.scope. Jul 2 07:57:52.590200 sshd[3910]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:52.594280 systemd[1]: sshd@10-10.200.8.10:22-10.200.16.10:60568.service: Deactivated successfully. Jul 2 07:57:52.595411 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:57:52.596169 systemd-logind[1404]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:57:52.597027 systemd-logind[1404]: Removed session 13. Jul 2 07:57:52.701656 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.16.10:60576.service. Jul 2 07:57:53.365053 sshd[3920]: Accepted publickey for core from 10.200.16.10 port 60576 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:53.367050 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:53.373463 systemd[1]: Started session-14.scope. Jul 2 07:57:53.374089 systemd-logind[1404]: New session 14 of user core. Jul 2 07:57:53.899343 sshd[3920]: pam_unix(sshd:session): session closed for user core Jul 2 07:57:53.902830 systemd[1]: sshd@11-10.200.8.10:22-10.200.16.10:60576.service: Deactivated successfully. Jul 2 07:57:53.904228 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:57:53.904267 systemd-logind[1404]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:57:53.905759 systemd-logind[1404]: Removed session 14. Jul 2 07:57:59.009299 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.16.10:52722.service. Jul 2 07:57:59.657999 sshd[3952]: Accepted publickey for core from 10.200.16.10 port 52722 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:57:59.660017 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:57:59.669036 systemd[1]: Started session-15.scope. Jul 2 07:57:59.671279 systemd-logind[1404]: New session 15 of user core. Jul 2 07:58:00.178249 sshd[3952]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:00.183296 systemd[1]: sshd@12-10.200.8.10:22-10.200.16.10:52722.service: Deactivated successfully. Jul 2 07:58:00.184551 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:58:00.185541 systemd-logind[1404]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:58:00.186710 systemd-logind[1404]: Removed session 15. Jul 2 07:58:00.287097 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.16.10:52728.service. Jul 2 07:58:00.932716 sshd[3985]: Accepted publickey for core from 10.200.16.10 port 52728 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:00.934678 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:00.940272 systemd[1]: Started session-16.scope. Jul 2 07:58:00.940851 systemd-logind[1404]: New session 16 of user core. 
Jul 2 07:58:01.526559 sshd[3985]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:01.529832 systemd[1]: sshd@13-10.200.8.10:22-10.200.16.10:52728.service: Deactivated successfully. Jul 2 07:58:01.530886 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:58:01.531599 systemd-logind[1404]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:58:01.532490 systemd-logind[1404]: Removed session 16. Jul 2 07:58:01.637466 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.16.10:52738.service. Jul 2 07:58:02.290945 sshd[3995]: Accepted publickey for core from 10.200.16.10 port 52738 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:02.292777 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:02.299560 systemd-logind[1404]: New session 17 of user core. Jul 2 07:58:02.300273 systemd[1]: Started session-17.scope. Jul 2 07:58:04.171044 sshd[3995]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:04.174597 systemd[1]: sshd@14-10.200.8.10:22-10.200.16.10:52738.service: Deactivated successfully. Jul 2 07:58:04.175600 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:58:04.176358 systemd-logind[1404]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:58:04.177534 systemd-logind[1404]: Removed session 17. Jul 2 07:58:04.281917 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.16.10:52740.service. Jul 2 07:58:04.936849 sshd[4012]: Accepted publickey for core from 10.200.16.10 port 52740 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:04.938591 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:04.944499 systemd[1]: Started session-18.scope. Jul 2 07:58:04.945048 systemd-logind[1404]: New session 18 of user core. Jul 2 07:58:05.587567 sshd[4012]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:05.591904 systemd[1]: sshd@15-10.200.8.10:22-10.200.16.10:52740.service: Deactivated successfully. Jul 2 07:58:05.593064 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:58:05.593865 systemd-logind[1404]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:58:05.594906 systemd-logind[1404]: Removed session 18. Jul 2 07:58:05.698224 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.16.10:52742.service. Jul 2 07:58:06.348171 sshd[4043]: Accepted publickey for core from 10.200.16.10 port 52742 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:06.349957 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:06.356119 systemd[1]: Started session-19.scope. Jul 2 07:58:06.356706 systemd-logind[1404]: New session 19 of user core. Jul 2 07:58:06.870195 sshd[4043]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:06.873954 systemd[1]: sshd@16-10.200.8.10:22-10.200.16.10:52742.service: Deactivated successfully. Jul 2 07:58:06.875105 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:58:06.875985 systemd-logind[1404]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:58:06.876952 systemd-logind[1404]: Removed session 19. Jul 2 07:58:11.981215 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.16.10:33514.service. 
Jul 2 07:58:12.631129 sshd[4080]: Accepted publickey for core from 10.200.16.10 port 33514 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:12.632877 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:12.638707 systemd[1]: Started session-20.scope. Jul 2 07:58:12.639628 systemd-logind[1404]: New session 20 of user core. Jul 2 07:58:13.156738 sshd[4080]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:13.161140 systemd[1]: sshd@17-10.200.8.10:22-10.200.16.10:33514.service: Deactivated successfully. Jul 2 07:58:13.162222 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:58:13.162966 systemd-logind[1404]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:58:13.164058 systemd-logind[1404]: Removed session 20. Jul 2 07:58:18.268670 systemd[1]: Started sshd@18-10.200.8.10:22-10.200.16.10:33526.service. Jul 2 07:58:18.925296 sshd[4113]: Accepted publickey for core from 10.200.16.10 port 33526 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:18.927375 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:18.934018 systemd-logind[1404]: New session 21 of user core. Jul 2 07:58:18.934655 systemd[1]: Started session-21.scope. Jul 2 07:58:19.452694 sshd[4113]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:19.456258 systemd[1]: sshd@18-10.200.8.10:22-10.200.16.10:33526.service: Deactivated successfully. Jul 2 07:58:19.457377 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:58:19.458256 systemd-logind[1404]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:58:19.459266 systemd-logind[1404]: Removed session 21. Jul 2 07:58:24.561964 systemd[1]: Started sshd@19-10.200.8.10:22-10.200.16.10:33542.service. Jul 2 07:58:25.214254 sshd[4152]: Accepted publickey for core from 10.200.16.10 port 33542 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:25.216195 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:25.223458 systemd[1]: Started session-22.scope. Jul 2 07:58:25.224123 systemd-logind[1404]: New session 22 of user core. Jul 2 07:58:25.743209 sshd[4152]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:25.747103 systemd[1]: sshd@19-10.200.8.10:22-10.200.16.10:33542.service: Deactivated successfully. Jul 2 07:58:25.748122 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:58:25.748938 systemd-logind[1404]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:58:25.749764 systemd-logind[1404]: Removed session 22. Jul 2 07:58:39.995886 systemd[1]: cri-containerd-25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e.scope: Deactivated successfully. Jul 2 07:58:39.996311 systemd[1]: cri-containerd-25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e.scope: Consumed 3.607s CPU time. Jul 2 07:58:40.021845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e-rootfs.mount: Deactivated successfully. 
Jul 2 07:58:40.039592 env[1434]: time="2024-07-02T07:58:40.039500176Z" level=info msg="shim disconnected" id=25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e Jul 2 07:58:40.039592 env[1434]: time="2024-07-02T07:58:40.039584477Z" level=warning msg="cleaning up after shim disconnected" id=25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e namespace=k8s.io Jul 2 07:58:40.039592 env[1434]: time="2024-07-02T07:58:40.039598877Z" level=info msg="cleaning up dead shim" Jul 2 07:58:40.049234 env[1434]: time="2024-07-02T07:58:40.049178305Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4255 runtime=io.containerd.runc.v2\n" Jul 2 07:58:40.183157 kubelet[2510]: I0702 07:58:40.181349 2510 scope.go:117] "RemoveContainer" containerID="25902ceac7316edd586b25160b9a14d18b31df9ddbcdfb02a88443bf65fb2a9e" Jul 2 07:58:40.184363 env[1434]: time="2024-07-02T07:58:40.184321314Z" level=info msg="CreateContainer within sandbox \"aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 07:58:40.220654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717652631.mount: Deactivated successfully. Jul 2 07:58:40.236742 env[1434]: time="2024-07-02T07:58:40.236665715Z" level=info msg="CreateContainer within sandbox \"aa898aa293645119c750bf681fa97ce849f9ec419e1c14442b6363eddb077100\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f97d73110fbb82284e19baf4f9f3f9f4101f1b59c82cd4a0b6b7f2022d120b0f\"" Jul 2 07:58:40.237519 env[1434]: time="2024-07-02T07:58:40.237484426Z" level=info msg="StartContainer for \"f97d73110fbb82284e19baf4f9f3f9f4101f1b59c82cd4a0b6b7f2022d120b0f\"" Jul 2 07:58:40.261358 systemd[1]: Started cri-containerd-f97d73110fbb82284e19baf4f9f3f9f4101f1b59c82cd4a0b6b7f2022d120b0f.scope. Jul 2 07:58:40.339238 env[1434]: time="2024-07-02T07:58:40.339178687Z" level=info msg="StartContainer for \"f97d73110fbb82284e19baf4f9f3f9f4101f1b59c82cd4a0b6b7f2022d120b0f\" returns successfully" Jul 2 07:58:41.520590 kubelet[2510]: E0702 07:58:41.520206 2510 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:42392->10.200.8.25:2379: read: connection timed out" Jul 2 07:58:41.527320 systemd[1]: cri-containerd-b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691.scope: Deactivated successfully. Jul 2 07:58:41.527767 systemd[1]: cri-containerd-b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691.scope: Consumed 1.445s CPU time. Jul 2 07:58:41.550131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691-rootfs.mount: Deactivated successfully. 
Jul 2 07:58:41.573000 env[1434]: time="2024-07-02T07:58:41.572920248Z" level=info msg="shim disconnected" id=b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691 Jul 2 07:58:41.573000 env[1434]: time="2024-07-02T07:58:41.572987048Z" level=warning msg="cleaning up after shim disconnected" id=b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691 namespace=k8s.io Jul 2 07:58:41.573000 env[1434]: time="2024-07-02T07:58:41.573001149Z" level=info msg="cleaning up dead shim" Jul 2 07:58:41.582984 env[1434]: time="2024-07-02T07:58:41.582918780Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4319 runtime=io.containerd.runc.v2\n" Jul 2 07:58:42.188951 kubelet[2510]: I0702 07:58:42.188908 2510 scope.go:117] "RemoveContainer" containerID="b47c7b59555e6eeeeba7807f639ade7003e26d836dfa07b03177b95e4ca3c691" Jul 2 07:58:42.191479 env[1434]: time="2024-07-02T07:58:42.191428050Z" level=info msg="CreateContainer within sandbox \"9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 07:58:42.242963 env[1434]: time="2024-07-02T07:58:42.242891129Z" level=info msg="CreateContainer within sandbox \"9a1242621f56a3c6268f33c0b19bc561de6f3f26fad0243c2dde5851217a049e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bcde76244b6fdcbdeb4a8c9eb2b07f26f735ec1071c0e459b8b19d72b6061dff\"" Jul 2 07:58:42.243596 env[1434]: time="2024-07-02T07:58:42.243558338Z" level=info msg="StartContainer for \"bcde76244b6fdcbdeb4a8c9eb2b07f26f735ec1071c0e459b8b19d72b6061dff\"" Jul 2 07:58:42.278004 systemd[1]: Started cri-containerd-bcde76244b6fdcbdeb4a8c9eb2b07f26f735ec1071c0e459b8b19d72b6061dff.scope. 
Jul 2 07:58:42.329681 env[1434]: time="2024-07-02T07:58:42.329615274Z" level=info msg="StartContainer for \"bcde76244b6fdcbdeb4a8c9eb2b07f26f735ec1071c0e459b8b19d72b6061dff\" returns successfully" Jul 2 07:58:44.335985 kubelet[2510]: E0702 07:58:44.335927 2510 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:42166->10.200.8.25:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.5-a-37a211789c.17de566859ff7f58 kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.5-a-37a211789c,UID:dd088e04f1ef4e46f990f5d547dd2f84,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-37a211789c,},FirstTimestamp:2024-07-02 07:58:33.888612184 +0000 UTC m=+188.325316758,LastTimestamp:2024-07-02 07:58:33.888612184 +0000 UTC m=+188.325316758,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-37a211789c,}" Jul 2 07:58:50.352001 kubelet[2510]: I0702 07:58:50.351935 2510 status_manager.go:853] "Failed to get status for pod" podUID="dd088e04f1ef4e46f990f5d547dd2f84" pod="kube-system/kube-apiserver-ci-3510.3.5-a-37a211789c" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:42306->10.200.8.25:2379: read: connection timed out" Jul 2 07:58:51.521048 kubelet[2510]: E0702 07:58:51.520997 2510 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-37a211789c?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"