Feb 12 19:57:44.016638 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 19:57:44.016670 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:57:44.016684 kernel: BIOS-provided physical RAM map: Feb 12 19:57:44.016705 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 12 19:57:44.016715 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 12 19:57:44.016725 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 12 19:57:44.016741 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 12 19:57:44.016752 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 12 19:57:44.016762 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 12 19:57:44.016773 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 12 19:57:44.016784 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 12 19:57:44.016794 kernel: printk: bootconsole [earlyser0] enabled Feb 12 19:57:44.016805 kernel: NX (Execute Disable) protection: active Feb 12 19:57:44.016816 kernel: efi: EFI v2.70 by Microsoft Feb 12 19:57:44.016832 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Feb 12 19:57:44.016844 kernel: random: crng init done Feb 12 19:57:44.016855 kernel: SMBIOS 3.1.0 present. 
Feb 12 19:57:44.016867 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 12 19:57:44.016878 kernel: Hypervisor detected: Microsoft Hyper-V Feb 12 19:57:44.016890 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 12 19:57:44.016902 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 12 19:57:44.016913 kernel: Hyper-V: Nested features: 0x1e0101 Feb 12 19:57:44.016927 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 12 19:57:44.016939 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 12 19:57:44.016950 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 12 19:57:44.016962 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 12 19:57:44.016974 kernel: tsc: Detected 2593.905 MHz processor Feb 12 19:57:44.016987 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 19:57:44.016999 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 19:57:44.017011 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 12 19:57:44.017022 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 19:57:44.017034 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 12 19:57:44.017049 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 12 19:57:44.017060 kernel: Using GB pages for direct mapping Feb 12 19:57:44.017072 kernel: Secure boot disabled Feb 12 19:57:44.017084 kernel: ACPI: Early table checksum verification disabled Feb 12 19:57:44.017096 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 12 19:57:44.017107 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017120 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017132 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 12 19:57:44.017151 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 12 19:57:44.017164 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017177 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017190 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017202 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017215 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017230 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017243 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 12 19:57:44.017256 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 12 19:57:44.017269 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 12 19:57:44.017282 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 12 19:57:44.017294 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 12 19:57:44.017307 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 12 19:57:44.017320 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 12 19:57:44.017334 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Feb 12 19:57:44.017348 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 12 19:57:44.017360 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 12 19:57:44.017373 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 12 19:57:44.017386 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 12 19:57:44.017398 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 12 19:57:44.017411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 12 19:57:44.017424 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 12 19:57:44.017436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 12 19:57:44.017452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 12 19:57:44.017465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 12 19:57:44.017478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 12 19:57:44.017491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 12 19:57:44.017503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 12 19:57:44.017516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 12 19:57:44.017529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 12 19:57:44.017541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 12 19:57:44.017554 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 12 19:57:44.017570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 12 19:57:44.017583 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 12 19:57:44.017596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 12 19:57:44.017608 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 12 19:57:44.017621 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 12 19:57:44.017634 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 12 19:57:44.017647 kernel: Zone ranges: Feb 12 19:57:44.017660 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 19:57:44.017672 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 12 19:57:44.017688 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 12 19:57:44.019413 kernel: Movable zone start for each node Feb 12 19:57:44.019426 kernel: Early memory node ranges Feb 12 19:57:44.019439 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 12 19:57:44.019451 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 12 19:57:44.019464 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 12 19:57:44.019476 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 12 19:57:44.019489 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 12 19:57:44.019502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:57:44.019519 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 12 19:57:44.019532 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 12 19:57:44.019545 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 12 19:57:44.019557 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 12 19:57:44.019570 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 12 
19:57:44.019584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 19:57:44.019597 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 19:57:44.019610 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 12 19:57:44.019622 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 12 19:57:44.019638 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 12 19:57:44.019651 kernel: Booting paravirtualized kernel on Hyper-V Feb 12 19:57:44.019663 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 19:57:44.019676 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 12 19:57:44.019689 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 12 19:57:44.019752 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 12 19:57:44.019765 kernel: pcpu-alloc: [0] 0 1 Feb 12 19:57:44.019777 kernel: Hyper-V: PV spinlocks enabled Feb 12 19:57:44.019790 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 19:57:44.019806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 12 19:57:44.019818 kernel: Policy zone: Normal Feb 12 19:57:44.019833 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:57:44.019847 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:57:44.019859 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 12 19:57:44.019872 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 12 19:57:44.019885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:57:44.019897 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 12 19:57:44.019912 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 12 19:57:44.019926 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 19:57:44.019948 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 19:57:44.019963 kernel: rcu: Hierarchical RCU implementation. Feb 12 19:57:44.019978 kernel: rcu: RCU event tracing is enabled. Feb 12 19:57:44.019992 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 12 19:57:44.020005 kernel: Rude variant of Tasks RCU enabled. Feb 12 19:57:44.020019 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:57:44.020033 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 12 19:57:44.020046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 12 19:57:44.020060 kernel: Using NULL legacy PIC Feb 12 19:57:44.020076 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 12 19:57:44.020090 kernel: Console: colour dummy device 80x25 Feb 12 19:57:44.020103 kernel: printk: console [tty1] enabled Feb 12 19:57:44.020117 kernel: printk: console [ttyS0] enabled Feb 12 19:57:44.020130 kernel: printk: bootconsole [earlyser0] disabled Feb 12 19:57:44.020146 kernel: ACPI: Core revision 20210730 Feb 12 19:57:44.020160 kernel: Failed to register legacy timer interrupt Feb 12 19:57:44.020173 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 19:57:44.020186 kernel: Hyper-V: Using IPI hypercalls Feb 12 19:57:44.020200 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 12 19:57:44.020213 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 12 19:57:44.020227 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 12 19:57:44.020240 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 19:57:44.020253 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 19:57:44.020266 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 19:57:44.020282 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 19:57:44.020295 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 12 19:57:44.020309 kernel: RETBleed: Vulnerable Feb 12 19:57:44.020321 kernel: Speculative Store Bypass: Vulnerable Feb 12 19:57:44.020335 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 19:57:44.020349 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 19:57:44.020362 kernel: GDS: Unknown: Dependent on hypervisor status Feb 12 19:57:44.020375 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 19:57:44.020388 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 19:57:44.020401 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 19:57:44.020417 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 12 19:57:44.020430 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 12 19:57:44.020444 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 12 19:57:44.020457 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 19:57:44.020471 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 12 19:57:44.020484 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 12 19:57:44.020497 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 12 19:57:44.020510 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 12 19:57:44.020524 kernel: Freeing SMP alternatives memory: 32K Feb 12 19:57:44.020537 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:57:44.020550 kernel: LSM: Security Framework initializing Feb 12 19:57:44.020564 kernel: SELinux: Initializing. 
Feb 12 19:57:44.020580 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 19:57:44.020593 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 19:57:44.020607 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 12 19:57:44.020620 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 12 19:57:44.020634 kernel: signal: max sigframe size: 3632 Feb 12 19:57:44.020648 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:57:44.020661 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 19:57:44.020675 kernel: smp: Bringing up secondary CPUs ... Feb 12 19:57:44.020688 kernel: x86: Booting SMP configuration: Feb 12 19:57:44.020709 kernel: .... node #0, CPUs: #1 Feb 12 19:57:44.020727 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 12 19:57:44.020741 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 12 19:57:44.020755 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 19:57:44.020767 kernel: smpboot: Max logical packages: 1 Feb 12 19:57:44.020781 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 12 19:57:44.020795 kernel: devtmpfs: initialized Feb 12 19:57:44.020808 kernel: x86/mm: Memory block size: 128MB Feb 12 19:57:44.020822 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 12 19:57:44.020838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:57:44.020852 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 12 19:57:44.020865 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:57:44.020879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:57:44.020893 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:57:44.020907 kernel: audit: type=2000 audit(1707767862.023:1): state=initialized audit_enabled=0 res=1 Feb 12 19:57:44.020920 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:57:44.020933 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 19:57:44.020947 kernel: cpuidle: using governor menu Feb 12 19:57:44.020963 kernel: ACPI: bus type PCI registered Feb 12 19:57:44.020976 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:57:44.020990 kernel: dca service started, version 1.12.1 Feb 12 19:57:44.021004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 19:57:44.021018 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 19:57:44.021032 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:57:44.021045 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:57:44.021059 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:57:44.021072 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:57:44.021088 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:57:44.021101 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:57:44.021115 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:57:44.021129 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:57:44.021142 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:57:44.021156 kernel: ACPI: Interpreter enabled Feb 12 19:57:44.021169 kernel: ACPI: PM: (supports S0 S5) Feb 12 19:57:44.021182 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 19:57:44.021196 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 19:57:44.021212 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 12 19:57:44.021226 kernel: iommu: Default domain type: Translated Feb 12 19:57:44.021240 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 19:57:44.021253 kernel: vgaarb: loaded Feb 12 19:57:44.021266 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 19:57:44.021280 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:57:44.021294 kernel: PTP clock support registered Feb 12 19:57:44.021307 kernel: Registered efivars operations Feb 12 19:57:44.021321 kernel: PCI: Using ACPI for IRQ routing Feb 12 19:57:44.021334 kernel: PCI: System does not support PCI Feb 12 19:57:44.021350 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 12 19:57:44.021364 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:57:44.021377 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:57:44.021391 kernel: pnp: PnP ACPI init Feb 12 19:57:44.021404 kernel: pnp: PnP ACPI: found 3 devices Feb 12 19:57:44.021418 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 19:57:44.021432 kernel: NET: Registered PF_INET protocol family Feb 12 19:57:44.021445 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 19:57:44.021462 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 12 19:57:44.021476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:57:44.021489 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 12 19:57:44.021503 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 12 19:57:44.021517 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 12 19:57:44.021530 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 12 19:57:44.021544 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 12 19:57:44.021557 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:57:44.021571 kernel: NET: Registered PF_XDP protocol family Feb 12 19:57:44.021587 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:57:44.021600 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 12 19:57:44.021614 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 12 19:57:44.021628 kernel: 
RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 12 19:57:44.021641 kernel: Initialise system trusted keyrings Feb 12 19:57:44.021655 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 12 19:57:44.021669 kernel: Key type asymmetric registered Feb 12 19:57:44.021682 kernel: Asymmetric key parser 'x509' registered Feb 12 19:57:44.021704 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:57:44.021720 kernel: io scheduler mq-deadline registered Feb 12 19:57:44.021734 kernel: io scheduler kyber registered Feb 12 19:57:44.021747 kernel: io scheduler bfq registered Feb 12 19:57:44.021760 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 19:57:44.021774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:57:44.021788 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 19:57:44.021804 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 12 19:57:44.021817 kernel: i8042: PNP: No PS/2 controller found. Feb 12 19:57:44.021972 kernel: rtc_cmos 00:02: registered as rtc0 Feb 12 19:57:44.022087 kernel: rtc_cmos 00:02: setting system clock to 2024-02-12T19:57:43 UTC (1707767863) Feb 12 19:57:44.022207 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 12 19:57:44.022225 kernel: fail to initialize ptp_kvm Feb 12 19:57:44.022240 kernel: intel_pstate: CPU model not supported Feb 12 19:57:44.022253 kernel: efifb: probing for efifb Feb 12 19:57:44.022267 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 12 19:57:44.022281 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 12 19:57:44.022294 kernel: efifb: scrolling: redraw Feb 12 19:57:44.022311 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 12 19:57:44.022325 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:57:44.022338 kernel: fb0: EFI VGA frame buffer device Feb 12 19:57:44.022352 kernel: pstore: Registered efi as persistent store backend Feb 12 19:57:44.022365 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:57:44.022378 kernel: Segment Routing with IPv6 Feb 12 19:57:44.022391 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:57:44.022403 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:57:44.022415 kernel: Key type dns_resolver registered Feb 12 19:57:44.022432 kernel: IPI shorthand broadcast: enabled Feb 12 19:57:44.022450 kernel: sched_clock: Marking stable (690270100, 18790800)->(869494700, -160433800) Feb 12 19:57:44.022469 kernel: registered taskstats version 1 Feb 12 19:57:44.022490 kernel: Loading compiled-in X.509 certificates Feb 12 19:57:44.022507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 19:57:44.022519 kernel: Key type .fscrypt registered Feb 12 19:57:44.022533 kernel: Key type fscrypt-provisioning registered Feb 12 19:57:44.022545 kernel: pstore: Using crash dump compression: deflate Feb 12 19:57:44.022561 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 12 19:57:44.022573 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:57:44.022587 kernel: ima: No architecture policies found Feb 12 19:57:44.022599 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:57:44.022612 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:57:44.022626 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:57:44.022639 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:57:44.022652 kernel: Run /init as init process Feb 12 19:57:44.022665 kernel: with arguments: Feb 12 19:57:44.022677 kernel: /init Feb 12 19:57:44.038643 kernel: with environment: Feb 12 19:57:44.038670 kernel: HOME=/ Feb 12 19:57:44.038684 kernel: TERM=linux Feb 12 19:57:44.038722 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:57:44.038737 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:57:44.038754 systemd[1]: Detected virtualization microsoft. Feb 12 19:57:44.038774 systemd[1]: Detected architecture x86-64. Feb 12 19:57:44.038792 systemd[1]: Running in initrd. Feb 12 19:57:44.038805 systemd[1]: No hostname configured, using default hostname. Feb 12 19:57:44.038819 systemd[1]: Hostname set to . Feb 12 19:57:44.038833 systemd[1]: Initializing machine ID from random generator. Feb 12 19:57:44.038852 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:57:44.038867 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:57:44.038882 systemd[1]: Reached target cryptsetup.target. Feb 12 19:57:44.038896 systemd[1]: Reached target paths.target. Feb 12 19:57:44.038910 systemd[1]: Reached target slices.target. Feb 12 19:57:44.038932 systemd[1]: Reached target swap.target. Feb 12 19:57:44.038947 systemd[1]: Reached target timers.target. Feb 12 19:57:44.038961 systemd[1]: Listening on iscsid.socket. Feb 12 19:57:44.038980 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:57:44.038996 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:57:44.039010 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:57:44.039024 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:57:44.039042 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:57:44.039061 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:57:44.039076 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:57:44.039090 systemd[1]: Reached target sockets.target. Feb 12 19:57:44.039106 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:57:44.039125 systemd[1]: Finished network-cleanup.service. Feb 12 19:57:44.039139 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:57:44.039152 systemd[1]: Starting systemd-journald.service... Feb 12 19:57:44.039173 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:57:44.039190 systemd[1]: Starting systemd-resolved.service... Feb 12 19:57:44.039209 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:57:44.039225 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:57:44.039238 systemd[1]: Finished systemd-fsck-usr.service. 
Feb 12 19:57:44.039252 kernel: audit: type=1130 audit(1707767864.022:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.039273 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:57:44.039290 systemd-journald[183]: Journal started Feb 12 19:57:44.039361 systemd-journald[183]: Runtime Journal (/run/log/journal/63586a3d8a5049369f362026d9c84924) is 8.0M, max 159.0M, 151.0M free. Feb 12 19:57:44.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.033426 systemd-modules-load[184]: Inserted module 'overlay' Feb 12 19:57:44.050767 systemd[1]: Started systemd-journald.service. Feb 12 19:57:44.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.065735 kernel: audit: type=1130 audit(1707767864.045:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.071867 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:57:44.080378 systemd-resolved[185]: Positive Trust Anchors: Feb 12 19:57:44.104700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:57:44.104729 kernel: audit: type=1130 audit(1707767864.069:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.104747 kernel: Bridge firewalling registered Feb 12 19:57:44.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.080600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:57:44.080625 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:57:44.080660 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:57:44.083339 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 12 19:57:44.124643 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 12 19:57:44.127724 systemd[1]: Started systemd-resolved.service. Feb 12 19:57:44.132054 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:57:44.136672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:57:44.141442 systemd[1]: Reached target nss-lookup.target. Feb 12 19:57:44.146329 systemd[1]: Starting dracut-cmdline.service... 
Feb 12 19:57:44.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.172229 kernel: audit: type=1130 audit(1707767864.131:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.172276 kernel: audit: type=1130 audit(1707767864.135:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.173063 kernel: audit: type=1130 audit(1707767864.140:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.184098 kernel: SCSI subsystem initialized Feb 12 19:57:44.185233 dracut-cmdline[200]: dracut-dracut-053 Feb 12 19:57:44.189811 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 19:57:44.222709 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:57:44.222753 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:57:44.231560 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:57:44.235545 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 12 19:57:44.238441 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:57:44.243391 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:57:44.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.260446 kernel: audit: type=1130 audit(1707767864.241:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.260678 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:57:44.267184 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:57:44.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:57:44.278711 kernel: audit: type=1130 audit(1707767864.262:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.287714 kernel: iscsi: registered transport (tcp) Feb 12 19:57:44.311065 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:57:44.311110 kernel: QLogic iSCSI HBA Driver Feb 12 19:57:44.339199 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:57:44.342003 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:57:44.357974 kernel: audit: type=1130 audit(1707767864.341:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.395713 kernel: raid6: avx512x4 gen() 18544 MB/s Feb 12 19:57:44.415713 kernel: raid6: avx512x4 xor() 8729 MB/s Feb 12 19:57:44.435705 kernel: raid6: avx512x2 gen() 18635 MB/s Feb 12 19:57:44.454707 kernel: raid6: avx512x2 xor() 29985 MB/s Feb 12 19:57:44.474708 kernel: raid6: avx512x1 gen() 18577 MB/s Feb 12 19:57:44.494704 kernel: raid6: avx512x1 xor() 27036 MB/s Feb 12 19:57:44.514705 kernel: raid6: avx2x4 gen() 18575 MB/s Feb 12 19:57:44.534707 kernel: raid6: avx2x4 xor() 7948 MB/s Feb 12 19:57:44.554706 kernel: raid6: avx2x2 gen() 18527 MB/s Feb 12 19:57:44.574704 kernel: raid6: avx2x2 xor() 22334 MB/s Feb 12 19:57:44.594704 kernel: raid6: avx2x1 gen() 13807 MB/s Feb 12 19:57:44.613702 kernel: raid6: avx2x1 xor() 19494 MB/s Feb 12 19:57:44.633703 kernel: raid6: sse2x4 gen() 11756 MB/s Feb 12 19:57:44.653703 kernel: raid6: sse2x4 xor() 7325 MB/s Feb 12 19:57:44.673702 kernel: raid6: sse2x2 gen() 12924 MB/s Feb 12 19:57:44.693702 kernel: raid6: sse2x2 xor() 7541 MB/s Feb 12 19:57:44.713704 kernel: raid6: sse2x1 gen() 11649 MB/s Feb 12 19:57:44.736183 kernel: raid6: sse2x1 xor() 5942 MB/s Feb 12 19:57:44.736217 kernel: raid6: using algorithm avx512x2 gen() 18635 MB/s Feb 12 19:57:44.736229 kernel: raid6: .... xor() 29985 MB/s, rmw enabled Feb 12 19:57:44.739208 kernel: raid6: using avx512x2 recovery algorithm Feb 12 19:57:44.758714 kernel: xor: automatically using best checksumming function avx Feb 12 19:57:44.853721 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:57:44.861642 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:57:44.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.864000 audit: BPF prog-id=7 op=LOAD Feb 12 19:57:44.864000 audit: BPF prog-id=8 op=LOAD Feb 12 19:57:44.866021 systemd[1]: Starting systemd-udevd.service... Feb 12 19:57:44.880395 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 19:57:44.886930 systemd[1]: Started systemd-udevd.service. Feb 12 19:57:44.890019 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:57:44.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:57:44.910080 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Feb 12 19:57:44.940321 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:57:44.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:44.945837 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:57:44.978997 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:57:44.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:45.036712 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:57:45.042989 kernel: hv_vmbus: Vmbus version:5.2 Feb 12 19:57:45.054711 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 12 19:57:45.075708 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 12 19:57:45.093776 kernel: hv_vmbus: registering driver hv_netvsc Feb 12 19:57:45.100838 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 12 19:57:45.100881 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:57:45.108710 kernel: hv_vmbus: registering driver hv_storvsc Feb 12 19:57:45.113713 kernel: hv_vmbus: registering driver hid_hyperv Feb 12 19:57:45.113768 kernel: AES CTR mode by8 optimization enabled Feb 12 19:57:45.121664 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 12 19:57:45.121715 kernel: scsi host0: storvsc_host_t Feb 12 19:57:45.126757 kernel: scsi host1: storvsc_host_t Feb 12 19:57:45.131718 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 12 19:57:45.131763 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 12 19:57:45.143713 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 12 19:57:45.169290 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 12 19:57:45.169482 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 12 19:57:45.170716 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 12 19:57:45.185022 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 12 19:57:45.185243 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 12 19:57:45.188470 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 12 19:57:45.188661 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 12 19:57:45.194709 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 12 19:57:45.198709 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:57:45.203049 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 12 19:57:45.261986 kernel: hv_netvsc 0022489e-8a3f-0022-489e-8a3f0022489e eth0: VF slot 1 added Feb 12 19:57:45.270990 kernel: hv_vmbus: registering driver hv_pci Feb 12 19:57:45.278172 kernel: hv_pci 5fa0486a-fbd2-447a-9adc-6e91df460d64: PCI VMBus probing: Using version 0x10004 Feb 12 19:57:45.278337 kernel: hv_pci 5fa0486a-fbd2-447a-9adc-6e91df460d64: PCI host bridge to bus fbd2:00 Feb 12 19:57:45.287024 kernel: pci_bus fbd2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 12 19:57:45.287202 kernel: pci_bus fbd2:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:57:45.295964 kernel: pci fbd2:00:02.0: 
[15b3:1016] type 00 class 0x020000 Feb 12 19:57:45.304165 kernel: pci fbd2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:57:45.319707 kernel: pci fbd2:00:02.0: enabling Extended Tags Feb 12 19:57:45.335920 kernel: pci fbd2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbd2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 12 19:57:45.344389 kernel: pci_bus fbd2:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:57:45.344574 kernel: pci fbd2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 12 19:57:45.435721 kernel: mlx5_core fbd2:00:02.0: firmware version: 14.30.1224 Feb 12 19:57:45.525392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:57:45.593895 kernel: mlx5_core fbd2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 12 19:57:45.601720 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449) Feb 12 19:57:45.615153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:57:45.682123 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:57:45.726625 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:57:45.729744 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:57:45.740567 systemd[1]: Starting disk-uuid.service... Feb 12 19:57:45.763127 kernel: mlx5_core fbd2:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 12 19:57:45.763397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:57:45.763419 kernel: mlx5_core fbd2:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing Feb 12 19:57:45.779227 kernel: hv_netvsc 0022489e-8a3f-0022-489e-8a3f0022489e eth0: VF registering: eth1 Feb 12 19:57:45.784717 kernel: mlx5_core fbd2:00:02.0 eth1: joined to eth0 Feb 12 19:57:45.797714 kernel: mlx5_core fbd2:00:02.0 enP64466s1: renamed from eth1 Feb 12 19:57:46.769714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:57:46.770619 disk-uuid[566]: The operation has completed successfully. Feb 12 19:57:46.845495 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:57:46.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:46.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:46.845596 systemd[1]: Finished disk-uuid.service. Feb 12 19:57:46.857017 systemd[1]: Starting verity-setup.service... Feb 12 19:57:46.890714 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:57:47.093104 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:57:47.097048 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:57:47.101919 systemd[1]: Finished verity-setup.service. Feb 12 19:57:47.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.174725 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:57:47.171950 systemd[1]: Mounted sysusr-usr.mount. 
Feb 12 19:57:47.174253 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:57:47.179680 systemd[1]: Starting ignition-setup.service... Feb 12 19:57:47.184335 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:57:47.199616 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:57:47.199650 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:57:47.199661 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:57:47.253639 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:57:47.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.258000 audit: BPF prog-id=9 op=LOAD Feb 12 19:57:47.260248 systemd[1]: Starting systemd-networkd.service... Feb 12 19:57:47.270801 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:57:47.285973 systemd-networkd[810]: lo: Link UP Feb 12 19:57:47.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.285981 systemd-networkd[810]: lo: Gained carrier Feb 12 19:57:47.286493 systemd-networkd[810]: Enumeration completed Feb 12 19:57:47.286559 systemd[1]: Started systemd-networkd.service. Feb 12 19:57:47.290027 systemd[1]: Reached target network.target. Feb 12 19:57:47.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.294021 systemd-networkd[810]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:57:47.298059 systemd[1]: Starting iscsiuio.service... Feb 12 19:57:47.300886 systemd[1]: Started iscsiuio.service. Feb 12 19:57:47.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.312971 iscsid[817]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:57:47.312971 iscsid[817]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:57:47.312971 iscsid[817]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:57:47.312971 iscsid[817]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:57:47.312971 iscsid[817]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:57:47.312971 iscsid[817]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:57:47.312971 iscsid[817]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:57:47.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.303852 systemd[1]: Starting iscsid.service...
Feb 12 19:57:47.311024 systemd[1]: Started iscsid.service. Feb 12 19:57:47.313850 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:57:47.339306 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:57:47.343767 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:57:47.369729 kernel: mlx5_core fbd2:00:02.0 enP64466s1: Link up Feb 12 19:57:47.348574 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:57:47.353394 systemd[1]: Reached target remote-fs.target. Feb 12 19:57:47.356035 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:57:47.380464 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:57:47.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.420081 systemd[1]: Finished ignition-setup.service. Feb 12 19:57:47.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:47.424965 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:57:47.450365 kernel: hv_netvsc 0022489e-8a3f-0022-489e-8a3f0022489e eth0: Data path switched to VF: enP64466s1 Feb 12 19:57:47.450628 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:57:47.450817 systemd-networkd[810]: enP64466s1: Link UP Feb 12 19:57:47.450951 systemd-networkd[810]: eth0: Link UP Feb 12 19:57:47.451142 systemd-networkd[810]: eth0: Gained carrier Feb 12 19:57:47.457874 systemd-networkd[810]: enP64466s1: Gained carrier Feb 12 19:57:47.479774 systemd-networkd[810]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:57:48.996931 systemd-networkd[810]: eth0: Gained IPv6LL Feb 12 19:57:50.106771 ignition[832]: Ignition 2.14.0 Feb 12 19:57:50.106788 ignition[832]: Stage: fetch-offline Feb 12 19:57:50.106876 ignition[832]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:50.106939 ignition[832]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:50.135855 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:50.176428 ignition[832]: parsed url from cmdline: "" Feb 12 19:57:50.176441 ignition[832]: no config URL provided Feb 12 19:57:50.176451 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:57:50.178134 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:57:50.176467 ignition[832]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:57:50.176475 ignition[832]: failed to fetch config: resource requires networking Feb 12 19:57:50.177085 ignition[832]: Ignition finished successfully Feb 12 19:57:50.196758 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 12 19:57:50.196801 kernel: audit: type=1130 audit(1707767870.190:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.192361 systemd[1]: Starting ignition-fetch.service... 
Feb 12 19:57:50.200850 ignition[838]: Ignition 2.14.0 Feb 12 19:57:50.200855 ignition[838]: Stage: fetch Feb 12 19:57:50.200958 ignition[838]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:50.200982 ignition[838]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:50.210591 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:50.210892 ignition[838]: parsed url from cmdline: "" Feb 12 19:57:50.210896 ignition[838]: no config URL provided Feb 12 19:57:50.210903 ignition[838]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:57:50.210913 ignition[838]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:57:50.210967 ignition[838]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:57:50.250909 ignition[838]: GET result: OK Feb 12 19:57:50.250958 ignition[838]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 12 19:57:50.323668 ignition[838]: opening config device: "/dev/sr0" Feb 12 19:57:50.324014 ignition[838]: getting drive status for "/dev/sr0" Feb 12 19:57:50.324055 ignition[838]: drive status: OK Feb 12 19:57:50.324098 ignition[838]: mounting config device Feb 12 19:57:50.324135 ignition[838]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure341752968" Feb 12 19:57:50.346507 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/13 00:00 (1000) Feb 12 19:57:50.345681 ignition[838]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure341752968" Feb 12 19:57:50.345725 ignition[838]: checking for config drive Feb 12 19:57:50.347171 systemd[1]: tmp-ignition\x2dazure341752968.mount: Deactivated successfully. Feb 12 19:57:50.345980 ignition[838]: reading config Feb 12 19:57:50.346468 ignition[838]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure341752968" Feb 12 19:57:50.346548 ignition[838]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure341752968" Feb 12 19:57:50.346563 ignition[838]: config has been read from custom data Feb 12 19:57:50.346640 ignition[838]: parsing config with SHA512: 45be6c0f5d9f92dbb98b703aa727b52548e9920bf1ddde0d98bcaebfa242a0beb33ebbbe205ac1a81433d8278191f7c9e76f5440577467c30c33b4013abc2172 Feb 12 19:57:50.385526 unknown[838]: fetched base config from "system" Feb 12 19:57:50.385756 unknown[838]: fetched base config from "system" Feb 12 19:57:50.386352 ignition[838]: fetch: fetch complete Feb 12 19:57:50.385764 unknown[838]: fetched user config from "azure" Feb 12 19:57:50.386359 ignition[838]: fetch: fetch passed Feb 12 19:57:50.386400 ignition[838]: Ignition finished successfully Feb 12 19:57:50.397182 systemd[1]: Finished ignition-fetch.service. Feb 12 19:57:50.413565 kernel: audit: type=1130 audit(1707767870.398:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.399954 systemd[1]: Starting ignition-kargs.service... 
Feb 12 19:57:50.420568 ignition[845]: Ignition 2.14.0 Feb 12 19:57:50.420578 ignition[845]: Stage: kargs Feb 12 19:57:50.420737 ignition[845]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:50.420770 ignition[845]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:50.425164 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:50.428000 ignition[845]: kargs: kargs passed Feb 12 19:57:50.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.429197 systemd[1]: Finished ignition-kargs.service. Feb 12 19:57:50.445361 kernel: audit: type=1130 audit(1707767870.430:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.428042 ignition[845]: Ignition finished successfully Feb 12 19:57:50.431861 systemd[1]: Starting ignition-disks.service... Feb 12 19:57:50.450110 ignition[851]: Ignition 2.14.0 Feb 12 19:57:50.450120 ignition[851]: Stage: disks Feb 12 19:57:50.450235 ignition[851]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:50.450259 ignition[851]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:50.453257 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:50.456169 ignition[851]: disks: disks passed Feb 12 19:57:50.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.459282 systemd[1]: Finished ignition-disks.service. Feb 12 19:57:50.476882 kernel: audit: type=1130 audit(1707767870.461:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.456215 ignition[851]: Ignition finished successfully Feb 12 19:57:50.462302 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:57:50.473205 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:57:50.476883 systemd[1]: Reached target local-fs.target. Feb 12 19:57:50.480255 systemd[1]: Reached target sysinit.target. Feb 12 19:57:50.488339 systemd[1]: Reached target basic.target. Feb 12 19:57:50.492474 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:57:50.538239 systemd-fsck[859]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 12 19:57:50.543200 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:57:50.562621 kernel: audit: type=1130 audit(1707767870.545:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:50.546743 systemd[1]: Mounting sysroot.mount... 
Feb 12 19:57:50.573710 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:57:50.574135 systemd[1]: Mounted sysroot.mount. Feb 12 19:57:50.577554 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:57:50.609288 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:57:50.613799 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:57:50.617655 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:57:50.617706 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:57:50.627785 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:57:50.652907 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:57:50.663706 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (869) Feb 12 19:57:50.663888 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:57:50.677263 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:57:50.677298 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:57:50.677312 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:57:50.686332 initrd-setup-root[874]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:57:50.683878 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:57:50.712674 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:57:50.719346 initrd-setup-root[908]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:57:50.725481 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:57:51.154360 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:57:51.157730 systemd[1]: Starting ignition-mount.service... Feb 12 19:57:51.175546 kernel: audit: type=1130 audit(1707767871.156:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:51.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:51.176162 systemd[1]: Starting sysroot-boot.service... Feb 12 19:57:51.196632 systemd[1]: Finished sysroot-boot.service. Feb 12 19:57:51.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:51.212721 kernel: audit: type=1130 audit(1707767871.200:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:57:51.213830 ignition[937]: INFO : Ignition 2.14.0 Feb 12 19:57:51.213830 ignition[937]: INFO : Stage: mount Feb 12 19:57:51.217866 ignition[937]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:51.217866 ignition[937]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:51.227902 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:51.227902 ignition[937]: INFO : mount: mount passed Feb 12 19:57:51.227902 ignition[937]: INFO : Ignition finished successfully Feb 12 19:57:51.245230 kernel: audit: type=1130 audit(1707767871.227:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:51.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:51.223263 systemd[1]: Finished ignition-mount.service. Feb 12 19:57:51.347353 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:57:51.347466 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 19:57:51.904184 coreos-metadata[868]: Feb 12 19:57:51.904 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:57:51.916265 coreos-metadata[868]: Feb 12 19:57:51.916 INFO Fetch successful Feb 12 19:57:51.950391 coreos-metadata[868]: Feb 12 19:57:51.950 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:57:51.965811 coreos-metadata[868]: Feb 12 19:57:51.965 INFO Fetch successful Feb 12 19:57:51.999221 coreos-metadata[868]: Feb 12 19:57:51.999 INFO wrote hostname ci-3510.3.2-a-d5221102be to /sysroot/etc/hostname Feb 12 19:57:52.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:52.001191 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:57:52.022006 kernel: audit: type=1130 audit(1707767872.005:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:57:52.007238 systemd[1]: Starting ignition-files.service... Feb 12 19:57:52.025147 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:57:52.042713 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (947) Feb 12 19:57:52.042753 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:57:52.042766 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:57:52.048622 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:57:52.053114 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
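After the mount stage, flatcar-metadata-hostname queries IMDS for the instance name (api-version 2017-08-01, as logged) and writes it to /sysroot/etc/hostname. A minimal sketch of those two steps follows; the destination path is a parameter so it can be pointed somewhere harmless for a dry run, and this is an illustration rather than the coreos-metadata implementation.

import urllib.request
from pathlib import Path

# Endpoint and api-version copied from the coreos-metadata[868] entries above.
IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/"
             "name?api-version=2017-08-01&format=text")

def write_hostname(dest="/sysroot/etc/hostname"):
    """Fetch the VM name from IMDS and persist it as the hostname file."""
    req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()
    Path(dest).write_text(name + "\n")
    return name

if __name__ == "__main__":
    # Write to a local file for a dry run instead of /sysroot/etc/hostname.
    print("wrote hostname:", write_hostname("./hostname.out"))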
Feb 12 19:57:52.066819 ignition[966]: INFO : Ignition 2.14.0 Feb 12 19:57:52.066819 ignition[966]: INFO : Stage: files Feb 12 19:57:52.070882 ignition[966]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:57:52.070882 ignition[966]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:57:52.079143 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:57:52.089187 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:57:52.109763 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:57:52.109763 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:57:52.134606 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:57:52.138773 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:57:52.157523 unknown[966]: wrote ssh authorized keys file for user: core Feb 12 19:57:52.160535 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:57:52.164115 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 19:57:52.168399 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:57:52.794068 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:57:52.935508 ignition[966]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 19:57:52.942709 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 19:57:52.942709 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:57:52.942709 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 19:57:53.850853 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:57:53.985329 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 19:57:53.990403 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 19:57:53.990403 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 19:57:54.488961 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:57:54.658196 ignition[966]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 19:57:54.665763 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 19:57:54.665763 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:57:54.665763 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Feb 12 19:57:55.491710 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:58:14.759406 ignition[966]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Feb 12 19:58:14.767634 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:58:14.767634 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:58:14.767634 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:58:14.887377 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:58:15.193066 ignition[966]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 12 19:58:15.200933 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:58:15.200933 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:58:15.200933 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:58:15.323838 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:58:16.002708 ignition[966]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:58:16.009828 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 
12 19:58:16.033416 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:58:16.033416 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:58:16.041432 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:58:16.041432 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:58:16.049124 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:58:16.053118 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:58:16.056998 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:58:16.056998 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:58:16.056998 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:58:16.074532 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3942441313" Feb 12 19:58:16.079100 ignition[966]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3942441313": device or resource busy Feb 12 19:58:16.079100 ignition[966]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3942441313", trying btrfs: device or resource busy Feb 12 19:58:16.079100 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3942441313" Feb 12 19:58:16.099643 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (969) Feb 12 19:58:16.099786 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3942441313" Feb 12 19:58:16.104808 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3942441313" Feb 12 19:58:16.108647 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3942441313" Feb 12 19:58:16.108647 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:58:16.108647 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:58:16.108647 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:58:16.106985 systemd[1]: mnt-oem3942441313.mount: Deactivated successfully. 
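The files stage above downloads crictl, helm, the CNI plugins, kubectl, kubeadm, and kubelet, and accepts each file only after it "matches expected sum of" a SHA-512 digest. Below is a sketch of that download-and-verify pattern, reusing the crictl URL and digest from the op(3) entries; a production fetcher would also retry and stream to disk rather than buffering in memory.

import hashlib
import urllib.request

# URL and expected digest copied from the ignition[966] op(3) entries above.
URL = ("https://github.com/kubernetes-sigs/cri-tools/releases/download/"
       "v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz")
EXPECTED = ("aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc"
            "31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a")

def fetch_verified(url=URL, expected=EXPECTED):
    """Download a file and refuse it unless its SHA-512 matches the expected sum."""
    with urllib.request.urlopen(url, timeout=60) as resp:
        blob = resp.read()
    digest = hashlib.sha512(blob).hexdigest()
    if digest != expected:
        raise ValueError("checksum mismatch: got " + digest)
    return blob

if __name__ == "__main__":
    print("verified", len(fetch_verified()), "bytes")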
Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3028662173" Feb 12 19:58:16.129753 ignition[966]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3028662173": device or resource busy Feb 12 19:58:16.129753 ignition[966]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3028662173", trying btrfs: device or resource busy Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3028662173" Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3028662173" Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3028662173" Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3028662173" Feb 12 19:58:16.129753 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(17): [started] processing unit "waagent.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(17): [finished] processing unit "waagent.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:58:16.129753 ignition[966]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 12 19:58:16.157590 kernel: audit: type=1130 audit(1707767896.129:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.120776 systemd[1]: mnt-oem3028662173.mount: Deactivated successfully. 
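Both the waagent.service and nvidia.service writes above mount the OEM partition by label, fail with "device or resource busy" when trying ext4, and succeed on a second attempt as btrfs. The sketch below shows that try-each-filesystem pattern by shelling out to mount(8); it needs root, uses a temporary mount point of its own, and is not Ignition's internal mount code.

import subprocess
import tempfile

def mount_with_fallback(device="/dev/disk/by-label/OEM", fstypes=("ext4", "btrfs")):
    """Try each filesystem type in turn, as the op(f)/op(13) entries above do."""
    mountpoint = tempfile.mkdtemp(prefix="oem-")
    last_err = None
    for fstype in fstypes:
        result = subprocess.run(
            ["mount", "-t", fstype, device, mountpoint],
            capture_output=True, text=True)
        if result.returncode == 0:
            return mountpoint  # the caller unmounts when done, mirroring op(12)/op(16)
        last_err = result.stderr.strip()
    raise RuntimeError("could not mount {}: {}".format(device, last_err))

if __name__ == "__main__":
    print("mounted at", mount_with_fallback())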
Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1f): [started] setting preset to enabled for "waagent.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(1f): [finished] setting preset to enabled for "waagent.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:58:16.157960 ignition[966]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:58:16.157960 ignition[966]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:58:16.157960 ignition[966]: INFO : files: files passed Feb 12 19:58:16.157960 ignition[966]: INFO : Ignition finished successfully Feb 12 19:58:16.126613 systemd[1]: Finished ignition-files.service. Feb 12 19:58:16.143761 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:58:16.272894 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:58:16.159054 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:58:16.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.208797 systemd[1]: Starting ignition-quench.service... 
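The op(1f) through op(23) entries above set a preset of "enabled" for waagent, nvidia, and the prepare-* units so they are started on first boot. One way to get the same effect on a running system is a systemd preset file plus `systemctl preset`, sketched below; the preset file name is purely illustrative and is not the path Ignition writes, nor is this how Ignition applies presets internally.

import subprocess
from pathlib import Path

# Unit names taken from the preset entries logged above.
UNITS = ["waagent.service", "nvidia.service", "prepare-cni-plugins.service",
         "prepare-critools.service", "prepare-helm.service"]

def preset_enabled(units=UNITS,
                   preset_path="/etc/systemd/system-preset/20-example.preset"):
    """Write an 'enable' preset line per unit, then let systemd apply the presets."""
    Path(preset_path).write_text("".join("enable {}\n".format(u) for u in units))
    subprocess.run(["systemctl", "preset"] + units, check=True)

if __name__ == "__main__":
    preset_enabled()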
Feb 12 19:58:16.299487 kernel: audit: type=1130 audit(1707767896.281:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.278245 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:58:16.284889 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:58:16.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.284969 systemd[1]: Finished ignition-quench.service. Feb 12 19:58:16.339386 kernel: audit: type=1130 audit(1707767896.298:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.339422 kernel: audit: type=1131 audit(1707767896.298:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.299523 systemd[1]: Reached target ignition-complete.target. Feb 12 19:58:16.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.300322 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:58:16.366138 kernel: audit: type=1130 audit(1707767896.332:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.366167 kernel: audit: type=1131 audit(1707767896.332:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.331906 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:58:16.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.331992 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:58:16.380856 kernel: audit: type=1130 audit(1707767896.363:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.333103 systemd[1]: Reached target initrd-fs.target. Feb 12 19:58:16.333362 systemd[1]: Reached target initrd.target. Feb 12 19:58:16.333772 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:58:16.334474 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:58:16.356298 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 12 19:58:16.377251 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:58:16.400154 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:58:16.404020 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:58:16.408106 systemd[1]: Stopped target timers.target. Feb 12 19:58:16.411771 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:58:16.414084 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:58:16.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.418114 systemd[1]: Stopped target initrd.target. Feb 12 19:58:16.432871 kernel: audit: type=1131 audit(1707767896.417:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.432998 systemd[1]: Stopped target basic.target. Feb 12 19:58:16.436457 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:58:16.440368 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:58:16.444282 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:58:16.448444 systemd[1]: Stopped target remote-fs.target. Feb 12 19:58:16.451985 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:58:16.455805 systemd[1]: Stopped target sysinit.target. Feb 12 19:58:16.459191 systemd[1]: Stopped target local-fs.target. Feb 12 19:58:16.462760 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:58:16.466518 systemd[1]: Stopped target swap.target. Feb 12 19:58:16.469774 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:58:16.471930 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:58:16.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.475620 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:58:16.489935 kernel: audit: type=1131 audit(1707767896.475:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.490070 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:58:16.492242 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:58:16.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.496197 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:58:16.511714 kernel: audit: type=1131 audit(1707767896.495:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.496309 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:58:16.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.513865 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:58:16.515981 systemd[1]: Stopped ignition-files.service. 
Feb 12 19:58:16.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.519551 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:58:16.521901 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:58:16.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.526999 systemd[1]: Stopping ignition-mount.service... Feb 12 19:58:16.536524 ignition[1004]: INFO : Ignition 2.14.0 Feb 12 19:58:16.536524 ignition[1004]: INFO : Stage: umount Feb 12 19:58:16.536524 ignition[1004]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:58:16.536524 ignition[1004]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:58:16.532267 systemd[1]: Stopping iscsiuio.service... Feb 12 19:58:16.540789 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:58:16.536796 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:58:16.544055 ignition[1004]: INFO : umount: umount passed Feb 12 19:58:16.544365 ignition[1004]: INFO : Ignition finished successfully Feb 12 19:58:16.558673 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:58:16.561109 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:58:16.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.565005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:58:16.567241 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:58:16.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.572518 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:58:16.574534 systemd[1]: Stopped iscsiuio.service. Feb 12 19:58:16.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.578111 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:58:16.580199 systemd[1]: Stopped ignition-mount.service. Feb 12 19:58:16.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.583937 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:58:16.584054 systemd[1]: Stopped ignition-disks.service. Feb 12 19:58:16.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.589451 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:58:16.589500 systemd[1]: Stopped ignition-kargs.service. 
Feb 12 19:58:16.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.594972 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:58:16.595019 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:58:16.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.600450 systemd[1]: Stopped target network.target. Feb 12 19:58:16.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.602062 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:58:16.602107 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:58:16.602217 systemd[1]: Stopped target paths.target. Feb 12 19:58:16.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.602538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:58:16.605724 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:58:16.609197 systemd[1]: Stopped target slices.target. Feb 12 19:58:16.610836 systemd[1]: Stopped target sockets.target. Feb 12 19:58:16.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.612529 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:58:16.612563 systemd[1]: Closed iscsid.socket. Feb 12 19:58:16.616317 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:58:16.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.616357 systemd[1]: Closed iscsiuio.socket. Feb 12 19:58:16.619918 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:58:16.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.619968 systemd[1]: Stopped ignition-setup.service. Feb 12 19:58:16.623410 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:58:16.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.627254 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:58:16.630187 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:58:16.657000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:58:16.630812 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 12 19:58:16.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.630908 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:58:16.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.633734 systemd-networkd[810]: eth0: DHCPv6 lease lost Feb 12 19:58:16.672000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:58:16.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.636931 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:58:16.637025 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:58:16.646330 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:58:16.646425 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:58:16.652303 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:58:16.652388 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:58:16.657944 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:58:16.657982 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:58:16.661135 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:58:16.661181 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:58:16.663658 systemd[1]: Stopping network-cleanup.service... Feb 12 19:58:16.668500 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:58:16.668552 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:58:16.670444 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:58:16.670490 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:58:16.672405 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:58:16.672450 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:58:16.683472 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:58:16.710700 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:58:16.712812 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:58:16.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.717538 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:58:16.717604 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:58:16.719491 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:58:16.719537 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:58:16.729150 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:58:16.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.729205 systemd[1]: Stopped dracut-pre-udev.service. 
Feb 12 19:58:16.731071 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:58:16.731111 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:58:16.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.738575 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:58:16.738618 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:58:16.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.746761 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:58:16.748796 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:58:16.748858 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:58:16.753785 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:58:16.753827 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:58:16.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.763316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:58:16.763367 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:58:16.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.769629 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:58:16.772125 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:58:16.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.816757 kernel: hv_netvsc 0022489e-8a3f-0022-489e-8a3f0022489e eth0: Data path switched from VF: enP64466s1 Feb 12 19:58:16.835899 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:58:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:16.835987 systemd[1]: Stopped network-cleanup.service. Feb 12 19:58:16.838147 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:58:16.843163 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:58:16.856355 systemd[1]: Switching root. Feb 12 19:58:16.881349 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 12 19:58:16.881419 iscsid[817]: iscsid shutting down. 
Feb 12 19:58:16.883345 systemd-journald[183]: Journal stopped Feb 12 19:58:29.269957 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:58:29.269990 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:58:29.270002 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:58:29.270014 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:58:29.270022 kernel: SELinux: policy capability open_perms=1 Feb 12 19:58:29.270033 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:58:29.270043 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:58:29.270056 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:58:29.270064 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:58:29.270075 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:58:29.270083 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:58:29.270095 systemd[1]: Successfully loaded SELinux policy in 230.123ms. Feb 12 19:58:29.270106 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.913ms. Feb 12 19:58:29.270119 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:58:29.270133 systemd[1]: Detected virtualization microsoft. Feb 12 19:58:29.270143 systemd[1]: Detected architecture x86-64. Feb 12 19:58:29.270153 systemd[1]: Detected first boot. Feb 12 19:58:29.270164 systemd[1]: Hostname set to . Feb 12 19:58:29.270174 systemd[1]: Initializing machine ID from random generator. Feb 12 19:58:29.270187 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:58:29.270199 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:58:29.270208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:58:29.270221 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:58:29.270232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 12 19:58:29.270245 kernel: kauditd_printk_skb: 50 callbacks suppressed Feb 12 19:58:29.270254 kernel: audit: type=1334 audit(1707767908.807:90): prog-id=12 op=LOAD Feb 12 19:58:29.270268 kernel: audit: type=1334 audit(1707767908.807:91): prog-id=3 op=UNLOAD Feb 12 19:58:29.270280 kernel: audit: type=1334 audit(1707767908.812:92): prog-id=13 op=LOAD Feb 12 19:58:29.270289 kernel: audit: type=1334 audit(1707767908.816:93): prog-id=14 op=LOAD Feb 12 19:58:29.270299 kernel: audit: type=1334 audit(1707767908.816:94): prog-id=4 op=UNLOAD Feb 12 19:58:29.270311 kernel: audit: type=1334 audit(1707767908.816:95): prog-id=5 op=UNLOAD Feb 12 19:58:29.270320 kernel: audit: type=1334 audit(1707767908.820:96): prog-id=15 op=LOAD Feb 12 19:58:29.270331 kernel: audit: type=1334 audit(1707767908.820:97): prog-id=12 op=UNLOAD Feb 12 19:58:29.270341 kernel: audit: type=1334 audit(1707767908.825:98): prog-id=16 op=LOAD Feb 12 19:58:29.270353 kernel: audit: type=1334 audit(1707767908.829:99): prog-id=17 op=LOAD Feb 12 19:58:29.270363 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:58:29.270373 systemd[1]: Stopped iscsid.service. Feb 12 19:58:29.270385 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:58:29.270395 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:58:29.270407 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:58:29.270422 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:58:29.270435 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:58:29.270447 systemd[1]: Created slice system-getty.slice. Feb 12 19:58:29.270457 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:58:29.270469 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:58:29.270482 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:58:29.270492 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:58:29.270505 systemd[1]: Created slice user.slice. Feb 12 19:58:29.270514 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:58:29.270527 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:58:29.270540 systemd[1]: Set up automount boot.automount. Feb 12 19:58:29.270551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:58:29.270562 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:58:29.270573 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:58:29.270585 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:58:29.270595 systemd[1]: Reached target integritysetup.target. Feb 12 19:58:29.270607 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:58:29.270617 systemd[1]: Reached target remote-fs.target. Feb 12 19:58:29.270631 systemd[1]: Reached target slices.target. Feb 12 19:58:29.270640 systemd[1]: Reached target swap.target. Feb 12 19:58:29.270653 systemd[1]: Reached target torcx.target. Feb 12 19:58:29.270665 systemd[1]: Reached target veritysetup.target. Feb 12 19:58:29.270676 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:58:29.270688 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:58:29.270712 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:58:29.270725 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:58:29.270736 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:58:29.270748 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:58:29.270759 systemd[1]: Mounting dev-hugepages.mount... 
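Unit names in this log, such as tmp-ignition\x2dazure341752968.mount, mnt-oem3942441313.mount, and the system-addon\x2dconfig.slice created here, use systemd's name escaping: '/' in a path becomes '-', and characters such as a literal '-' become a \xNN hex escape. The function below is a rough, partial re-implementation of what `systemd-escape --path` does, enough to reproduce the mount-unit names above; it is illustrative and skips several edge cases.

def escape_path_for_unit(path):
    # Rough sketch of `systemd-escape --path`: strip slashes, map '/' to '-',
    # keep [A-Za-z0-9:_.], and hex-escape everything else (so '-' becomes \x2d).
    # Example: /tmp/ignition-azure341752968 -> tmp-ignition\x2dazure341752968
    trimmed = path.strip("/") or "-"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif (ch.isalnum() or ch in "_.:") and not (i == 0 and ch == "."):
            out.append(ch)
        else:
            out.append("".join("\\x{:02x}".format(b) for b in ch.encode()))
    return "".join(out)

if __name__ == "__main__":
    print(escape_path_for_unit("/tmp/ignition-azure341752968") + ".mount")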
Feb 12 19:58:29.270770 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:58:29.270782 systemd[1]: Mounting media.mount... Feb 12 19:58:29.270793 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:58:29.270808 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:58:29.270818 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:58:29.270830 systemd[1]: Mounting tmp.mount... Feb 12 19:58:29.270840 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:58:29.270852 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:58:29.270863 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:58:29.270874 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:58:29.270887 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:58:29.270899 systemd[1]: Starting modprobe@drm.service... Feb 12 19:58:29.270912 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:58:29.270922 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:58:29.270934 systemd[1]: Starting modprobe@loop.service... Feb 12 19:58:29.270946 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:58:29.270957 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:58:29.270969 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:58:29.270980 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:58:29.270993 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:58:29.271006 systemd[1]: Stopped systemd-journald.service. Feb 12 19:58:29.271018 systemd[1]: Starting systemd-journald.service... Feb 12 19:58:29.271029 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:58:29.271041 kernel: loop: module loaded Feb 12 19:58:29.271052 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:58:29.271065 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:58:29.271075 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:58:29.271084 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:58:29.271094 systemd[1]: Stopped verity-setup.service. Feb 12 19:58:29.271108 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:58:29.271118 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:58:29.271127 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:58:29.271137 kernel: fuse: init (API version 7.34) Feb 12 19:58:29.271145 systemd[1]: Mounted media.mount. Feb 12 19:58:29.271155 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:58:29.271164 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:58:29.271181 systemd-journald[1139]: Journal started Feb 12 19:58:29.271231 systemd-journald[1139]: Runtime Journal (/run/log/journal/353f92729118405bafab917208cb1f97) is 8.0M, max 159.0M, 151.0M free. 
Feb 12 19:58:18.793000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:58:19.534000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:58:19.547000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:58:19.547000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:58:19.547000 audit: BPF prog-id=10 op=LOAD Feb 12 19:58:19.547000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:58:19.547000 audit: BPF prog-id=11 op=LOAD Feb 12 19:58:19.547000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:58:20.757000 audit[1037]: AVC avc: denied { associate } for pid=1037 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:58:20.757000 audit[1037]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078cc a1=c00002ae58 a2=c000029b00 a3=32 items=0 ppid=1020 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:58:20.757000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:58:20.764000 audit[1037]: AVC avc: denied { associate } for pid=1037 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:58:20.764000 audit[1037]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a5 a2=1ed a3=0 items=2 ppid=1020 pid=1037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:58:20.764000 audit: CWD cwd="/" Feb 12 19:58:20.764000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:20.764000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:20.764000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:58:28.807000 audit: BPF prog-id=12 op=LOAD Feb 12 19:58:28.807000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:58:28.812000 audit: BPF prog-id=13 op=LOAD Feb 12 19:58:28.816000 audit: BPF prog-id=14 
op=LOAD Feb 12 19:58:28.816000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:58:28.816000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:58:28.820000 audit: BPF prog-id=15 op=LOAD Feb 12 19:58:28.820000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:58:28.825000 audit: BPF prog-id=16 op=LOAD Feb 12 19:58:28.829000 audit: BPF prog-id=17 op=LOAD Feb 12 19:58:28.829000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:58:28.829000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:58:28.833000 audit: BPF prog-id=18 op=LOAD Feb 12 19:58:28.833000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:58:28.856000 audit: BPF prog-id=19 op=LOAD Feb 12 19:58:28.856000 audit: BPF prog-id=20 op=LOAD Feb 12 19:58:28.856000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:58:28.856000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:58:28.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:28.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:28.866000 audit: BPF prog-id=18 op=UNLOAD Feb 12 19:58:28.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:28.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.174000 audit: BPF prog-id=21 op=LOAD Feb 12 19:58:29.175000 audit: BPF prog-id=22 op=LOAD Feb 12 19:58:29.175000 audit: BPF prog-id=23 op=LOAD Feb 12 19:58:29.175000 audit: BPF prog-id=19 op=UNLOAD Feb 12 19:58:29.175000 audit: BPF prog-id=20 op=UNLOAD Feb 12 19:58:29.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:58:29.260000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:58:29.260000 audit[1139]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc270e3a0 a2=4000 a3=7ffdc270e43c items=0 ppid=1 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:58:29.260000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:58:28.806430 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:58:20.743892 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:58:28.857342 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:58:20.744486 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:58:20.744509 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:58:20.744546 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:58:20.744558 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:58:20.744605 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:58:20.744620 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:58:20.744892 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:58:20.744937 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:58:20.744952 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:58:20.745636 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:58:20.745687 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:58:20.745724 /usr/lib/systemd/system-generators/torcx-generator[1037]: 
time="2024-02-12T19:58:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:58:20.745740 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:58:20.745759 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:58:20.745775 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:58:27.702675 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:58:27.702927 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:58:27.703046 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:58:27.703212 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:58:27.703258 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:58:27.703309 /usr/lib/systemd/system-generators/torcx-generator[1037]: time="2024-02-12T19:58:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:58:29.277133 systemd[1]: Started systemd-journald.service. Feb 12 19:58:29.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.278055 systemd[1]: Mounted tmp.mount. Feb 12 19:58:29.280013 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:58:29.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.282487 systemd[1]: Finished kmod-static-nodes.service. 
Feb 12 19:58:29.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.284783 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:58:29.284933 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:58:29.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.287097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:58:29.287241 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:58:29.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.289375 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:58:29.289532 systemd[1]: Finished modprobe@drm.service. Feb 12 19:58:29.291586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:58:29.291749 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:58:29.294088 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:58:29.294231 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:58:29.296237 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:58:29.296391 systemd[1]: Finished modprobe@loop.service. Feb 12 19:58:29.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:58:29.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.298490 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:58:29.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.300800 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:58:29.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.303346 systemd[1]: Reached target network-pre.target. Feb 12 19:58:29.306479 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:58:29.309934 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:58:29.313356 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:58:29.314668 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:58:29.317804 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:58:29.319986 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:58:29.320966 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:58:29.323422 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:58:29.324470 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:58:29.329256 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:58:29.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.332702 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:58:29.335316 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:58:29.338451 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:58:29.341356 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:58:29.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.345087 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:58:29.359713 systemd-journald[1139]: Time spent on flushing to /var/log/journal/353f92729118405bafab917208cb1f97 is 38.698ms for 1206 entries. Feb 12 19:58:29.359713 systemd-journald[1139]: System Journal (/var/log/journal/353f92729118405bafab917208cb1f97) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:58:29.433561 systemd-journald[1139]: Received client request to flush runtime journal. 
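For scale, the journald flush statistics above work out to roughly 32 microseconds per entry; a quick check of that arithmetic (figures copied from the log):

    # 38.698 ms spent flushing 1206 entries ~= 32 microseconds per entry.
    flush_ms = 38.698
    entries = 1206
    print(f"~{flush_ms / entries * 1000:.1f} us per journal entry")  # ~32.1 us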
Feb 12 19:58:29.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.416618 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:58:29.433837 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:58:29.420614 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:58:29.434915 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:58:29.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.459269 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:58:29.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.928191 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:58:29.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:29.931961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:58:30.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:30.203525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:58:30.535984 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:58:30.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:30.538000 audit: BPF prog-id=24 op=LOAD Feb 12 19:58:30.538000 audit: BPF prog-id=25 op=LOAD Feb 12 19:58:30.538000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:58:30.538000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:58:30.540178 systemd[1]: Starting systemd-udevd.service... Feb 12 19:58:30.557330 systemd-udevd[1166]: Using default interface naming scheme 'v252'. Feb 12 19:58:30.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:30.733000 audit: BPF prog-id=26 op=LOAD Feb 12 19:58:30.730364 systemd[1]: Started systemd-udevd.service. Feb 12 19:58:30.735319 systemd[1]: Starting systemd-networkd.service... Feb 12 19:58:30.771932 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:58:30.802000 audit: BPF prog-id=27 op=LOAD Feb 12 19:58:30.802000 audit: BPF prog-id=28 op=LOAD Feb 12 19:58:30.802000 audit: BPF prog-id=29 op=LOAD Feb 12 19:58:30.804090 systemd[1]: Starting systemd-userdbd.service... 
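The audit stream above records eBPF programs being loaded and older ones unloaded as systemd re-executes and restarts services. A small Python sketch, assuming plain journal text like these lines as input, that tallies which prog-ids remain loaded:

    import re

    BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def live_bpf_programs(lines):
        # Return prog-ids that were loaded and not yet unloaded.
        live = {}
        for line in lines:
            for prog_id, op in BPF_RE.findall(line):
                if op == "LOAD":
                    live[prog_id] = True
                else:
                    live.pop(prog_id, None)
        return list(live)

    sample = [
        "audit: BPF prog-id=24 op=LOAD",
        "audit: BPF prog-id=25 op=LOAD",
        "audit: BPF prog-id=7 op=UNLOAD",
        "audit: BPF prog-id=8 op=UNLOAD",
    ]
    print(live_bpf_programs(sample))  # ['24', '25']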
Feb 12 19:58:30.835722 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:58:30.853000 audit[1176]: AVC avc: denied { confidentiality } for pid=1176 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:58:30.860018 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:58:30.867627 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:58:30.867686 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:58:30.874716 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:58:30.876355 systemd[1]: Started systemd-userdbd.service. Feb 12 19:58:30.890971 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:58:30.891017 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:58:30.891043 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:58:30.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:30.898784 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:58:30.905686 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:58:30.960630 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:58:30.960746 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:58:30.960775 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:58:30.853000 audit[1176]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5637fad510d0 a1=f884 a2=7f87d3891bc5 a3=5 items=12 ppid=1166 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:58:30.853000 audit: CWD cwd="/" Feb 12 19:58:30.853000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=1 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=2 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=3 name=(null) inode=15717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=4 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=5 name=(null) inode=15718 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=6 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=7 name=(null) inode=15719 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=8 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=9 name=(null) inode=15720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=10 name=(null) inode=15716 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PATH item=11 name=(null) inode=15721 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:58:30.853000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:58:31.259391 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1177) Feb 12 19:58:31.331671 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:58:31.418084 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 12 19:58:31.431114 systemd-networkd[1178]: lo: Link UP Feb 12 19:58:31.431719 systemd-networkd[1178]: lo: Gained carrier Feb 12 19:58:31.433082 systemd-networkd[1178]: Enumeration completed Feb 12 19:58:31.433465 systemd[1]: Started systemd-networkd.service. Feb 12 19:58:31.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:31.437592 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:58:31.441875 systemd-networkd[1178]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:58:31.457381 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:58:31.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:31.461421 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:58:31.499011 kernel: mlx5_core fbd2:00:02.0 enP64466s1: Link up Feb 12 19:58:31.537013 kernel: hv_netvsc 0022489e-8a3f-0022-489e-8a3f0022489e eth0: Data path switched to VF: enP64466s1 Feb 12 19:58:31.538506 systemd-networkd[1178]: enP64466s1: Link UP Feb 12 19:58:31.538757 systemd-networkd[1178]: eth0: Link UP Feb 12 19:58:31.538854 systemd-networkd[1178]: eth0: Gained carrier Feb 12 19:58:31.542290 systemd-networkd[1178]: enP64466s1: Gained carrier Feb 12 19:58:31.576113 systemd-networkd[1178]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:58:31.695639 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:58:31.718001 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:58:31.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:31.720837 systemd[1]: Reached target cryptsetup.target. 
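systemd-networkd above brings up lo, eth0 and the enP64466s1 VF, and eth0 acquires 10.200.8.16/24 over DHCP. The "Gained carrier" state it reports can be cross-checked against the standard netdev sysfs attributes; a short sketch (interface names taken from the log):

    from pathlib import Path

    # Report operstate/carrier for an interface from /sys/class/net.
    def link_state(iface):
        base = Path("/sys/class/net") / iface
        operstate = (base / "operstate").read_text().strip()
        try:
            carrier = (base / "carrier").read_text().strip() == "1"
        except OSError:  # carrier is not readable while the link is down
            carrier = False
        return operstate, carrier

    for iface in ("eth0", "enP64466s1"):
        print(iface, link_state(iface))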
Feb 12 19:58:31.724201 systemd[1]: Starting lvm2-activation.service... Feb 12 19:58:31.728726 lvm[1244]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:58:31.748868 systemd[1]: Finished lvm2-activation.service. Feb 12 19:58:31.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:31.751246 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:58:31.753413 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:58:31.753447 systemd[1]: Reached target local-fs.target. Feb 12 19:58:31.755513 systemd[1]: Reached target machines.target. Feb 12 19:58:31.758779 systemd[1]: Starting ldconfig.service... Feb 12 19:58:31.771919 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:58:31.771987 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:58:31.773016 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:58:31.776109 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:58:31.779866 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:58:31.782505 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:58:31.782595 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:58:31.783956 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:58:31.796307 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:58:31.808250 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:58:31.837089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:58:31.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:31.997677 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:58:32.087624 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1246 (bootctl) Feb 12 19:58:32.089199 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:58:32.797259 systemd-networkd[1178]: eth0: Gained IPv6LL Feb 12 19:58:32.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:32.799781 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:58:33.581867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:58:33.582522 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:58:33.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:58:33.867591 systemd-fsck[1254]: fsck.fat 4.2 (2021-01-31) Feb 12 19:58:33.867591 systemd-fsck[1254]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 19:58:33.870283 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:58:33.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:33.875245 systemd[1]: Mounting boot.mount... Feb 12 19:58:33.892420 systemd[1]: Mounted boot.mount. Feb 12 19:58:33.906307 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:58:33.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.260345 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:58:34.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.264453 systemd[1]: Starting audit-rules.service... Feb 12 19:58:34.265957 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 12 19:58:34.266027 kernel: audit: type=1130 audit(1707767914.262:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.278948 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:58:34.282362 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:58:34.284000 audit: BPF prog-id=30 op=LOAD Feb 12 19:58:34.286785 systemd[1]: Starting systemd-resolved.service... Feb 12 19:58:34.290014 kernel: audit: type=1334 audit(1707767914.284:170): prog-id=30 op=LOAD Feb 12 19:58:34.291000 audit: BPF prog-id=31 op=LOAD Feb 12 19:58:34.295335 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:58:34.297357 kernel: audit: type=1334 audit(1707767914.291:171): prog-id=31 op=LOAD Feb 12 19:58:34.298755 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:58:34.322000 audit[1266]: SYSTEM_BOOT pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.335012 kernel: audit: type=1127 audit(1707767914.322:172): pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.324772 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:58:34.348125 kernel: audit: type=1130 audit(1707767914.336:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:58:34.388393 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:58:34.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.391307 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:58:34.403020 kernel: audit: type=1130 audit(1707767914.390:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.408391 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:58:34.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.411417 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:58:34.424985 kernel: audit: type=1130 audit(1707767914.410:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.425054 kernel: audit: type=1130 audit(1707767914.423:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.422868 systemd-resolved[1264]: Positive Trust Anchors: Feb 12 19:58:34.422879 systemd-resolved[1264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:58:34.422920 systemd-resolved[1264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:58:34.424713 systemd[1]: Reached target time-set.target. Feb 12 19:58:34.533063 systemd-resolved[1264]: Using system hostname 'ci-3510.3.2-a-d5221102be'. Feb 12 19:58:34.534630 systemd[1]: Started systemd-resolved.service. Feb 12 19:58:34.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:58:34.537530 systemd[1]: Reached target network.target. Feb 12 19:58:34.549521 kernel: audit: type=1130 audit(1707767914.536:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:58:34.551139 systemd[1]: Reached target network-online.target. Feb 12 19:58:34.553362 systemd[1]: Reached target nss-lookup.target. Feb 12 19:58:34.635000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:58:34.637163 augenrules[1281]: No rules Feb 12 19:58:34.638378 systemd[1]: Finished audit-rules.service. Feb 12 19:58:34.635000 audit[1281]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb28d5230 a2=420 a3=0 items=0 ppid=1260 pid=1281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:58:34.635000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:58:34.648101 kernel: audit: type=1305 audit(1707767914.635:178): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:58:38.934127 ldconfig[1245]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:58:38.950482 systemd[1]: Finished ldconfig.service. Feb 12 19:58:38.954601 systemd[1]: Starting systemd-update-done.service... Feb 12 19:58:38.975099 systemd[1]: Finished systemd-update-done.service. Feb 12 19:58:38.978018 systemd[1]: Reached target sysinit.target. Feb 12 19:58:38.980223 systemd[1]: Started motdgen.path. Feb 12 19:58:38.982089 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:58:38.984915 systemd[1]: Started logrotate.timer. Feb 12 19:58:38.986921 systemd[1]: Started mdadm.timer. Feb 12 19:58:38.988738 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:58:38.990653 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:58:38.990690 systemd[1]: Reached target paths.target. Feb 12 19:58:38.992365 systemd[1]: Reached target timers.target. Feb 12 19:58:38.994304 systemd[1]: Listening on dbus.socket. Feb 12 19:58:38.996830 systemd[1]: Starting docker.socket... Feb 12 19:58:39.011485 systemd[1]: Listening on sshd.socket. Feb 12 19:58:39.013518 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:58:39.013942 systemd[1]: Listening on docker.socket. Feb 12 19:58:39.015863 systemd[1]: Reached target sockets.target. Feb 12 19:58:39.017594 systemd[1]: Reached target basic.target. Feb 12 19:58:39.019484 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:58:39.019513 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:58:39.020497 systemd[1]: Starting containerd.service... Feb 12 19:58:39.023506 systemd[1]: Starting dbus.service... Feb 12 19:58:39.025985 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:58:39.029397 systemd[1]: Starting extend-filesystems.service... Feb 12 19:58:39.031487 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:58:39.032769 systemd[1]: Starting motdgen.service... Feb 12 19:58:39.038559 systemd[1]: Started nvidia.service. 
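The audit PROCTITLE fields in this log are hex-encoded command lines with NUL-separated arguments; the record accompanying the audit-rules/augenrules activity above decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A one-function sketch of that decoding:

    # Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv).
    def decode_proctitle(hex_value):
        return bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00")

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']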
Feb 12 19:58:39.041539 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:58:39.044547 systemd[1]: Starting prepare-critools.service... Feb 12 19:58:39.047582 systemd[1]: Starting prepare-helm.service... Feb 12 19:58:39.050748 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:58:39.054406 systemd[1]: Starting sshd-keygen.service... Feb 12 19:58:39.060086 systemd[1]: Starting systemd-logind.service... Feb 12 19:58:39.062112 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:58:39.062185 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:58:39.062773 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:58:39.063591 systemd[1]: Starting update-engine.service... Feb 12 19:58:39.066804 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:58:39.074973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:58:39.075263 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:58:39.113007 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:58:39.113443 systemd[1]: Finished motdgen.service. Feb 12 19:58:39.117551 extend-filesystems[1292]: Found sda Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda1 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda2 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda3 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found usr Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda4 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda6 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda7 Feb 12 19:58:39.119875 extend-filesystems[1292]: Found sda9 Feb 12 19:58:39.119875 extend-filesystems[1292]: Checking size of /dev/sda9 Feb 12 19:58:39.140494 jq[1306]: true Feb 12 19:58:39.135891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:58:39.140718 jq[1291]: false Feb 12 19:58:39.136123 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:58:39.153350 jq[1325]: true Feb 12 19:58:39.168715 env[1317]: time="2024-02-12T19:58:39.168676200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:58:39.183051 tar[1308]: ./ Feb 12 19:58:39.183051 tar[1308]: ./loopback Feb 12 19:58:39.184744 tar[1311]: linux-amd64/helm Feb 12 19:58:39.190936 tar[1310]: crictl Feb 12 19:58:39.217285 extend-filesystems[1292]: Old size kept for /dev/sda9 Feb 12 19:58:39.228359 extend-filesystems[1292]: Found sr0 Feb 12 19:58:39.217958 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:58:39.218143 systemd[1]: Finished extend-filesystems.service. Feb 12 19:58:39.264293 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:58:39.264521 systemd-logind[1304]: New seat seat0. Feb 12 19:58:39.300596 bash[1349]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:58:39.300399 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:58:39.321440 tar[1308]: ./bandwidth Feb 12 19:58:39.326249 env[1317]: time="2024-02-12T19:58:39.326203000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 19:58:39.326561 env[1317]: time="2024-02-12T19:58:39.326352500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328075 env[1317]: time="2024-02-12T19:58:39.327775300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328075 env[1317]: time="2024-02-12T19:58:39.327809400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328075 env[1317]: time="2024-02-12T19:58:39.328062100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328232 env[1317]: time="2024-02-12T19:58:39.328084300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328232 env[1317]: time="2024-02-12T19:58:39.328102600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:58:39.328232 env[1317]: time="2024-02-12T19:58:39.328115000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328232 env[1317]: time="2024-02-12T19:58:39.328216900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328781 env[1317]: time="2024-02-12T19:58:39.328468600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328781 env[1317]: time="2024-02-12T19:58:39.328667800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:58:39.328781 env[1317]: time="2024-02-12T19:58:39.328688600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:58:39.328781 env[1317]: time="2024-02-12T19:58:39.328749400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:58:39.328781 env[1317]: time="2024-02-12T19:58:39.328765000Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:58:39.351839 env[1317]: time="2024-02-12T19:58:39.351808400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:58:39.351939 env[1317]: time="2024-02-12T19:58:39.351875300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:58:39.351939 env[1317]: time="2024-02-12T19:58:39.351896200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:58:39.352049 env[1317]: time="2024-02-12T19:58:39.351947800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 12 19:58:39.352049 env[1317]: time="2024-02-12T19:58:39.351969500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352049 env[1317]: time="2024-02-12T19:58:39.352038200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352159 env[1317]: time="2024-02-12T19:58:39.352061400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352159 env[1317]: time="2024-02-12T19:58:39.352094200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352159 env[1317]: time="2024-02-12T19:58:39.352113600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352159 env[1317]: time="2024-02-12T19:58:39.352133300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352302 env[1317]: time="2024-02-12T19:58:39.352166100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.352302 env[1317]: time="2024-02-12T19:58:39.352185900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:58:39.352376 env[1317]: time="2024-02-12T19:58:39.352329200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:58:39.352469 env[1317]: time="2024-02-12T19:58:39.352447400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.352908300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.352964000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.352984300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353071500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353101200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353118700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353134600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353152000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353179600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353195700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353212200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.353301 env[1317]: time="2024-02-12T19:58:39.353231200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353424100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353449900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353471100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353501700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353521300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353538400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353575000Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:58:39.354186 env[1317]: time="2024-02-12T19:58:39.353615200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:58:39.354466 env[1317]: time="2024-02-12T19:58:39.353909900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:58:39.354466 env[1317]: time="2024-02-12T19:58:39.354018300Z" level=info msg="Connect containerd service" Feb 12 19:58:39.354466 env[1317]: time="2024-02-12T19:58:39.354065300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.354893100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.355206100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.355270000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.357915700Z" level=info msg="containerd successfully booted in 0.189891s" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.358638500Z" level=info msg="Start subscribing containerd event" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.358778800Z" level=info msg="Start recovering state" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.358935400Z" level=info msg="Start event monitor" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.365303100Z" level=info msg="Start snapshots syncer" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.365441800Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:58:39.379821 env[1317]: time="2024-02-12T19:58:39.365463300Z" level=info msg="Start streaming server" Feb 12 19:58:39.355393 systemd[1]: Started containerd.service. Feb 12 19:58:39.388481 dbus-daemon[1290]: [system] SELinux support is enabled Feb 12 19:58:39.388653 systemd[1]: Started dbus.service. Feb 12 19:58:39.393535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:58:39.393563 systemd[1]: Reached target system-config.target. Feb 12 19:58:39.396145 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:58:39.396166 systemd[1]: Reached target user-config.target. Feb 12 19:58:39.402875 systemd[1]: Started systemd-logind.service. Feb 12 19:58:39.405323 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:58:39.405941 dbus-daemon[1290]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:58:39.433020 tar[1308]: ./ptp Feb 12 19:58:39.566091 tar[1308]: ./vlan Feb 12 19:58:39.677434 tar[1308]: ./host-device Feb 12 19:58:39.758726 tar[1308]: ./tuning Feb 12 19:58:39.815369 update_engine[1305]: I0212 19:58:39.814875 1305 main.cc:92] Flatcar Update Engine starting Feb 12 19:58:39.834580 tar[1308]: ./vrf Feb 12 19:58:39.891910 systemd[1]: Started update-engine.service. Feb 12 19:58:39.892366 update_engine[1305]: I0212 19:58:39.892010 1305 update_check_scheduler.cc:74] Next update check in 7m2s Feb 12 19:58:39.897027 systemd[1]: Started locksmithd.service. Feb 12 19:58:39.906838 tar[1308]: ./sbr Feb 12 19:58:39.988112 tar[1308]: ./tap Feb 12 19:58:40.079894 tar[1308]: ./dhcp Feb 12 19:58:40.309947 tar[1308]: ./static Feb 12 19:58:40.368424 systemd[1]: Finished prepare-critools.service. Feb 12 19:58:40.369954 tar[1308]: ./firewall Feb 12 19:58:40.391674 tar[1311]: linux-amd64/LICENSE Feb 12 19:58:40.392017 tar[1311]: linux-amd64/README.md Feb 12 19:58:40.396980 systemd[1]: Finished prepare-helm.service. Feb 12 19:58:40.423157 tar[1308]: ./macvlan Feb 12 19:58:40.468009 tar[1308]: ./dummy Feb 12 19:58:40.511588 tar[1308]: ./bridge Feb 12 19:58:40.559410 tar[1308]: ./ipvlan Feb 12 19:58:40.603851 tar[1308]: ./portmap Feb 12 19:58:40.645874 tar[1308]: ./host-local Feb 12 19:58:40.718778 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:58:40.834983 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:58:40.854865 systemd[1]: Finished sshd-keygen.service. Feb 12 19:58:40.859166 systemd[1]: Starting issuegen.service... Feb 12 19:58:40.862693 systemd[1]: Started waagent.service. 
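containerd above logs its full CRI plugin configuration as a Go struct dump; the values of interest here (SystemdCgroup:true for the runc runtime, SandboxImage:registry.k8s.io/pause:3.6) can be picked out of that string with a plain regex. Illustrative only, this reads the logged text rather than any containerd API:

    import re

    CONFIG_LINE = (
        "... SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 "
        "Options:map[SystemdCgroup:true] ..."
    )

    def extract(field, text):
        match = re.search(rf"{field}:([^\s\]]+)", text)
        return match.group(1) if match else None

    print(extract("SandboxImage", CONFIG_LINE))   # registry.k8s.io/pause:3.6
    print(extract("SystemdCgroup", CONFIG_LINE))  # true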
Feb 12 19:58:40.865803 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:58:40.866051 systemd[1]: Finished issuegen.service. Feb 12 19:58:40.869570 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:58:40.887447 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:58:40.891486 systemd[1]: Started getty@tty1.service. Feb 12 19:58:40.894965 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:58:40.897787 systemd[1]: Reached target getty.target. Feb 12 19:58:40.899921 systemd[1]: Reached target multi-user.target. Feb 12 19:58:40.903511 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:58:40.912516 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:58:40.912655 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:58:40.915163 systemd[1]: Startup finished in 853ms (firmware) + 22.176s (loader) + 855ms (kernel) + 34.679s (initrd) + 22.305s (userspace) = 1min 20.871s. Feb 12 19:58:41.223018 login[1419]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:58:41.224869 login[1420]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:58:41.258827 systemd[1]: Created slice user-500.slice. Feb 12 19:58:41.260195 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:58:41.264043 systemd-logind[1304]: New session 2 of user core. Feb 12 19:58:41.270103 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:58:41.272157 systemd[1]: Starting user@500.service... Feb 12 19:58:41.283945 (systemd)[1426]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:58:41.435205 systemd[1426]: Queued start job for default target default.target. Feb 12 19:58:41.435920 systemd[1426]: Reached target paths.target. Feb 12 19:58:41.435956 systemd[1426]: Reached target sockets.target. Feb 12 19:58:41.435977 systemd[1426]: Reached target timers.target. Feb 12 19:58:41.436012 systemd[1426]: Reached target basic.target. Feb 12 19:58:41.436147 systemd[1]: Started user@500.service. Feb 12 19:58:41.437607 systemd[1]: Started session-2.scope. Feb 12 19:58:41.438358 systemd[1426]: Reached target default.target. Feb 12 19:58:41.438607 systemd[1426]: Startup finished in 148ms. Feb 12 19:58:41.847913 locksmithd[1401]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:58:42.223430 login[1419]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:58:42.228874 systemd-logind[1304]: New session 1 of user core. Feb 12 19:58:42.229488 systemd[1]: Started session-1.scope. Feb 12 19:58:44.570403 systemd-timesyncd[1265]: Timed out waiting for reply from 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 12 19:58:45.035208 systemd-timesyncd[1265]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Feb 12 19:58:45.035300 systemd-timesyncd[1265]: Initial clock synchronization to Mon 2024-02-12 19:58:44.573846 UTC. 
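The "Startup finished" line above breaks the boot into firmware, loader, kernel, initrd and userspace phases. A quick sanity check of that arithmetic, with the values copied from the log; the ~3 ms gap against the reported total is just rounding of the displayed per-phase times.

```python
# Sum the per-phase times from the "Startup finished" line above.
phases_s = {
    "firmware": 0.853,
    "loader": 22.176,
    "kernel": 0.855,
    "initrd": 34.679,
    "userspace": 22.305,
}
total = sum(phases_s.values())
print(f"sum of phases: {total:.3f}s")          # 80.868s
print("reported total: 80.871s (1min 20.871s)")
# Each displayed phase is truncated to milliseconds, hence the small difference.
```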
Feb 12 19:58:46.896933 waagent[1414]: 2024-02-12T19:58:46.896816Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:58:46.901040 waagent[1414]: 2024-02-12T19:58:46.900953Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:58:46.903546 waagent[1414]: 2024-02-12T19:58:46.903485Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:58:46.905884 waagent[1414]: 2024-02-12T19:58:46.905815Z INFO Daemon Daemon Run daemon Feb 12 19:58:46.908503 waagent[1414]: 2024-02-12T19:58:46.908194Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:58:46.920309 waagent[1414]: 2024-02-12T19:58:46.920191Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:58:46.926488 waagent[1414]: 2024-02-12T19:58:46.926384Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:58:46.931109 waagent[1414]: 2024-02-12T19:58:46.931050Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:58:46.933700 waagent[1414]: 2024-02-12T19:58:46.933639Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:58:46.936560 waagent[1414]: 2024-02-12T19:58:46.936501Z INFO Daemon Daemon Activate resource disk Feb 12 19:58:46.939050 waagent[1414]: 2024-02-12T19:58:46.938976Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:58:46.948977 waagent[1414]: 2024-02-12T19:58:46.948907Z INFO Daemon Daemon Found device: None Feb 12 19:58:46.951267 waagent[1414]: 2024-02-12T19:58:46.951202Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:58:46.954873 waagent[1414]: 2024-02-12T19:58:46.954813Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:58:46.960539 waagent[1414]: 2024-02-12T19:58:46.960477Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:58:46.963355 waagent[1414]: 2024-02-12T19:58:46.963295Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:58:46.973294 waagent[1414]: 2024-02-12T19:58:46.973172Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:58:46.979726 waagent[1414]: 2024-02-12T19:58:46.979622Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:58:46.987352 waagent[1414]: 2024-02-12T19:58:46.980084Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:58:46.987352 waagent[1414]: 2024-02-12T19:58:46.980873Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:58:47.008023 waagent[1414]: 2024-02-12T19:58:47.005336Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:58:47.124468 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:58:47.141244 waagent[1414]: 2024-02-12T19:58:47.141116Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:58:47.144360 waagent[1414]: 2024-02-12T19:58:47.144289Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:58:47.147390 waagent[1414]: 2024-02-12T19:58:47.147286Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:58:47.150451 waagent[1414]: 2024-02-12T19:58:47.150392Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:58:47.153132 waagent[1414]: 2024-02-12T19:58:47.153074Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:58:47.155705 waagent[1414]: 2024-02-12T19:58:47.155644Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:58:47.282676 waagent[1414]: 2024-02-12T19:58:47.282597Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:58:47.290281 waagent[1414]: 2024-02-12T19:58:47.283512Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:58:47.290281 waagent[1414]: 2024-02-12T19:58:47.284439Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:58:47.721477 waagent[1414]: 2024-02-12T19:58:47.721329Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:58:47.731460 waagent[1414]: 2024-02-12T19:58:47.731380Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:58:47.736212 waagent[1414]: 2024-02-12T19:58:47.731787Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:58:47.809900 waagent[1414]: 2024-02-12T19:58:47.809777Z INFO Daemon Daemon Found private key matching thumbprint 0B4C451E30C3945399444C7E6357621F033370A4 Feb 12 19:58:47.820045 waagent[1414]: 2024-02-12T19:58:47.810323Z INFO Daemon Daemon Certificate with thumbprint C7D89B281B998B07E84B23C3F251800AE918E593 has no matching private key. Feb 12 19:58:47.820045 waagent[1414]: 2024-02-12T19:58:47.811522Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:58:47.826022 waagent[1414]: 2024-02-12T19:58:47.825955Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 61c65690-815d-4d12-b11c-81c381905ef4 New eTag: 8813654117369055893] Feb 12 19:58:47.832874 waagent[1414]: 2024-02-12T19:58:47.826738Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:58:47.836081 waagent[1414]: 2024-02-12T19:58:47.836026Z INFO Daemon Daemon Starting provisioning Feb 12 19:58:47.848698 waagent[1414]: 2024-02-12T19:58:47.836315Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:58:47.848698 waagent[1414]: 2024-02-12T19:58:47.837172Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-d5221102be] Feb 12 19:58:47.848698 waagent[1414]: 2024-02-12T19:58:47.841246Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-d5221102be] Feb 12 19:58:47.848698 waagent[1414]: 2024-02-12T19:58:47.842085Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:58:47.848698 waagent[1414]: 2024-02-12T19:58:47.842965Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:58:47.856258 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:58:47.856510 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:58:47.856586 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:58:47.856934 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:58:47.862047 systemd-networkd[1178]: eth0: DHCPv6 lease lost Feb 12 19:58:47.863356 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:58:47.863549 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:58:47.866109 systemd[1]: Starting systemd-networkd.service... 
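The daemon above detects the wireserver at 168.63.129.16, negotiates wire protocol version 2012-11-30 and fetches the goal state. A sketch of that fetch for orientation only: the endpoint and protocol version come from the log, but the URL path and the x-ms-version header reflect the WALinuxAgent wire protocol as generally documented and should be treated as assumptions, not a transcript of what this agent executed. It only works from inside an Azure VM.

```python
#!/usr/bin/env python3
"""Sketch: goal-state fetch against the Azure wireserver (assumed URL/header)."""
import urllib.request

WIRESERVER = "168.63.129.16"
VERSION = "2012-11-30"   # "Wire protocol version" from the log above

def fetch_goal_state() -> str:
    req = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": VERSION},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()  # XML carrying Incarnation, Certificates, ...

if __name__ == "__main__":
    print(fetch_goal_state()[:200])
```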
Feb 12 19:58:47.896510 systemd-networkd[1469]: enP64466s1: Link UP Feb 12 19:58:47.896520 systemd-networkd[1469]: enP64466s1: Gained carrier Feb 12 19:58:47.897849 systemd-networkd[1469]: eth0: Link UP Feb 12 19:58:47.897857 systemd-networkd[1469]: eth0: Gained carrier Feb 12 19:58:47.898293 systemd-networkd[1469]: lo: Link UP Feb 12 19:58:47.898303 systemd-networkd[1469]: lo: Gained carrier Feb 12 19:58:47.898604 systemd-networkd[1469]: eth0: Gained IPv6LL Feb 12 19:58:47.898863 systemd-networkd[1469]: Enumeration completed Feb 12 19:58:47.898958 systemd[1]: Started systemd-networkd.service. Feb 12 19:58:47.900939 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:58:47.903126 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:58:47.904851 waagent[1414]: 2024-02-12T19:58:47.904432Z INFO Daemon Daemon Create user account if not exists Feb 12 19:58:47.908657 waagent[1414]: 2024-02-12T19:58:47.908544Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:58:47.912466 waagent[1414]: 2024-02-12T19:58:47.912391Z INFO Daemon Daemon Configure sudoer Feb 12 19:58:47.913201 waagent[1414]: 2024-02-12T19:58:47.913126Z INFO Daemon Daemon Configure sshd Feb 12 19:58:47.914064 waagent[1414]: 2024-02-12T19:58:47.914013Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:58:47.932101 systemd-networkd[1469]: eth0: DHCPv4 address 10.200.8.16/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:58:47.936575 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:58:47.940143 waagent[1414]: 2024-02-12T19:58:47.940042Z INFO Daemon Daemon Decode custom data Feb 12 19:58:47.944678 waagent[1414]: 2024-02-12T19:58:47.940576Z INFO Daemon Daemon Save custom data Feb 12 19:58:49.148529 waagent[1414]: 2024-02-12T19:58:49.148423Z INFO Daemon Daemon Provisioning complete Feb 12 19:58:49.164539 waagent[1414]: 2024-02-12T19:58:49.164460Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:58:49.167530 waagent[1414]: 2024-02-12T19:58:49.167464Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:58:49.172528 waagent[1414]: 2024-02-12T19:58:49.172465Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:58:49.434324 waagent[1478]: 2024-02-12T19:58:49.434161Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:58:49.435028 waagent[1478]: 2024-02-12T19:58:49.434951Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:49.435184 waagent[1478]: 2024-02-12T19:58:49.435130Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:49.446273 waagent[1478]: 2024-02-12T19:58:49.446198Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:58:49.446505 waagent[1478]: 2024-02-12T19:58:49.446380Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:58:49.509887 waagent[1478]: 2024-02-12T19:58:49.509765Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0B4C451E30C3945399444C7E6357621F033370A4 Feb 12 19:58:49.510121 waagent[1478]: 2024-02-12T19:58:49.510057Z INFO ExtHandler ExtHandler Certificate with thumbprint C7D89B281B998B07E84B23C3F251800AE918E593 has no matching private key. 
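systemd-networkd above reports a DHCPv4 lease of 10.200.8.16/24 with gateway 10.200.8.1, handed out by 168.63.129.16; the interface dumps later in the log show the matching broadcast address 10.200.8.255. The stdlib ipaddress module reproduces those derived values:

```python
import ipaddress

iface = ipaddress.ip_interface("10.200.8.16/24")    # lease from the log above
net = iface.network
print(net)                                          # 10.200.8.0/24
print(net.broadcast_address)                        # 10.200.8.255 (matches "brd 10.200.8.255")
print(net.num_addresses - 2)                        # 254 usable hosts
print(ipaddress.ip_address("10.200.8.1") in net)    # True -> the gateway is on-link
```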
Feb 12 19:58:49.510385 waagent[1478]: 2024-02-12T19:58:49.510333Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:58:49.523635 waagent[1478]: 2024-02-12T19:58:49.523572Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 3086f7c8-402a-4479-bd8c-7f9d4e93f16b New eTag: 8813654117369055893] Feb 12 19:58:49.524215 waagent[1478]: 2024-02-12T19:58:49.524158Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:58:49.570509 waagent[1478]: 2024-02-12T19:58:49.570392Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:58:49.579007 waagent[1478]: 2024-02-12T19:58:49.578922Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1478 Feb 12 19:58:49.582321 waagent[1478]: 2024-02-12T19:58:49.582256Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:58:49.583564 waagent[1478]: 2024-02-12T19:58:49.583507Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:58:49.651142 waagent[1478]: 2024-02-12T19:58:49.651079Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:58:49.651531 waagent[1478]: 2024-02-12T19:58:49.651468Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:58:49.659413 waagent[1478]: 2024-02-12T19:58:49.659344Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:58:49.659855 waagent[1478]: 2024-02-12T19:58:49.659799Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:58:49.660874 waagent[1478]: 2024-02-12T19:58:49.660810Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:58:49.662119 waagent[1478]: 2024-02-12T19:58:49.662059Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:58:49.662742 waagent[1478]: 2024-02-12T19:58:49.662672Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:58:49.663160 waagent[1478]: 2024-02-12T19:58:49.663104Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:49.663507 waagent[1478]: 2024-02-12T19:58:49.663449Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:58:49.663678 waagent[1478]: 2024-02-12T19:58:49.663629Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:49.663753 waagent[1478]: 2024-02-12T19:58:49.663701Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:58:49.664834 waagent[1478]: 2024-02-12T19:58:49.664777Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:58:49.665083 waagent[1478]: 2024-02-12T19:58:49.665033Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:49.665167 waagent[1478]: 2024-02-12T19:58:49.665112Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
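Both agent processes above match goal-state certificates by thumbprint (0B4C45... has a private key, C7D89B... does not). A thumbprint of this form is conventionally the uppercase SHA-1 of the DER-encoded certificate; a minimal sketch with a hypothetical PEM path (on this image the agent keeps its certificates under /var/lib/waagent):

```python
import hashlib
import ssl

def thumbprint(pem_path: str) -> str:
    """Uppercase SHA-1 over the DER encoding of a PEM certificate."""
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest().upper()

# Hypothetical path, for illustration only.
print(thumbprint("/var/lib/waagent/example.crt"))
```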
Feb 12 19:58:49.665701 waagent[1478]: 2024-02-12T19:58:49.665649Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:49.666289 waagent[1478]: 2024-02-12T19:58:49.666230Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:58:49.667449 waagent[1478]: 2024-02-12T19:58:49.667395Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:58:49.667585 waagent[1478]: 2024-02-12T19:58:49.667542Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:58:49.668140 waagent[1478]: 2024-02-12T19:58:49.668075Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:58:49.668823 waagent[1478]: 2024-02-12T19:58:49.668767Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:58:49.671814 waagent[1478]: 2024-02-12T19:58:49.671702Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:58:49.671814 waagent[1478]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:58:49.671814 waagent[1478]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:58:49.671814 waagent[1478]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:58:49.671814 waagent[1478]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:49.671814 waagent[1478]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:49.671814 waagent[1478]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:49.688424 waagent[1478]: 2024-02-12T19:58:49.688310Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:58:49.689253 waagent[1478]: 2024-02-12T19:58:49.689199Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:58:49.690237 waagent[1478]: 2024-02-12T19:58:49.690177Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:58:49.711314 waagent[1478]: 2024-02-12T19:58:49.711250Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1469' Feb 12 19:58:49.736718 waagent[1478]: 2024-02-12T19:58:49.736668Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
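The MonitorHandler dump above prints /proc/net/route raw: destination and gateway fields are little-endian hex IPv4 words, which is why the wireserver appears as 10813FA8. Decoding them with the stdlib recovers the addresses the agent is reasoning about:

```python
import socket
import struct

def hex_to_ip(word: str) -> str:
    """Decode a /proc/net/route hex field (little-endian u32) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<L", int(word, 16)))

# Values copied from the routing-table dump above.
print(hex_to_ip("0108C80A"))  # 10.200.8.1      (default gateway)
print(hex_to_ip("0008C80A"))  # 10.200.8.0      (on-link subnet)
print(hex_to_ip("10813FA8"))  # 168.63.129.16   (wireserver host route)
print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254 (IMDS host route)
```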
Feb 12 19:58:49.794409 waagent[1478]: 2024-02-12T19:58:49.793803Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:58:49.794409 waagent[1478]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:58:49.794409 waagent[1478]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:58:49.794409 waagent[1478]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:8a:3f brd ff:ff:ff:ff:ff:ff Feb 12 19:58:49.794409 waagent[1478]: 3: enP64466s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:8a:3f brd ff:ff:ff:ff:ff:ff\ altname enP64466p0s2 Feb 12 19:58:49.794409 waagent[1478]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:58:49.794409 waagent[1478]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:58:49.794409 waagent[1478]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:58:49.794409 waagent[1478]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:58:49.794409 waagent[1478]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:58:49.794409 waagent[1478]: 2: eth0 inet6 fe80::222:48ff:fe9e:8a3f/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:58:50.053340 waagent[1478]: 2024-02-12T19:58:50.053178Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 12 19:58:50.056491 waagent[1478]: 2024-02-12T19:58:50.056389Z INFO EnvHandler ExtHandler Firewall rules: Feb 12 19:58:50.056491 waagent[1478]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:50.056491 waagent[1478]: pkts bytes target prot opt in out source destination Feb 12 19:58:50.056491 waagent[1478]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:50.056491 waagent[1478]: pkts bytes target prot opt in out source destination Feb 12 19:58:50.056491 waagent[1478]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:50.056491 waagent[1478]: pkts bytes target prot opt in out source destination Feb 12 19:58:50.056491 waagent[1478]: 2 104 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:58:50.056491 waagent[1478]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:58:50.058078 waagent[1478]: 2024-02-12T19:58:50.058021Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:58:50.085317 waagent[1478]: 2024-02-12T19:58:50.085249Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:58:50.176092 waagent[1414]: 2024-02-12T19:58:50.175908Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:58:50.181807 waagent[1414]: 2024-02-12T19:58:50.181748Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:58:51.185052 waagent[1519]: 2024-02-12T19:58:51.184926Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:58:51.185751 waagent[1519]: 2024-02-12T19:58:51.185679Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:58:51.185892 waagent[1519]: 2024-02-12T19:58:51.185837Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:58:51.195379 waagent[1519]: 2024-02-12T19:58:51.195280Z INFO ExtHandler 
ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:58:51.195764 waagent[1519]: 2024-02-12T19:58:51.195706Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:51.195923 waagent[1519]: 2024-02-12T19:58:51.195873Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:51.207305 waagent[1519]: 2024-02-12T19:58:51.207232Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:58:51.215251 waagent[1519]: 2024-02-12T19:58:51.215191Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:58:51.216140 waagent[1519]: 2024-02-12T19:58:51.216083Z INFO ExtHandler Feb 12 19:58:51.216290 waagent[1519]: 2024-02-12T19:58:51.216240Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 954c4f67-26c6-4679-90c2-d03da8ce8d4f eTag: 8813654117369055893 source: Fabric] Feb 12 19:58:51.216968 waagent[1519]: 2024-02-12T19:58:51.216910Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:58:51.218034 waagent[1519]: 2024-02-12T19:58:51.217959Z INFO ExtHandler Feb 12 19:58:51.218177 waagent[1519]: 2024-02-12T19:58:51.218127Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:58:51.224407 waagent[1519]: 2024-02-12T19:58:51.224358Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:58:51.224825 waagent[1519]: 2024-02-12T19:58:51.224774Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:58:51.245062 waagent[1519]: 2024-02-12T19:58:51.244983Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 12 19:58:51.306694 waagent[1519]: 2024-02-12T19:58:51.306573Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C7D89B281B998B07E84B23C3F251800AE918E593', 'hasPrivateKey': False} Feb 12 19:58:51.307635 waagent[1519]: 2024-02-12T19:58:51.307570Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B4C451E30C3945399444C7E6357621F033370A4', 'hasPrivateKey': True} Feb 12 19:58:51.308586 waagent[1519]: 2024-02-12T19:58:51.308523Z INFO ExtHandler Fetch goal state completed Feb 12 19:58:51.328411 waagent[1519]: 2024-02-12T19:58:51.328342Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1519 Feb 12 19:58:51.331595 waagent[1519]: 2024-02-12T19:58:51.331533Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:58:51.333033 waagent[1519]: 2024-02-12T19:58:51.332961Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:58:51.337585 waagent[1519]: 2024-02-12T19:58:51.337530Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:58:51.337930 waagent[1519]: 2024-02-12T19:58:51.337873Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:58:51.345505 waagent[1519]: 2024-02-12T19:58:51.345452Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:58:51.345940 waagent[1519]: 2024-02-12T19:58:51.345885Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:58:51.365579 waagent[1519]: 2024-02-12T19:58:51.365481Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 12 19:58:51.368346 waagent[1519]: 2024-02-12T19:58:51.368248Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 12 19:58:51.372999 waagent[1519]: 2024-02-12T19:58:51.372929Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:58:51.374354 waagent[1519]: 2024-02-12T19:58:51.374294Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:58:51.374788 waagent[1519]: 2024-02-12T19:58:51.374732Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:51.374943 waagent[1519]: 2024-02-12T19:58:51.374894Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:51.375498 waagent[1519]: 2024-02-12T19:58:51.375438Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:58:51.375777 waagent[1519]: 2024-02-12T19:58:51.375722Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:58:51.375777 waagent[1519]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:58:51.375777 waagent[1519]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:58:51.375777 waagent[1519]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:58:51.375777 waagent[1519]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:51.375777 waagent[1519]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:51.375777 waagent[1519]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:58:51.377984 waagent[1519]: 2024-02-12T19:58:51.377864Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:58:51.378951 waagent[1519]: 2024-02-12T19:58:51.378887Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:58:51.379551 waagent[1519]: 2024-02-12T19:58:51.379491Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:58:51.379806 waagent[1519]: 2024-02-12T19:58:51.379750Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:58:51.380301 waagent[1519]: 2024-02-12T19:58:51.380233Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:58:51.380491 waagent[1519]: 2024-02-12T19:58:51.380421Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:58:51.382871 waagent[1519]: 2024-02-12T19:58:51.382763Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:58:51.383197 waagent[1519]: 2024-02-12T19:58:51.383141Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:58:51.383651 waagent[1519]: 2024-02-12T19:58:51.383590Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
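The EnvHandler above has just added the "non root users may do a DNS TCP request to wireserver" rule, alongside the UID-0 ACCEPT and the DROP for other new wireserver connections shown in the firewall listings in this log. A sketch of equivalent iptables invocations; these mirror the listed rules but are not a transcript of what waagent executed, and would need root to run.

```python
#!/usr/bin/env python3
"""Sketch: the three wireserver OUTPUT rules from the firewall listings, as plain iptables calls."""
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS over TCP to the wireserver for every user ("tcp dpt:53").
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # Allow root (UID 0, i.e. the agent itself) to reach the wireserver.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop new wireserver connections from everyone else.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", *rule], check=True)
```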
Feb 12 19:58:51.386321 waagent[1519]: 2024-02-12T19:58:51.386242Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:58:51.387981 waagent[1519]: 2024-02-12T19:58:51.387909Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:58:51.387981 waagent[1519]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:58:51.387981 waagent[1519]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:58:51.387981 waagent[1519]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:8a:3f brd ff:ff:ff:ff:ff:ff Feb 12 19:58:51.387981 waagent[1519]: 3: enP64466s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9e:8a:3f brd ff:ff:ff:ff:ff:ff\ altname enP64466p0s2 Feb 12 19:58:51.387981 waagent[1519]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:58:51.387981 waagent[1519]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:58:51.387981 waagent[1519]: 2: eth0 inet 10.200.8.16/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:58:51.387981 waagent[1519]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:58:51.387981 waagent[1519]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:58:51.387981 waagent[1519]: 2: eth0 inet6 fe80::222:48ff:fe9e:8a3f/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:58:51.391373 waagent[1519]: 2024-02-12T19:58:51.391161Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:58:51.403425 waagent[1519]: 2024-02-12T19:58:51.403345Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:58:51.408674 waagent[1519]: 2024-02-12T19:58:51.408378Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:58:51.471499 waagent[1519]: 2024-02-12T19:58:51.471400Z INFO ExtHandler ExtHandler Feb 12 19:58:51.472717 waagent[1519]: 2024-02-12T19:58:51.472655Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 378a0cc9-d12a-4cdb-906e-cc9cbe0bf48e correlation 8f26702a-040f-4e6f-a66f-416a25b51ef6 created: 2024-02-12T19:57:10.546204Z] Feb 12 19:58:51.478225 waagent[1519]: 2024-02-12T19:58:51.478157Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
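The interface listing above shows eth0 (and its SR-IOV VF enP64466s1) with MAC 00:22:48:9e:8a:3f and the SLAAC link-local address fe80::222:48ff:fe9e:8a3f; the latter is just the modified EUI-64 form of the former. A small check that reproduces it:

```python
def mac_to_link_local(mac: str) -> str:
    """Modified EUI-64: flip the universal/local bit, insert ff:fe, prefix with fe80::/64."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the U/L bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    groups = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# MAC taken from the interface dump above.
print(mac_to_link_local("00:22:48:9e:8a:3f"))      # fe80::222:48ff:fe9e:8a3f
```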
Feb 12 19:58:51.481984 waagent[1519]: 2024-02-12T19:58:51.481894Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms] Feb 12 19:58:51.484348 waagent[1519]: 2024-02-12T19:58:51.484286Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:58:51.484348 waagent[1519]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:51.484348 waagent[1519]: pkts bytes target prot opt in out source destination Feb 12 19:58:51.484348 waagent[1519]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:51.484348 waagent[1519]: pkts bytes target prot opt in out source destination Feb 12 19:58:51.484348 waagent[1519]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:58:51.484348 waagent[1519]: pkts bytes target prot opt in out source destination Feb 12 19:58:51.484348 waagent[1519]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:58:51.484348 waagent[1519]: 114 14025 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:58:51.484348 waagent[1519]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:58:51.501673 waagent[1519]: 2024-02-12T19:58:51.501602Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:58:51.510351 waagent[1519]: 2024-02-12T19:58:51.510280Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 74BA099E-5D2F-497C-A983-11FE696E966D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:59:17.998831 systemd[1]: Created slice system-sshd.slice. Feb 12 19:59:18.000704 systemd[1]: Started sshd@0-10.200.8.16:22-10.200.12.6:49924.service. Feb 12 19:59:18.932126 sshd[1558]: Accepted publickey for core from 10.200.12.6 port 49924 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 19:59:18.933735 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:59:18.937751 systemd-logind[1304]: New session 3 of user core. Feb 12 19:59:18.939580 systemd[1]: Started session-3.scope. Feb 12 19:59:19.248532 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 12 19:59:19.469149 systemd[1]: Started sshd@1-10.200.8.16:22-10.200.12.6:49934.service. Feb 12 19:59:20.085242 sshd[1563]: Accepted publickey for core from 10.200.12.6 port 49934 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 19:59:20.086852 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:59:20.092496 systemd[1]: Started session-4.scope. Feb 12 19:59:20.093105 systemd-logind[1304]: New session 4 of user core. Feb 12 19:59:20.524771 sshd[1563]: pam_unix(sshd:session): session closed for user core Feb 12 19:59:20.528358 systemd[1]: sshd@1-10.200.8.16:22-10.200.12.6:49934.service: Deactivated successfully. Feb 12 19:59:20.529342 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:59:20.529928 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:59:20.530680 systemd-logind[1304]: Removed session 4. Feb 12 19:59:20.628076 systemd[1]: Started sshd@2-10.200.8.16:22-10.200.12.6:49948.service. Feb 12 19:59:21.246460 sshd[1569]: Accepted publickey for core from 10.200.12.6 port 49948 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 19:59:21.248098 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:59:21.253080 systemd[1]: Started session-5.scope. 
Feb 12 19:59:21.253684 systemd-logind[1304]: New session 5 of user core. Feb 12 19:59:21.681257 sshd[1569]: pam_unix(sshd:session): session closed for user core Feb 12 19:59:21.684378 systemd[1]: sshd@2-10.200.8.16:22-10.200.12.6:49948.service: Deactivated successfully. Feb 12 19:59:21.685697 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:59:21.685918 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:59:21.687072 systemd-logind[1304]: Removed session 5. Feb 12 19:59:21.785844 systemd[1]: Started sshd@3-10.200.8.16:22-10.200.12.6:49954.service. Feb 12 19:59:22.403506 sshd[1575]: Accepted publickey for core from 10.200.12.6 port 49954 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 19:59:22.405153 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:59:22.410502 systemd[1]: Started session-6.scope. Feb 12 19:59:22.411109 systemd-logind[1304]: New session 6 of user core. Feb 12 19:59:22.843609 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 12 19:59:22.846744 systemd[1]: sshd@3-10.200.8.16:22-10.200.12.6:49954.service: Deactivated successfully. Feb 12 19:59:22.847734 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:59:22.848427 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:59:22.849206 systemd-logind[1304]: Removed session 6. Feb 12 19:59:22.949361 systemd[1]: Started sshd@4-10.200.8.16:22-10.200.12.6:49956.service. Feb 12 19:59:23.567509 sshd[1581]: Accepted publickey for core from 10.200.12.6 port 49956 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 19:59:23.569092 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:59:23.573732 systemd[1]: Started session-7.scope. Feb 12 19:59:23.574347 systemd-logind[1304]: New session 7 of user core. Feb 12 19:59:24.170802 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:59:24.171080 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:59:25.076703 update_engine[1305]: I0212 19:59:25.076020 1305 update_attempter.cc:509] Updating boot flags... Feb 12 19:59:25.130333 systemd[1]: Starting docker.service... 
Feb 12 19:59:25.228856 env[1610]: time="2024-02-12T19:59:25.228796795Z" level=info msg="Starting up" Feb 12 19:59:25.244895 env[1610]: time="2024-02-12T19:59:25.244859128Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:59:25.245078 env[1610]: time="2024-02-12T19:59:25.245059829Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:59:25.245167 env[1610]: time="2024-02-12T19:59:25.245149629Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:59:25.245243 env[1610]: time="2024-02-12T19:59:25.245230329Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:59:25.247551 env[1610]: time="2024-02-12T19:59:25.247197833Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:59:25.247551 env[1610]: time="2024-02-12T19:59:25.247220833Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:59:25.247551 env[1610]: time="2024-02-12T19:59:25.247238833Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:59:25.247551 env[1610]: time="2024-02-12T19:59:25.247250533Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:59:25.256902 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1340925111-merged.mount: Deactivated successfully. Feb 12 19:59:25.350107 env[1610]: time="2024-02-12T19:59:25.348458940Z" level=info msg="Loading containers: start." Feb 12 19:59:25.535023 kernel: Initializing XFRM netlink socket Feb 12 19:59:25.579572 env[1610]: time="2024-02-12T19:59:25.579528012Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:59:25.686803 systemd-networkd[1469]: docker0: Link UP Feb 12 19:59:25.712891 env[1610]: time="2024-02-12T19:59:25.712854584Z" level=info msg="Loading containers: done." Feb 12 19:59:25.732686 env[1610]: time="2024-02-12T19:59:25.732641124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:59:25.732866 env[1610]: time="2024-02-12T19:59:25.732831125Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:59:25.732960 env[1610]: time="2024-02-12T19:59:25.732938025Z" level=info msg="Daemon has completed initialization" Feb 12 19:59:25.760889 systemd[1]: Started docker.service. Feb 12 19:59:25.765034 env[1610]: time="2024-02-12T19:59:25.764948890Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:59:25.781072 systemd[1]: Reloading. Feb 12 19:59:25.853557 /usr/lib/systemd/system-generators/torcx-generator[1821]: time="2024-02-12T19:59:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:59:25.858397 /usr/lib/systemd/system-generators/torcx-generator[1821]: time="2024-02-12T19:59:25Z" level=info msg="torcx already run" Feb 12 19:59:25.944590 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Feb 12 19:59:25.944610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:59:25.962730 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:59:26.046080 systemd[1]: Started kubelet.service. Feb 12 19:59:26.119788 kubelet[1882]: E0212 19:59:26.119729 1882 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:59:26.121409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:59:26.121518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:59:31.017896 env[1317]: time="2024-02-12T19:59:31.017728944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 12 19:59:31.641185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1947275514.mount: Deactivated successfully. Feb 12 19:59:33.532771 env[1317]: time="2024-02-12T19:59:33.532710354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:33.541369 env[1317]: time="2024-02-12T19:59:33.541320065Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:33.544286 env[1317]: time="2024-02-12T19:59:33.544256568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:33.551180 env[1317]: time="2024-02-12T19:59:33.551145676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:33.551782 env[1317]: time="2024-02-12T19:59:33.551746277Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 12 19:59:33.562040 env[1317]: time="2024-02-12T19:59:33.562012990Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 12 19:59:35.563225 env[1317]: time="2024-02-12T19:59:35.563170469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:35.570152 env[1317]: time="2024-02-12T19:59:35.570110676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:35.574831 env[1317]: time="2024-02-12T19:59:35.574797981Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:35.578847 env[1317]: time="2024-02-12T19:59:35.578811585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:35.579457 env[1317]: time="2024-02-12T19:59:35.579421686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 12 19:59:35.591636 env[1317]: time="2024-02-12T19:59:35.591600799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 12 19:59:36.320924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:59:36.321206 systemd[1]: Stopped kubelet.service. Feb 12 19:59:36.323096 systemd[1]: Started kubelet.service. Feb 12 19:59:36.401416 kubelet[1915]: E0212 19:59:36.401365 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:59:36.405491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:59:36.405609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:59:36.864339 env[1317]: time="2024-02-12T19:59:36.864284005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:36.871075 env[1317]: time="2024-02-12T19:59:36.871039412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:36.875910 env[1317]: time="2024-02-12T19:59:36.875879217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:36.879603 env[1317]: time="2024-02-12T19:59:36.879575220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:36.880197 env[1317]: time="2024-02-12T19:59:36.880166921Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 12 19:59:36.890381 env[1317]: time="2024-02-12T19:59:36.890342231Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 12 19:59:37.955221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106923040.mount: Deactivated successfully. 
Feb 12 19:59:38.519358 env[1317]: time="2024-02-12T19:59:38.519302242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.524785 env[1317]: time="2024-02-12T19:59:38.524745446Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.528782 env[1317]: time="2024-02-12T19:59:38.528745450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.532379 env[1317]: time="2024-02-12T19:59:38.532341853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.532835 env[1317]: time="2024-02-12T19:59:38.532802953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 12 19:59:38.543189 env[1317]: time="2024-02-12T19:59:38.543167563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:59:38.965666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744045786.mount: Deactivated successfully. Feb 12 19:59:38.985570 env[1317]: time="2024-02-12T19:59:38.985526053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.993003 env[1317]: time="2024-02-12T19:59:38.992957959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:38.996715 env[1317]: time="2024-02-12T19:59:38.996682063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:39.002137 env[1317]: time="2024-02-12T19:59:39.002108267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:39.002543 env[1317]: time="2024-02-12T19:59:39.002514368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:59:39.012976 env[1317]: time="2024-02-12T19:59:39.012949876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 12 19:59:39.478067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466650107.mount: Deactivated successfully. 
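Each containerd "PullImage" entry above pairs the requested tag with the digest-addressed reference it resolved to. A small parser for that pattern, fed with one message fragment copied from the log (the inner quotes appear backslash-escaped in the journal output, which the regex accounts for):

```python
import re

# Fragment copied from the pause:3.9 pull above.
line = r'level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""'

pattern = re.compile(
    r'PullImage \\"(?P<image>[^\\"]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]{64})\\"'
)
m = pattern.search(line)
if m:
    print(m.group("image"), "->", m.group("ref"))
    # registry.k8s.io/pause:3.9 -> sha256:e6f18168...
```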
Feb 12 19:59:44.070967 env[1317]: time="2024-02-12T19:59:44.070903159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:44.075221 env[1317]: time="2024-02-12T19:59:44.075176761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:44.078434 env[1317]: time="2024-02-12T19:59:44.078378963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:44.081900 env[1317]: time="2024-02-12T19:59:44.081866865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:44.082739 env[1317]: time="2024-02-12T19:59:44.082706166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 12 19:59:44.092279 env[1317]: time="2024-02-12T19:59:44.092250771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 19:59:44.566459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102916664.mount: Deactivated successfully. Feb 12 19:59:45.260436 env[1317]: time="2024-02-12T19:59:45.260381461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:45.270046 env[1317]: time="2024-02-12T19:59:45.270009067Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:45.274340 env[1317]: time="2024-02-12T19:59:45.274304869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:45.278343 env[1317]: time="2024-02-12T19:59:45.278312571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:45.278745 env[1317]: time="2024-02-12T19:59:45.278714972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 19:59:46.570911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:59:46.571188 systemd[1]: Stopped kubelet.service. Feb 12 19:59:46.577105 systemd[1]: Started kubelet.service. 
Feb 12 19:59:46.644899 kubelet[2000]: E0212 19:59:46.644844 2000 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:59:46.646867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:59:46.647049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:59:48.480612 systemd[1]: Stopped kubelet.service. Feb 12 19:59:48.494556 systemd[1]: Reloading. Feb 12 19:59:48.573499 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2024-02-12T19:59:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:59:48.588479 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2024-02-12T19:59:48Z" level=info msg="torcx already run" Feb 12 19:59:48.661620 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:59:48.661641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:59:48.679328 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:59:48.767790 systemd[1]: Started kubelet.service. Feb 12 19:59:48.812317 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:59:48.812656 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:59:48.812656 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
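The kubelet crash loop above comes down to one missing file: the service points at /var/lib/kubelet/config.yaml (the same config file the deprecation warnings say the flags should live in), and it does not exist yet; on a node like this it is normally written by `kubeadm join`. A sketch of that precondition check; the two identifier fields printed are the standard KubeletConfiguration ones, everything else about a real config is omitted, so treat this purely as an illustration.

```python
#!/usr/bin/env python3
"""Sketch: the missing-config precondition behind the kubelet restart loop above."""
import os

CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.exists(CONFIG):
    print(f"{CONFIG} present; kubelet should get past config loading")
else:
    print(f"{CONFIG} missing -> run.go:74 'failed to load kubelet config file'")
    print("a kubeadm-managed node gets a config starting with:")
    print("  apiVersion: kubelet.config.k8s.io/v1beta1")
    print("  kind: KubeletConfiguration")
```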
Feb 12 19:59:48.812777 kubelet[2092]: I0212 19:59:48.812698 2092 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:59:49.063382 kubelet[2092]: I0212 19:59:49.063274 2092 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:59:49.063382 kubelet[2092]: I0212 19:59:49.063300 2092 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:59:49.063807 kubelet[2092]: I0212 19:59:49.063777 2092 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:59:49.068341 kubelet[2092]: E0212 19:59:49.068321 2092 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.068476 kubelet[2092]: I0212 19:59:49.068422 2092 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:59:49.073902 kubelet[2092]: I0212 19:59:49.073880 2092 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:59:49.074136 kubelet[2092]: I0212 19:59:49.074117 2092 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:59:49.074304 kubelet[2092]: I0212 19:59:49.074285 2092 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:59:49.074447 kubelet[2092]: I0212 19:59:49.074335 2092 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:59:49.074447 kubelet[2092]: I0212 19:59:49.074350 2092 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:59:49.074534 kubelet[2092]: I0212 19:59:49.074469 2092 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:59:49.074592 kubelet[2092]: I0212 19:59:49.074578 2092 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:59:49.074635 kubelet[2092]: 
I0212 19:59:49.074604 2092 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:59:49.074692 kubelet[2092]: I0212 19:59:49.074642 2092 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:59:49.074692 kubelet[2092]: I0212 19:59:49.074662 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:59:49.075513 kubelet[2092]: I0212 19:59:49.075493 2092 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:59:49.075901 kubelet[2092]: W0212 19:59:49.075886 2092 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:59:49.076555 kubelet[2092]: I0212 19:59:49.076538 2092 server.go:1232] "Started kubelet" Feb 12 19:59:49.076794 kubelet[2092]: W0212 19:59:49.076756 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d5221102be&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.076900 kubelet[2092]: E0212 19:59:49.076890 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d5221102be&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.083645 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:59:49.083726 kubelet[2092]: E0212 19:59:49.083037 2092 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-d5221102be.17b335f01bb06616", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-d5221102be", UID:"ci-3510.3.2-a-d5221102be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d5221102be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 59, 49, 76518422, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 59, 49, 76518422, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-d5221102be"}': 'Post "https://10.200.8.16:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.16:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:59:49.083726 kubelet[2092]: W0212 19:59:49.083268 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.083726 kubelet[2092]: E0212 19:59:49.083303 2092 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.083908 kubelet[2092]: I0212 19:59:49.083357 2092 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:59:49.084557 kubelet[2092]: I0212 19:59:49.084358 2092 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:59:49.085114 kubelet[2092]: I0212 19:59:49.085089 2092 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:59:49.086129 kubelet[2092]: I0212 19:59:49.086106 2092 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:59:49.087305 kubelet[2092]: I0212 19:59:49.087285 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:59:49.088075 kubelet[2092]: E0212 19:59:49.087012 2092 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:59:49.088155 kubelet[2092]: E0212 19:59:49.088086 2092 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:59:49.091243 kubelet[2092]: E0212 19:59:49.090530 2092 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d5221102be\" not found" Feb 12 19:59:49.091243 kubelet[2092]: I0212 19:59:49.090556 2092 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:59:49.091243 kubelet[2092]: I0212 19:59:49.090669 2092 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:59:49.091243 kubelet[2092]: I0212 19:59:49.090725 2092 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:59:49.091243 kubelet[2092]: W0212 19:59:49.091042 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.091243 kubelet[2092]: E0212 19:59:49.091085 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.092157 kubelet[2092]: E0212 19:59:49.091687 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="200ms" Feb 12 19:59:49.152221 kubelet[2092]: I0212 19:59:49.152194 2092 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:59:49.152221 kubelet[2092]: I0212 19:59:49.152220 2092 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:59:49.152443 kubelet[2092]: I0212 19:59:49.152242 2092 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:59:49.192657 kubelet[2092]: I0212 19:59:49.192634 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.193146 
kubelet[2092]: E0212 19:59:49.193120 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.201671 kubelet[2092]: I0212 19:59:49.201650 2092 policy_none.go:49] "None policy: Start" Feb 12 19:59:49.202362 kubelet[2092]: I0212 19:59:49.202323 2092 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:59:49.202447 kubelet[2092]: I0212 19:59:49.202369 2092 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:59:49.209552 systemd[1]: Created slice kubepods.slice. Feb 12 19:59:49.214475 kubelet[2092]: I0212 19:59:49.214454 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:59:49.216558 kubelet[2092]: I0212 19:59:49.216537 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 12 19:59:49.216652 kubelet[2092]: I0212 19:59:49.216565 2092 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:59:49.216652 kubelet[2092]: I0212 19:59:49.216586 2092 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:59:49.216652 kubelet[2092]: E0212 19:59:49.216651 2092 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:59:49.218042 kubelet[2092]: W0212 19:59:49.217909 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.218042 kubelet[2092]: E0212 19:59:49.217947 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.218820 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:59:49.222905 systemd[1]: Created slice kubepods-besteffort.slice. 
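The repeated "dial tcp 10.200.8.16:6443: connect: connection refused" entries above come from the kubelet's client-go reflectors, the node-lease controller, and the node registration attempts retrying against an API server that is not listening yet; on this node the kube-apiserver itself runs as a static pod that this same kubelet still has to start. A minimal Go sketch of that reachability check, useful when following such a bootstrap by hand (the endpoint is copied from the log; the program is purely illustrative and not part of the node's tooling):

// apiserver_probe.go - reproduces the TCP reachability check behind the
// "connection refused" errors above; illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the reflector errors in the log.
	endpoint := "10.200.8.16:6443"
	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		// Mirrors the kubelet's dial errors until the static kube-apiserver
		// pod starts listening on the port.
		fmt.Printf("apiserver not reachable yet: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver TCP port is accepting connections")
}

Once the static kube-apiserver container started below (19:59:50), these dial errors stop and the node registration at 19:59:52 finally succeeds.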
Feb 12 19:59:49.229622 kubelet[2092]: I0212 19:59:49.229605 2092 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:59:49.230032 kubelet[2092]: I0212 19:59:49.229965 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:59:49.232250 kubelet[2092]: E0212 19:59:49.232115 2092 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-d5221102be\" not found" Feb 12 19:59:49.292594 kubelet[2092]: E0212 19:59:49.292556 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="400ms" Feb 12 19:59:49.317157 kubelet[2092]: I0212 19:59:49.316929 2092 topology_manager.go:215] "Topology Admit Handler" podUID="00b715281efe6a05e2f2dfc773df4652" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.319558 kubelet[2092]: I0212 19:59:49.319535 2092 topology_manager.go:215] "Topology Admit Handler" podUID="53f7b1bb9ec480e6427a8496a2b10fda" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.321804 kubelet[2092]: I0212 19:59:49.321625 2092 topology_manager.go:215] "Topology Admit Handler" podUID="d061f7f1dc1dba4dc2e51f2cc245ce20" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.328311 systemd[1]: Created slice kubepods-burstable-pod00b715281efe6a05e2f2dfc773df4652.slice. Feb 12 19:59:49.338875 systemd[1]: Created slice kubepods-burstable-pod53f7b1bb9ec480e6427a8496a2b10fda.slice. Feb 12 19:59:49.343271 systemd[1]: Created slice kubepods-burstable-podd061f7f1dc1dba4dc2e51f2cc245ce20.slice. 
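The three "Topology Admit Handler" entries above are the control-plane static pods the kubelet reads from the static pod path it logged earlier (/etc/kubernetes/manifests); each is then placed into its own kubepods-burstable-pod<UID>.slice. A minimal sketch that lists those manifests on the node; the *.yaml pattern is an assumption, only the directory itself appears in the log:

// list_static_pods.go - lists the static pod manifests behind the admit
// handler entries above; illustrative only.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Directory taken from the "Adding static pod path" entry in the log.
	manifests, err := filepath.Glob("/etc/kubernetes/manifests/*.yaml")
	if err != nil {
		panic(err) // Glob only errors on a malformed pattern
	}
	for _, m := range manifests {
		fmt.Println(m) // expect kube-apiserver, kube-controller-manager, kube-scheduler
	}
}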
Feb 12 19:59:49.395023 kubelet[2092]: I0212 19:59:49.394971 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.395368 kubelet[2092]: E0212 19:59:49.395347 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.491769 kubelet[2092]: I0212 19:59:49.491729 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00b715281efe6a05e2f2dfc773df4652-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d5221102be\" (UID: \"00b715281efe6a05e2f2dfc773df4652\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.491769 kubelet[2092]: I0212 19:59:49.491788 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492046 kubelet[2092]: I0212 19:59:49.491827 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492046 kubelet[2092]: I0212 19:59:49.491863 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492046 kubelet[2092]: I0212 19:59:49.491893 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492046 kubelet[2092]: I0212 19:59:49.491955 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492046 kubelet[2092]: I0212 19:59:49.492034 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492320 kubelet[2092]: I0212 19:59:49.492071 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.492320 kubelet[2092]: I0212 19:59:49.492111 2092 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.638393 env[1317]: time="2024-02-12T19:59:49.638337671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d5221102be,Uid:00b715281efe6a05e2f2dfc773df4652,Namespace:kube-system,Attempt:0,}" Feb 12 19:59:49.642214 env[1317]: time="2024-02-12T19:59:49.642180595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d5221102be,Uid:53f7b1bb9ec480e6427a8496a2b10fda,Namespace:kube-system,Attempt:0,}" Feb 12 19:59:49.646041 env[1317]: time="2024-02-12T19:59:49.645764210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d5221102be,Uid:d061f7f1dc1dba4dc2e51f2cc245ce20,Namespace:kube-system,Attempt:0,}" Feb 12 19:59:49.694045 kubelet[2092]: E0212 19:59:49.693985 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="800ms" Feb 12 19:59:49.797761 kubelet[2092]: I0212 19:59:49.797727 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.798072 kubelet[2092]: E0212 19:59:49.798049 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:49.906065 kubelet[2092]: W0212 19:59:49.905948 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:49.906065 kubelet[2092]: E0212 19:59:49.905986 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.011895 kubelet[2092]: W0212 19:59:50.011833 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d5221102be&limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.011895 kubelet[2092]: E0212 19:59:50.011900 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d5221102be&limit=500&resourceVersion=0": 
dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.091501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625419085.mount: Deactivated successfully. Feb 12 19:59:50.116098 env[1317]: time="2024-02-12T19:59:50.116056516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.120090 env[1317]: time="2024-02-12T19:59:50.120056141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.131261 env[1317]: time="2024-02-12T19:59:50.131224690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.135038 env[1317]: time="2024-02-12T19:59:50.135006508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.139533 env[1317]: time="2024-02-12T19:59:50.139495848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.142519 env[1317]: time="2024-02-12T19:59:50.142485942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.144561 env[1317]: time="2024-02-12T19:59:50.144528106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.147130 env[1317]: time="2024-02-12T19:59:50.147100186Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.152127 env[1317]: time="2024-02-12T19:59:50.152091942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.155381 env[1317]: time="2024-02-12T19:59:50.155351344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.156404 kubelet[2092]: W0212 19:59:50.156320 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.156404 kubelet[2092]: E0212 19:59:50.156360 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.16:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.174969 env[1317]: time="2024-02-12T19:59:50.174928155Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.188758 env[1317]: time="2024-02-12T19:59:50.188717386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:59:50.231243 env[1317]: time="2024-02-12T19:59:50.231173512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:59:50.231243 env[1317]: time="2024-02-12T19:59:50.231209913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:59:50.231519 env[1317]: time="2024-02-12T19:59:50.231224214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:59:50.231640 env[1317]: time="2024-02-12T19:59:50.231497922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:59:50.231640 env[1317]: time="2024-02-12T19:59:50.231528323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:59:50.231640 env[1317]: time="2024-02-12T19:59:50.231549324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:59:50.232261 env[1317]: time="2024-02-12T19:59:50.232206944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49 pid=2138 runtime=io.containerd.runc.v2 Feb 12 19:59:50.232395 env[1317]: time="2024-02-12T19:59:50.232305047Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41d8f3da767c181087713ce7a66fdf1c934c95f50cd49b458d1ea86a0dad1b74 pid=2137 runtime=io.containerd.runc.v2 Feb 12 19:59:50.256661 systemd[1]: Started cri-containerd-2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49.scope. Feb 12 19:59:50.272054 systemd[1]: Started cri-containerd-41d8f3da767c181087713ce7a66fdf1c934c95f50cd49b458d1ea86a0dad1b74.scope. Feb 12 19:59:50.279908 env[1317]: time="2024-02-12T19:59:50.279842932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:59:50.280548 env[1317]: time="2024-02-12T19:59:50.280514653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:59:50.280694 env[1317]: time="2024-02-12T19:59:50.280666458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:59:50.281061 env[1317]: time="2024-02-12T19:59:50.280987468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d pid=2189 runtime=io.containerd.runc.v2 Feb 12 19:59:50.304581 systemd[1]: Started cri-containerd-8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d.scope. 
Feb 12 19:59:50.350167 kubelet[2092]: W0212 19:59:50.350059 2092 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.350167 kubelet[2092]: E0212 19:59:50.350129 2092 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.16:6443: connect: connection refused Feb 12 19:59:50.354595 env[1317]: time="2024-02-12T19:59:50.352836012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d5221102be,Uid:53f7b1bb9ec480e6427a8496a2b10fda,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d8f3da767c181087713ce7a66fdf1c934c95f50cd49b458d1ea86a0dad1b74\"" Feb 12 19:59:50.358788 env[1317]: time="2024-02-12T19:59:50.358751297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d5221102be,Uid:00b715281efe6a05e2f2dfc773df4652,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49\"" Feb 12 19:59:50.360768 env[1317]: time="2024-02-12T19:59:50.360736659Z" level=info msg="CreateContainer within sandbox \"41d8f3da767c181087713ce7a66fdf1c934c95f50cd49b458d1ea86a0dad1b74\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:59:50.360934 env[1317]: time="2024-02-12T19:59:50.360905564Z" level=info msg="CreateContainer within sandbox \"2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:59:50.380681 env[1317]: time="2024-02-12T19:59:50.380636780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d5221102be,Uid:d061f7f1dc1dba4dc2e51f2cc245ce20,Namespace:kube-system,Attempt:0,} returns sandbox id \"8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d\"" Feb 12 19:59:50.383601 env[1317]: time="2024-02-12T19:59:50.383575972Z" level=info msg="CreateContainer within sandbox \"8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:59:50.420778 env[1317]: time="2024-02-12T19:59:50.420693032Z" level=info msg="CreateContainer within sandbox \"2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805\"" Feb 12 19:59:50.421768 env[1317]: time="2024-02-12T19:59:50.421740264Z" level=info msg="StartContainer for \"76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805\"" Feb 12 19:59:50.434647 env[1317]: time="2024-02-12T19:59:50.434593066Z" level=info msg="CreateContainer within sandbox \"41d8f3da767c181087713ce7a66fdf1c934c95f50cd49b458d1ea86a0dad1b74\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a4909752c08a24b4fb6c22e8c166f4be69e118cf6503d32a8adff0949fa551f\"" Feb 12 19:59:50.435189 env[1317]: time="2024-02-12T19:59:50.435155283Z" level=info msg="StartContainer for \"9a4909752c08a24b4fb6c22e8c166f4be69e118cf6503d32a8adff0949fa551f\"" Feb 12 19:59:50.440700 systemd[1]: Started 
cri-containerd-76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805.scope. Feb 12 19:59:50.445050 env[1317]: time="2024-02-12T19:59:50.445013391Z" level=info msg="CreateContainer within sandbox \"8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e\"" Feb 12 19:59:50.446655 env[1317]: time="2024-02-12T19:59:50.446627042Z" level=info msg="StartContainer for \"45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e\"" Feb 12 19:59:50.475208 systemd[1]: Started cri-containerd-9a4909752c08a24b4fb6c22e8c166f4be69e118cf6503d32a8adff0949fa551f.scope. Feb 12 19:59:50.488627 systemd[1]: Started cri-containerd-45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e.scope. Feb 12 19:59:50.494551 kubelet[2092]: E0212 19:59:50.494520 2092 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": dial tcp 10.200.8.16:6443: connect: connection refused" interval="1.6s" Feb 12 19:59:50.574322 env[1317]: time="2024-02-12T19:59:50.574277829Z" level=info msg="StartContainer for \"76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805\" returns successfully" Feb 12 19:59:50.575187 env[1317]: time="2024-02-12T19:59:50.575148956Z" level=info msg="StartContainer for \"9a4909752c08a24b4fb6c22e8c166f4be69e118cf6503d32a8adff0949fa551f\" returns successfully" Feb 12 19:59:50.580392 env[1317]: time="2024-02-12T19:59:50.580361819Z" level=info msg="StartContainer for \"45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e\" returns successfully" Feb 12 19:59:50.600190 kubelet[2092]: I0212 19:59:50.600166 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:50.600624 kubelet[2092]: E0212 19:59:50.600600 2092 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.16:6443/api/v1/nodes\": dial tcp 10.200.8.16:6443: connect: connection refused" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:52.202653 kubelet[2092]: I0212 19:59:52.202626 2092 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:53.246491 kubelet[2092]: I0212 19:59:53.246451 2092 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:53.283136 kubelet[2092]: E0212 19:59:53.283039 2092 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-d5221102be.17b335f01bb06616", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-d5221102be", UID:"ci-3510.3.2-a-d5221102be", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d5221102be"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 59, 49, 76518422, 
time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 59, 49, 76518422, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-d5221102be"}': 'namespaces "default" not found' (will not retry!) Feb 12 19:59:54.090268 kubelet[2092]: I0212 19:59:54.090235 2092 apiserver.go:52] "Watching apiserver" Feb 12 19:59:54.190812 kubelet[2092]: I0212 19:59:54.190781 2092 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:59:55.868218 systemd[1]: Reloading. Feb 12 19:59:55.952961 /usr/lib/systemd/system-generators/torcx-generator[2376]: time="2024-02-12T19:59:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:59:55.956154 /usr/lib/systemd/system-generators/torcx-generator[2376]: time="2024-02-12T19:59:55Z" level=info msg="torcx already run" Feb 12 19:59:55.960642 kubelet[2092]: W0212 19:59:55.960506 2092 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:59:56.053044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:59:56.053063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:59:56.073558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:59:56.182213 kubelet[2092]: I0212 19:59:56.182099 2092 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:59:56.184670 systemd[1]: Stopping kubelet.service... Feb 12 19:59:56.199420 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:59:56.199642 systemd[1]: Stopped kubelet.service. Feb 12 19:59:56.201904 systemd[1]: Started kubelet.service. Feb 12 19:59:56.281740 kubelet[2438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:59:56.282065 kubelet[2438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:59:56.282113 kubelet[2438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 19:59:56.282235 kubelet[2438]: I0212 19:59:56.282204 2438 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:59:56.286526 kubelet[2438]: I0212 19:59:56.286497 2438 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:59:56.286526 kubelet[2438]: I0212 19:59:56.286516 2438 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:59:56.286730 kubelet[2438]: I0212 19:59:56.286711 2438 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:59:56.288132 kubelet[2438]: I0212 19:59:56.288108 2438 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:59:56.289127 kubelet[2438]: I0212 19:59:56.289104 2438 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:59:56.295983 kubelet[2438]: I0212 19:59:56.295954 2438 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:59:56.296208 kubelet[2438]: I0212 19:59:56.296185 2438 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:59:56.296390 kubelet[2438]: I0212 19:59:56.296362 2438 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:59:56.296566 kubelet[2438]: I0212 19:59:56.296404 2438 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:59:56.296566 kubelet[2438]: I0212 19:59:56.296418 2438 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:59:56.296566 kubelet[2438]: I0212 19:59:56.296464 2438 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:59:56.296984 kubelet[2438]: I0212 19:59:56.296970 2438 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:59:56.297067 kubelet[2438]: I0212 19:59:56.297011 2438 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:59:56.297067 kubelet[2438]: I0212 19:59:56.297045 2438 kubelet.go:309] "Adding apiserver pod source" Feb 12 
19:59:56.297067 kubelet[2438]: I0212 19:59:56.297066 2438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:59:56.297745 kubelet[2438]: I0212 19:59:56.297629 2438 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:59:56.298200 kubelet[2438]: I0212 19:59:56.298179 2438 server.go:1232] "Started kubelet" Feb 12 19:59:56.300093 kubelet[2438]: I0212 19:59:56.300070 2438 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:59:56.306129 kubelet[2438]: I0212 19:59:56.306114 2438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:59:56.311786 kubelet[2438]: I0212 19:59:56.308376 2438 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:59:56.312687 kubelet[2438]: I0212 19:59:56.312669 2438 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:59:56.313165 kubelet[2438]: I0212 19:59:56.313149 2438 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:59:56.321922 kubelet[2438]: E0212 19:59:56.321903 2438 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:59:56.322058 kubelet[2438]: E0212 19:59:56.322046 2438 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:59:56.328067 kubelet[2438]: E0212 19:59:56.328045 2438 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d5221102be\" not found" Feb 12 19:59:56.328156 kubelet[2438]: I0212 19:59:56.328081 2438 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:59:56.328391 kubelet[2438]: I0212 19:59:56.328370 2438 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:59:56.328530 kubelet[2438]: I0212 19:59:56.328514 2438 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:59:56.337091 kubelet[2438]: I0212 19:59:56.337072 2438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:59:56.338373 kubelet[2438]: I0212 19:59:56.338356 2438 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 12 19:59:56.338491 kubelet[2438]: I0212 19:59:56.338482 2438 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:59:56.338565 kubelet[2438]: I0212 19:59:56.338558 2438 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:59:56.338671 kubelet[2438]: E0212 19:59:56.338661 2438 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:59:56.393865 kubelet[2438]: I0212 19:59:56.393837 2438 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:59:56.394143 kubelet[2438]: I0212 19:59:56.394130 2438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:59:56.394231 kubelet[2438]: I0212 19:59:56.394223 2438 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:59:56.394525 kubelet[2438]: I0212 19:59:56.394507 2438 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:59:56.394650 kubelet[2438]: I0212 19:59:56.394641 2438 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 12 19:59:56.394722 kubelet[2438]: I0212 19:59:56.394715 2438 policy_none.go:49] "None policy: Start" Feb 12 19:59:56.396355 kubelet[2438]: I0212 19:59:56.396336 2438 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:59:56.396489 kubelet[2438]: I0212 19:59:56.396479 2438 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:59:56.397128 kubelet[2438]: I0212 19:59:56.397109 2438 state_mem.go:75] "Updated machine memory state" Feb 12 19:59:56.401344 kubelet[2438]: I0212 19:59:56.401329 2438 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:59:56.404090 kubelet[2438]: I0212 19:59:56.404075 2438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:59:56.431347 kubelet[2438]: I0212 19:59:56.431330 2438 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.439419 kubelet[2438]: I0212 19:59:56.439349 2438 topology_manager.go:215] "Topology Admit Handler" podUID="d061f7f1dc1dba4dc2e51f2cc245ce20" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.439949 kubelet[2438]: I0212 19:59:56.439931 2438 topology_manager.go:215] "Topology Admit Handler" podUID="00b715281efe6a05e2f2dfc773df4652" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.440125 kubelet[2438]: I0212 19:59:56.440113 2438 topology_manager.go:215] "Topology Admit Handler" podUID="53f7b1bb9ec480e6427a8496a2b10fda" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.446478 kubelet[2438]: W0212 19:59:56.446456 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:59:56.446707 kubelet[2438]: I0212 19:59:56.446690 2438 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.446863 kubelet[2438]: I0212 19:59:56.446851 2438 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.452812 kubelet[2438]: W0212 19:59:56.450738 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:59:56.457170 kubelet[2438]: 
W0212 19:59:56.457146 2438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:59:56.457350 kubelet[2438]: E0212 19:59:56.457326 2438 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-d5221102be\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530015 kubelet[2438]: I0212 19:59:56.529965 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530163 kubelet[2438]: I0212 19:59:56.530043 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530163 kubelet[2438]: I0212 19:59:56.530081 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530163 kubelet[2438]: I0212 19:59:56.530113 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530163 kubelet[2438]: I0212 19:59:56.530146 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530404 kubelet[2438]: I0212 19:59:56.530183 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d061f7f1dc1dba4dc2e51f2cc245ce20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d5221102be\" (UID: \"d061f7f1dc1dba4dc2e51f2cc245ce20\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530404 kubelet[2438]: I0212 19:59:56.530216 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00b715281efe6a05e2f2dfc773df4652-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d5221102be\" (UID: \"00b715281efe6a05e2f2dfc773df4652\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530404 kubelet[2438]: I0212 19:59:56.530267 2438 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:56.530404 kubelet[2438]: I0212 19:59:56.530309 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f7b1bb9ec480e6427a8496a2b10fda-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d5221102be\" (UID: \"53f7b1bb9ec480e6427a8496a2b10fda\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" Feb 12 19:59:57.298256 kubelet[2438]: I0212 19:59:57.298214 2438 apiserver.go:52] "Watching apiserver" Feb 12 19:59:57.328859 kubelet[2438]: I0212 19:59:57.328814 2438 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:59:57.396755 kubelet[2438]: I0212 19:59:57.396715 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d5221102be" podStartSLOduration=2.396627202 podCreationTimestamp="2024-02-12 19:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:59:57.394703552 +0000 UTC m=+1.188065528" watchObservedRunningTime="2024-02-12 19:59:57.396627202 +0000 UTC m=+1.189989278" Feb 12 19:59:57.411381 kubelet[2438]: I0212 19:59:57.411349 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d5221102be" podStartSLOduration=1.411294379 podCreationTimestamp="2024-02-12 19:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:59:57.405557231 +0000 UTC m=+1.198919207" watchObservedRunningTime="2024-02-12 19:59:57.411294379 +0000 UTC m=+1.204656455" Feb 12 19:59:57.424429 kubelet[2438]: I0212 19:59:57.424389 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d5221102be" podStartSLOduration=1.424349415 podCreationTimestamp="2024-02-12 19:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:59:57.411896494 +0000 UTC m=+1.205258570" watchObservedRunningTime="2024-02-12 19:59:57.424349415 +0000 UTC m=+1.217711391" Feb 12 19:59:58.474363 sudo[1584]: pam_unix(sudo:session): session closed for user root Feb 12 19:59:58.588785 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 12 19:59:58.592376 systemd[1]: sshd@4-10.200.8.16:22-10.200.12.6:49956.service: Deactivated successfully. Feb 12 19:59:58.593536 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:59:58.593800 systemd[1]: session-7.scope: Consumed 3.605s CPU time. Feb 12 19:59:58.594549 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:59:58.595635 systemd-logind[1304]: Removed session 7. 
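After the systemd reload and kubelet restart above, the new kubelet (PID 2438) again logs "Client rotation is on" but this time loads an existing client certificate and key pair from /var/lib/kubelet/pki/kubelet-client-current.pem, so it does not repeat the certificate-signing-request bootstrap that initially failed with "connection refused". A small sketch for inspecting that rotated certificate's validity window (path taken from the log; illustrative only, run on the node with root privileges):

// kubelet_client_cert.go - prints subject and validity of the kubelet's
// rotated client certificate; illustrative only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds the client certificate and key; walk the PEM blocks and
	// report every CERTIFICATE block found.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}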
Feb 12 20:00:09.511424 kubelet[2438]: I0212 20:00:09.511392 2438 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:00:09.512521 env[1317]: time="2024-02-12T20:00:09.512475305Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:00:09.512889 kubelet[2438]: I0212 20:00:09.512686 2438 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:00:10.371070 kubelet[2438]: I0212 20:00:10.371031 2438 topology_manager.go:215] "Topology Admit Handler" podUID="0c6d97d2-7e00-4f43-8672-c7f705170382" podNamespace="kube-system" podName="kube-proxy-dqpqk" Feb 12 20:00:10.377229 systemd[1]: Created slice kubepods-besteffort-pod0c6d97d2_7e00_4f43_8672_c7f705170382.slice. Feb 12 20:00:10.400289 kubelet[2438]: I0212 20:00:10.400256 2438 topology_manager.go:215] "Topology Admit Handler" podUID="db57b715-35df-45bc-b92d-5e7efa5bf8c3" podNamespace="kube-flannel" podName="kube-flannel-ds-9tbjk" Feb 12 20:00:10.406434 systemd[1]: Created slice kubepods-burstable-poddb57b715_35df_45bc_b92d_5e7efa5bf8c3.slice. Feb 12 20:00:10.413473 kubelet[2438]: I0212 20:00:10.413449 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdzqg\" (UniqueName: \"kubernetes.io/projected/0c6d97d2-7e00-4f43-8672-c7f705170382-kube-api-access-sdzqg\") pod \"kube-proxy-dqpqk\" (UID: \"0c6d97d2-7e00-4f43-8672-c7f705170382\") " pod="kube-system/kube-proxy-dqpqk" Feb 12 20:00:10.413698 kubelet[2438]: I0212 20:00:10.413679 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c6d97d2-7e00-4f43-8672-c7f705170382-xtables-lock\") pod \"kube-proxy-dqpqk\" (UID: \"0c6d97d2-7e00-4f43-8672-c7f705170382\") " pod="kube-system/kube-proxy-dqpqk" Feb 12 20:00:10.413842 kubelet[2438]: I0212 20:00:10.413828 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c6d97d2-7e00-4f43-8672-c7f705170382-lib-modules\") pod \"kube-proxy-dqpqk\" (UID: \"0c6d97d2-7e00-4f43-8672-c7f705170382\") " pod="kube-system/kube-proxy-dqpqk" Feb 12 20:00:10.413983 kubelet[2438]: I0212 20:00:10.413970 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqccz\" (UniqueName: \"kubernetes.io/projected/db57b715-35df-45bc-b92d-5e7efa5bf8c3-kube-api-access-fqccz\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.414139 kubelet[2438]: I0212 20:00:10.414126 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db57b715-35df-45bc-b92d-5e7efa5bf8c3-xtables-lock\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.414278 kubelet[2438]: I0212 20:00:10.414264 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/db57b715-35df-45bc-b92d-5e7efa5bf8c3-run\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.414417 kubelet[2438]: I0212 20:00:10.414403 
2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/db57b715-35df-45bc-b92d-5e7efa5bf8c3-cni-plugin\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.414574 kubelet[2438]: I0212 20:00:10.414557 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c6d97d2-7e00-4f43-8672-c7f705170382-kube-proxy\") pod \"kube-proxy-dqpqk\" (UID: \"0c6d97d2-7e00-4f43-8672-c7f705170382\") " pod="kube-system/kube-proxy-dqpqk" Feb 12 20:00:10.414736 kubelet[2438]: I0212 20:00:10.414715 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/db57b715-35df-45bc-b92d-5e7efa5bf8c3-cni\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.414817 kubelet[2438]: I0212 20:00:10.414775 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/db57b715-35df-45bc-b92d-5e7efa5bf8c3-flannel-cfg\") pod \"kube-flannel-ds-9tbjk\" (UID: \"db57b715-35df-45bc-b92d-5e7efa5bf8c3\") " pod="kube-flannel/kube-flannel-ds-9tbjk" Feb 12 20:00:10.683409 env[1317]: time="2024-02-12T20:00:10.683287381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqpqk,Uid:0c6d97d2-7e00-4f43-8672-c7f705170382,Namespace:kube-system,Attempt:0,}" Feb 12 20:00:10.709474 env[1317]: time="2024-02-12T20:00:10.709430058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9tbjk,Uid:db57b715-35df-45bc-b92d-5e7efa5bf8c3,Namespace:kube-flannel,Attempt:0,}" Feb 12 20:00:10.721361 env[1317]: time="2024-02-12T20:00:10.721296274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:00:10.721361 env[1317]: time="2024-02-12T20:00:10.721331375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:00:10.721583 env[1317]: time="2024-02-12T20:00:10.721345175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:00:10.721654 env[1317]: time="2024-02-12T20:00:10.721567279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9 pid=2503 runtime=io.containerd.runc.v2 Feb 12 20:00:10.741357 systemd[1]: Started cri-containerd-d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9.scope. Feb 12 20:00:10.762161 env[1317]: time="2024-02-12T20:00:10.761443906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:00:10.762161 env[1317]: time="2024-02-12T20:00:10.761522807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:00:10.762161 env[1317]: time="2024-02-12T20:00:10.761550708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:00:10.762161 env[1317]: time="2024-02-12T20:00:10.761690610Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07 pid=2541 runtime=io.containerd.runc.v2 Feb 12 20:00:10.779267 env[1317]: time="2024-02-12T20:00:10.779220230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqpqk,Uid:0c6d97d2-7e00-4f43-8672-c7f705170382,Namespace:kube-system,Attempt:0,} returns sandbox id \"d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9\"" Feb 12 20:00:10.783178 env[1317]: time="2024-02-12T20:00:10.783140401Z" level=info msg="CreateContainer within sandbox \"d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:00:10.793055 systemd[1]: Started cri-containerd-f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07.scope. Feb 12 20:00:10.823123 env[1317]: time="2024-02-12T20:00:10.823073929Z" level=info msg="CreateContainer within sandbox \"d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04cd93579bf836799ec1a63fd652d8af622c8d2e6e0f668477480daa0b5e6e8a\"" Feb 12 20:00:10.823964 env[1317]: time="2024-02-12T20:00:10.823927445Z" level=info msg="StartContainer for \"04cd93579bf836799ec1a63fd652d8af622c8d2e6e0f668477480daa0b5e6e8a\"" Feb 12 20:00:10.844124 env[1317]: time="2024-02-12T20:00:10.844074312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9tbjk,Uid:db57b715-35df-45bc-b92d-5e7efa5bf8c3,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\"" Feb 12 20:00:10.848614 env[1317]: time="2024-02-12T20:00:10.848559194Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 12 20:00:10.859655 systemd[1]: Started cri-containerd-04cd93579bf836799ec1a63fd652d8af622c8d2e6e0f668477480daa0b5e6e8a.scope. Feb 12 20:00:10.900081 env[1317]: time="2024-02-12T20:00:10.900033332Z" level=info msg="StartContainer for \"04cd93579bf836799ec1a63fd652d8af622c8d2e6e0f668477480daa0b5e6e8a\" returns successfully" Feb 12 20:00:11.398281 kubelet[2438]: I0212 20:00:11.398246 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dqpqk" podStartSLOduration=1.398203333 podCreationTimestamp="2024-02-12 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:00:11.397700024 +0000 UTC m=+15.191062000" watchObservedRunningTime="2024-02-12 20:00:11.398203333 +0000 UTC m=+15.191565309" Feb 12 20:00:11.550629 systemd[1]: run-containerd-runc-k8s.io-d82c86fdad1bbb2b306f44502ecff0c0237bea863d03046d1e3840c9f53b45f9-runc.9IlAzq.mount: Deactivated successfully. Feb 12 20:00:14.553207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2526681608.mount: Deactivated successfully. 
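The "Created slice kubepods-besteffort-pod0c6d97d2_7e00_4f43_8672_c7f705170382.slice" and "kubepods-burstable-poddb57b715_35df_45bc_b92d_5e7efa5bf8c3.slice" entries above are the kubelet's systemd cgroup driver creating one transient slice per pod: the pod's QoS class picks the parent slice and the pod UID is embedded with dashes escaped to underscores, because systemd treats "-" in a unit name as a hierarchy separator. A minimal Go sketch of that mapping, covering only the burstable/besteffort cases that appear in this log (Guaranteed pods sit directly under kubepods.slice without a QoS segment):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName maps a pod's QoS class and UID to the transient systemd
    // slice name used by the kubelet's systemd cgroup driver, as seen in the
    // journal above. Dashes in the UID become underscores because systemd
    // reserves "-" as a slice hierarchy separator.
    func podSliceName(qosClass, podUID string) string {
        escaped := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        // Reproduces the two slice names created for kube-proxy-dqpqk and
        // kube-flannel-ds-9tbjk above.
        fmt.Println(podSliceName("besteffort", "0c6d97d2-7e00-4f43-8672-c7f705170382"))
        fmt.Println(podSliceName("burstable", "db57b715-35df-45bc-b92d-5e7efa5bf8c3"))
    }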
Feb 12 20:00:14.647850 env[1317]: time="2024-02-12T20:00:14.647805835Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:14.653667 env[1317]: time="2024-02-12T20:00:14.653632031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:14.657475 env[1317]: time="2024-02-12T20:00:14.657446394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:14.661471 env[1317]: time="2024-02-12T20:00:14.661437760Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:14.661947 env[1317]: time="2024-02-12T20:00:14.661914967Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 12 20:00:14.664447 env[1317]: time="2024-02-12T20:00:14.664410508Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 12 20:00:14.687434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513896646.mount: Deactivated successfully. Feb 12 20:00:14.704523 env[1317]: time="2024-02-12T20:00:14.704472069Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37\"" Feb 12 20:00:14.706236 env[1317]: time="2024-02-12T20:00:14.705275182Z" level=info msg="StartContainer for \"16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37\"" Feb 12 20:00:14.723117 systemd[1]: Started cri-containerd-16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37.scope. Feb 12 20:00:14.757415 systemd[1]: cri-containerd-16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37.scope: Deactivated successfully. 
Feb 12 20:00:14.759766 env[1317]: time="2024-02-12T20:00:14.759693079Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb57b715_35df_45bc_b92d_5e7efa5bf8c3.slice/cri-containerd-16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37.scope/memory.events\": no such file or directory" Feb 12 20:00:14.764610 env[1317]: time="2024-02-12T20:00:14.764565059Z" level=info msg="StartContainer for \"16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37\" returns successfully" Feb 12 20:00:14.902840 env[1317]: time="2024-02-12T20:00:14.902774237Z" level=info msg="shim disconnected" id=16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37 Feb 12 20:00:14.903185 env[1317]: time="2024-02-12T20:00:14.902846538Z" level=warning msg="cleaning up after shim disconnected" id=16f952ce0c4b19ab23d5dac033c37e7cd85f218eac28bc6b7c715638b09adb37 namespace=k8s.io Feb 12 20:00:14.903185 env[1317]: time="2024-02-12T20:00:14.902863139Z" level=info msg="cleaning up dead shim" Feb 12 20:00:14.911525 env[1317]: time="2024-02-12T20:00:14.911485581Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:00:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2777 runtime=io.containerd.runc.v2\n" Feb 12 20:00:15.400610 env[1317]: time="2024-02-12T20:00:15.400569381Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 12 20:00:15.463453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722063467.mount: Deactivated successfully. Feb 12 20:00:17.395925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500595854.mount: Deactivated successfully. Feb 12 20:00:18.440589 env[1317]: time="2024-02-12T20:00:18.440538300Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:18.452088 env[1317]: time="2024-02-12T20:00:18.452045372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:18.459400 env[1317]: time="2024-02-12T20:00:18.459364981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:18.464780 env[1317]: time="2024-02-12T20:00:18.464748562Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:00:18.465404 env[1317]: time="2024-02-12T20:00:18.465369571Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 12 20:00:18.468830 env[1317]: time="2024-02-12T20:00:18.468796522Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 20:00:18.501954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900335225.mount: Deactivated successfully. 
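The ImageCreate/ImageUpdate events above record containerd storing docker.io/flannel/flannel-cni-plugin:v1.1.2 under its tag, its image ID (the sha256:7a2dcab9... reference that PullImage returns), and its repo digest (the @sha256:bf4b62b1... name). A small sketch using the containerd Go client to look the image up in the "k8s.io" namespace (the namespace shown in the shim log lines) and print its manifest digest, the @sha256 form above; the socket path is containerd's conventional default, not taken from the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect over containerd's default socket; CRI-managed images live
        // in the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.GetImage(ctx, "docker.io/flannel/flannel-cni-plugin:v1.1.2")
        if err != nil {
            log.Fatal(err)
        }
        // Target() is the image's manifest descriptor; its digest is the
        // @sha256:... reference recorded by the ImageCreate events above.
        fmt.Println(img.Name(), img.Target().Digest)
    }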
Feb 12 20:00:18.517384 env[1317]: time="2024-02-12T20:00:18.517342348Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334\"" Feb 12 20:00:18.519787 env[1317]: time="2024-02-12T20:00:18.518073059Z" level=info msg="StartContainer for \"4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334\"" Feb 12 20:00:18.543257 systemd[1]: Started cri-containerd-4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334.scope. Feb 12 20:00:18.571703 systemd[1]: cri-containerd-4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334.scope: Deactivated successfully. Feb 12 20:00:18.579178 kubelet[2438]: I0212 20:00:18.579149 2438 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:00:18.585244 env[1317]: time="2024-02-12T20:00:18.585202462Z" level=info msg="StartContainer for \"4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334\" returns successfully" Feb 12 20:00:18.629985 kubelet[2438]: I0212 20:00:18.629941 2438 topology_manager.go:215] "Topology Admit Handler" podUID="c710647f-d7ce-4979-a8f4-c73a945bac31" podNamespace="kube-system" podName="coredns-5dd5756b68-drdp2" Feb 12 20:00:18.751744 kubelet[2438]: I0212 20:00:18.640448 2438 topology_manager.go:215] "Topology Admit Handler" podUID="a0430cb4-4042-461b-9474-fcc60976276d" podNamespace="kube-system" podName="coredns-5dd5756b68-g6xww" Feb 12 20:00:18.635875 systemd[1]: Created slice kubepods-burstable-podc710647f_d7ce_4979_a8f4_c73a945bac31.slice. Feb 12 20:00:18.645962 systemd[1]: Created slice kubepods-burstable-poda0430cb4_4042_461b_9474_fcc60976276d.slice. 
Feb 12 20:00:18.767753 kubelet[2438]: I0212 20:00:18.767716 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c710647f-d7ce-4979-a8f4-c73a945bac31-config-volume\") pod \"coredns-5dd5756b68-drdp2\" (UID: \"c710647f-d7ce-4979-a8f4-c73a945bac31\") " pod="kube-system/coredns-5dd5756b68-drdp2" Feb 12 20:00:18.767961 kubelet[2438]: I0212 20:00:18.767942 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0430cb4-4042-461b-9474-fcc60976276d-config-volume\") pod \"coredns-5dd5756b68-g6xww\" (UID: \"a0430cb4-4042-461b-9474-fcc60976276d\") " pod="kube-system/coredns-5dd5756b68-g6xww" Feb 12 20:00:18.768065 kubelet[2438]: I0212 20:00:18.767985 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnsw\" (UniqueName: \"kubernetes.io/projected/a0430cb4-4042-461b-9474-fcc60976276d-kube-api-access-5qnsw\") pod \"coredns-5dd5756b68-g6xww\" (UID: \"a0430cb4-4042-461b-9474-fcc60976276d\") " pod="kube-system/coredns-5dd5756b68-g6xww" Feb 12 20:00:18.768065 kubelet[2438]: I0212 20:00:18.768032 2438 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdff\" (UniqueName: \"kubernetes.io/projected/c710647f-d7ce-4979-a8f4-c73a945bac31-kube-api-access-gwdff\") pod \"coredns-5dd5756b68-drdp2\" (UID: \"c710647f-d7ce-4979-a8f4-c73a945bac31\") " pod="kube-system/coredns-5dd5756b68-drdp2" Feb 12 20:00:19.058935 env[1317]: time="2024-02-12T20:00:19.057037892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-drdp2,Uid:c710647f-d7ce-4979-a8f4-c73a945bac31,Namespace:kube-system,Attempt:0,}" Feb 12 20:00:19.058935 env[1317]: time="2024-02-12T20:00:19.057038092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g6xww,Uid:a0430cb4-4042-461b-9474-fcc60976276d,Namespace:kube-system,Attempt:0,}" Feb 12 20:00:19.164542 env[1317]: time="2024-02-12T20:00:19.164481759Z" level=info msg="shim disconnected" id=4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334 Feb 12 20:00:19.164542 env[1317]: time="2024-02-12T20:00:19.164540260Z" level=warning msg="cleaning up after shim disconnected" id=4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334 namespace=k8s.io Feb 12 20:00:19.164542 env[1317]: time="2024-02-12T20:00:19.164552061Z" level=info msg="cleaning up dead shim" Feb 12 20:00:19.173322 env[1317]: time="2024-02-12T20:00:19.173286388Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:00:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2835 runtime=io.containerd.runc.v2\n" Feb 12 20:00:19.218729 env[1317]: time="2024-02-12T20:00:19.218670350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g6xww,Uid:a0430cb4-4042-461b-9474-fcc60976276d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05fdc4f2e0d166c08b49362c46e75909702cec4dde90f5b84f1b2d584a58dbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 20:00:19.219239 kubelet[2438]: E0212 20:00:19.219209 2438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"05fdc4f2e0d166c08b49362c46e75909702cec4dde90f5b84f1b2d584a58dbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 20:00:19.219360 kubelet[2438]: E0212 20:00:19.219278 2438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdc4f2e0d166c08b49362c46e75909702cec4dde90f5b84f1b2d584a58dbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-g6xww" Feb 12 20:00:19.219360 kubelet[2438]: E0212 20:00:19.219333 2438 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05fdc4f2e0d166c08b49362c46e75909702cec4dde90f5b84f1b2d584a58dbf7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-g6xww" Feb 12 20:00:19.219457 kubelet[2438]: E0212 20:00:19.219425 2438 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-g6xww_kube-system(a0430cb4-4042-461b-9474-fcc60976276d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-g6xww_kube-system(a0430cb4-4042-461b-9474-fcc60976276d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05fdc4f2e0d166c08b49362c46e75909702cec4dde90f5b84f1b2d584a58dbf7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-g6xww" podUID="a0430cb4-4042-461b-9474-fcc60976276d" Feb 12 20:00:19.225790 env[1317]: time="2024-02-12T20:00:19.225734753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-drdp2,Uid:c710647f-d7ce-4979-a8f4-c73a945bac31,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d554efe8e837cfe798a331d6f7c9fa34cbf7322e2e813d6f2a2380e33cf6013d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 20:00:19.226118 kubelet[2438]: E0212 20:00:19.226096 2438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554efe8e837cfe798a331d6f7c9fa34cbf7322e2e813d6f2a2380e33cf6013d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 12 20:00:19.226231 kubelet[2438]: E0212 20:00:19.226145 2438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554efe8e837cfe798a331d6f7c9fa34cbf7322e2e813d6f2a2380e33cf6013d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-drdp2" Feb 12 20:00:19.226231 kubelet[2438]: E0212 20:00:19.226170 2438 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d554efe8e837cfe798a331d6f7c9fa34cbf7322e2e813d6f2a2380e33cf6013d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-5dd5756b68-drdp2" Feb 12 20:00:19.226327 kubelet[2438]: E0212 20:00:19.226236 2438 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-drdp2_kube-system(c710647f-d7ce-4979-a8f4-c73a945bac31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-drdp2_kube-system(c710647f-d7ce-4979-a8f4-c73a945bac31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d554efe8e837cfe798a331d6f7c9fa34cbf7322e2e813d6f2a2380e33cf6013d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-drdp2" podUID="c710647f-d7ce-4979-a8f4-c73a945bac31" Feb 12 20:00:19.410205 env[1317]: time="2024-02-12T20:00:19.410157843Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 12 20:00:19.448045 env[1317]: time="2024-02-12T20:00:19.447984395Z" level=info msg="CreateContainer within sandbox \"f563c87bb0958f13706938c1fa5ddd4b0ea628f5453093c93990c1e8bbfb8a07\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e71cb711fd017acd957cd04b553bed5761b84b4aa55a210c7c265919abbf8091\"" Feb 12 20:00:19.450442 env[1317]: time="2024-02-12T20:00:19.448715206Z" level=info msg="StartContainer for \"e71cb711fd017acd957cd04b553bed5761b84b4aa55a210c7c265919abbf8091\"" Feb 12 20:00:19.464730 systemd[1]: Started cri-containerd-e71cb711fd017acd957cd04b553bed5761b84b4aa55a210c7c265919abbf8091.scope. Feb 12 20:00:19.513109 env[1317]: time="2024-02-12T20:00:19.506376047Z" level=info msg="StartContainer for \"e71cb711fd017acd957cd04b553bed5761b84b4aa55a210c7c265919abbf8091\" returns successfully" Feb 12 20:00:19.509392 systemd[1]: run-containerd-runc-k8s.io-4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334-runc.BsJwSt.mount: Deactivated successfully. Feb 12 20:00:19.509533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c3eaf458d4c91b9db73761a139bd9dd867bbff045a6b738b1d9b84ed3313334-rootfs.mount: Deactivated successfully. 
Feb 12 20:00:20.710043 systemd-networkd[1469]: flannel.1: Link UP Feb 12 20:00:20.710055 systemd-networkd[1469]: flannel.1: Gained carrier Feb 12 20:00:22.045254 systemd-networkd[1469]: flannel.1: Gained IPv6LL Feb 12 20:00:31.339682 env[1317]: time="2024-02-12T20:00:31.339618362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g6xww,Uid:a0430cb4-4042-461b-9474-fcc60976276d,Namespace:kube-system,Attempt:0,}" Feb 12 20:00:31.383034 systemd-networkd[1469]: cni0: Link UP Feb 12 20:00:31.383043 systemd-networkd[1469]: cni0: Gained carrier Feb 12 20:00:31.386848 systemd-networkd[1469]: cni0: Lost carrier Feb 12 20:00:31.401316 systemd-networkd[1469]: vethb8c652bc: Link UP Feb 12 20:00:31.408796 kernel: cni0: port 1(vethb8c652bc) entered blocking state Feb 12 20:00:31.408896 kernel: cni0: port 1(vethb8c652bc) entered disabled state Feb 12 20:00:31.410903 kernel: device vethb8c652bc entered promiscuous mode Feb 12 20:00:31.420116 kernel: cni0: port 1(vethb8c652bc) entered blocking state Feb 12 20:00:31.420176 kernel: cni0: port 1(vethb8c652bc) entered forwarding state Feb 12 20:00:31.420203 kernel: cni0: port 1(vethb8c652bc) entered disabled state Feb 12 20:00:31.430203 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb8c652bc: link becomes ready Feb 12 20:00:31.430267 kernel: cni0: port 1(vethb8c652bc) entered blocking state Feb 12 20:00:31.430298 kernel: cni0: port 1(vethb8c652bc) entered forwarding state Feb 12 20:00:31.432794 systemd-networkd[1469]: vethb8c652bc: Gained carrier Feb 12 20:00:31.433080 systemd-networkd[1469]: cni0: Gained carrier Feb 12 20:00:31.435167 env[1317]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 12 20:00:31.435167 env[1317]: delegateAdd: netconf sent to delegate plugin: Feb 12 20:00:31.449449 env[1317]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T20:00:31.449394480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:00:31.449631 env[1317]: time="2024-02-12T20:00:31.449432280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:00:31.449631 env[1317]: time="2024-02-12T20:00:31.449445781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:00:31.449772 env[1317]: time="2024-02-12T20:00:31.449619583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713 pid=3095 runtime=io.containerd.runc.v2 Feb 12 20:00:31.466909 systemd[1]: run-containerd-runc-k8s.io-fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713-runc.SSMfl4.mount: Deactivated successfully. 
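The map[string]interface{} dump and the JSON that follows it are the same object printed twice: once a subnet.env exists, the flannel CNI plugin builds a delegate configuration for the bridge plugin (network "cbr0" on the cni0 bridge) with host-local IPAM handing out addresses from this node's 192.168.0.0/24 range and a route to the wider 192.168.0.0/17 flannel network; the net.IPMask{0xff, 0xff, 0x80, 0x0} in the Go dump is exactly a /17. A small Go sketch, standard library only, that unmarshals the JSON printed in the log and recovers those pieces:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // The delegate netconf exactly as logged by the flannel CNI plugin above.
    const netconf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,
    "ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],
    "routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},
    "isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

    type ipamConf struct {
        Type   string                `json:"type"`
        Ranges [][]map[string]string `json:"ranges"`
        Routes []struct {
            Dst string `json:"dst"`
        } `json:"routes"`
    }

    type bridgeConf struct {
        Name string   `json:"name"`
        Type string   `json:"type"`
        MTU  int      `json:"mtu"`
        IPAM ipamConf `json:"ipam"`
    }

    func main() {
        var c bridgeConf
        if err := json.Unmarshal([]byte(netconf), &c); err != nil {
            log.Fatal(err)
        }
        // Prints: bridge cbr0 mtu 1450, node subnet 192.168.0.0/24, cluster route 192.168.0.0/17
        fmt.Printf("bridge %s mtu %d, node subnet %s, cluster route %s\n",
            c.Name, c.MTU, c.IPAM.Ranges[0][0]["subnet"], c.IPAM.Routes[0].Dst)
    }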
Feb 12 20:00:31.471620 systemd[1]: Started cri-containerd-fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713.scope. Feb 12 20:00:31.523651 env[1317]: time="2024-02-12T20:00:31.523608704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-g6xww,Uid:a0430cb4-4042-461b-9474-fcc60976276d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713\"" Feb 12 20:00:31.528109 env[1317]: time="2024-02-12T20:00:31.528073553Z" level=info msg="CreateContainer within sandbox \"fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:00:31.556383 env[1317]: time="2024-02-12T20:00:31.556344167Z" level=info msg="CreateContainer within sandbox \"fc94a85e34b53209a4c636a4aaec7853170b6c37500b15acb22f8bd22b137713\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"accafa77bae0ec36e91d019cc223acbba6be3d6a220477f4d4c2f84f86acf1ea\"" Feb 12 20:00:31.558719 env[1317]: time="2024-02-12T20:00:31.556946573Z" level=info msg="StartContainer for \"accafa77bae0ec36e91d019cc223acbba6be3d6a220477f4d4c2f84f86acf1ea\"" Feb 12 20:00:31.572718 systemd[1]: Started cri-containerd-accafa77bae0ec36e91d019cc223acbba6be3d6a220477f4d4c2f84f86acf1ea.scope. Feb 12 20:00:31.601107 env[1317]: time="2024-02-12T20:00:31.600986662Z" level=info msg="StartContainer for \"accafa77bae0ec36e91d019cc223acbba6be3d6a220477f4d4c2f84f86acf1ea\" returns successfully" Feb 12 20:00:32.450495 kubelet[2438]: I0212 20:00:32.449617 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9tbjk" podStartSLOduration=14.829416936 podCreationTimestamp="2024-02-12 20:00:10 +0000 UTC" firstStartedPulling="2024-02-12 20:00:10.84561054 +0000 UTC m=+14.638972616" lastFinishedPulling="2024-02-12 20:00:18.465769677 +0000 UTC m=+22.259131653" observedRunningTime="2024-02-12 20:00:20.42705113 +0000 UTC m=+24.220413106" watchObservedRunningTime="2024-02-12 20:00:32.449575973 +0000 UTC m=+36.242937949" Feb 12 20:00:32.450495 kubelet[2438]: I0212 20:00:32.449799 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-g6xww" podStartSLOduration=22.449774875 podCreationTimestamp="2024-02-12 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:00:32.449411371 +0000 UTC m=+36.242773347" watchObservedRunningTime="2024-02-12 20:00:32.449774875 +0000 UTC m=+36.243136851" Feb 12 20:00:32.605174 systemd-networkd[1469]: cni0: Gained IPv6LL Feb 12 20:00:33.245190 systemd-networkd[1469]: vethb8c652bc: Gained IPv6LL Feb 12 20:00:33.340582 env[1317]: time="2024-02-12T20:00:33.340529073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-drdp2,Uid:c710647f-d7ce-4979-a8f4-c73a945bac31,Namespace:kube-system,Attempt:0,}" Feb 12 20:00:33.400475 systemd-networkd[1469]: vethe588e147: Link UP Feb 12 20:00:33.407716 kernel: cni0: port 2(vethe588e147) entered blocking state Feb 12 20:00:33.407809 kernel: cni0: port 2(vethe588e147) entered disabled state Feb 12 20:00:33.410727 kernel: device vethe588e147 entered promiscuous mode Feb 12 20:00:33.419779 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:00:33.419857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe588e147: link becomes ready Feb 12 20:00:33.419891 kernel: cni0: port 2(vethe588e147) 
entered blocking state Feb 12 20:00:33.425184 kernel: cni0: port 2(vethe588e147) entered forwarding state Feb 12 20:00:33.425320 systemd-networkd[1469]: vethe588e147: Gained carrier Feb 12 20:00:33.427276 env[1317]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106628), "name":"cbr0", "type":"bridge"} Feb 12 20:00:33.427276 env[1317]: delegateAdd: netconf sent to delegate plugin: Feb 12 20:00:33.441077 env[1317]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-12T20:00:33.440900340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:00:33.441077 env[1317]: time="2024-02-12T20:00:33.440936741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:00:33.441077 env[1317]: time="2024-02-12T20:00:33.440951241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:00:33.442503 env[1317]: time="2024-02-12T20:00:33.441473046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e pid=3204 runtime=io.containerd.runc.v2 Feb 12 20:00:33.463581 systemd[1]: run-containerd-runc-k8s.io-d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e-runc.yCCzel.mount: Deactivated successfully. Feb 12 20:00:33.466744 systemd[1]: Started cri-containerd-d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e.scope. Feb 12 20:00:33.511130 env[1317]: time="2024-02-12T20:00:33.510322078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-drdp2,Uid:c710647f-d7ce-4979-a8f4-c73a945bac31,Namespace:kube-system,Attempt:0,} returns sandbox id \"d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e\"" Feb 12 20:00:33.515203 env[1317]: time="2024-02-12T20:00:33.515175130Z" level=info msg="CreateContainer within sandbox \"d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:00:33.546299 env[1317]: time="2024-02-12T20:00:33.546239260Z" level=info msg="CreateContainer within sandbox \"d83c79db5ebd2f8520b949fbf2635e959837157bd5a8b936db27a7794f1a5e9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c52a0352a95e73f74a7dda2fedb7b3055b27156840c3691108bbfb84c375f29\"" Feb 12 20:00:33.548490 env[1317]: time="2024-02-12T20:00:33.546826566Z" level=info msg="StartContainer for \"6c52a0352a95e73f74a7dda2fedb7b3055b27156840c3691108bbfb84c375f29\"" Feb 12 20:00:33.562642 systemd[1]: Started cri-containerd-6c52a0352a95e73f74a7dda2fedb7b3055b27156840c3691108bbfb84c375f29.scope. 
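Each sandbox that comes up on cni0 (vethb8c652bc for coredns-5dd5756b68-g6xww, vethe588e147 for coredns-5dd5756b68-drdp2) gets its address from the host-local IPAM plugin named in the delegate config above. host-local persists its allocations as one file per IP under a per-network directory; the default location is /var/lib/cni/networks/<name>, here "cbr0" from the delegate config, and that path is an assumption about this host rather than something the log states. A sketch that lists those allocations:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Default host-local data dir plus the network name "cbr0" from the
        // delegate config logged above; both are assumptions about this host.
        dir := "/var/lib/cni/networks/cbr0"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            // host-local names each allocation file after the IP it handed
            // out; the file body holds the owning container/sandbox ID.
            data, err := os.ReadFile(filepath.Join(dir, e.Name()))
            if err != nil {
                continue
            }
            fmt.Printf("%s -> %s\n", e.Name(), strings.TrimSpace(string(data)))
        }
    }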
Feb 12 20:00:33.594597 env[1317]: time="2024-02-12T20:00:33.594509574Z" level=info msg="StartContainer for \"6c52a0352a95e73f74a7dda2fedb7b3055b27156840c3691108bbfb84c375f29\" returns successfully" Feb 12 20:00:34.546828 kubelet[2438]: I0212 20:00:34.546761 2438 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-drdp2" podStartSLOduration=24.546707678 podCreationTimestamp="2024-02-12 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:00:34.455385827 +0000 UTC m=+38.248747903" watchObservedRunningTime="2024-02-12 20:00:34.546707678 +0000 UTC m=+38.340069754" Feb 12 20:00:35.165321 systemd-networkd[1469]: vethe588e147: Gained IPv6LL Feb 12 20:02:24.147718 systemd[1]: Started sshd@5-10.200.8.16:22-10.200.12.6:59476.service. Feb 12 20:02:24.761762 sshd[3756]: Accepted publickey for core from 10.200.12.6 port 59476 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:24.763475 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:24.768686 systemd[1]: Started session-8.scope. Feb 12 20:02:24.769156 systemd-logind[1304]: New session 8 of user core. Feb 12 20:02:25.267781 sshd[3756]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:25.270910 systemd[1]: sshd@5-10.200.8.16:22-10.200.12.6:59476.service: Deactivated successfully. Feb 12 20:02:25.271852 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:02:25.272379 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:02:25.273234 systemd-logind[1304]: Removed session 8. Feb 12 20:02:30.374537 systemd[1]: Started sshd@6-10.200.8.16:22-10.200.12.6:47914.service. Feb 12 20:02:30.998130 sshd[3789]: Accepted publickey for core from 10.200.12.6 port 47914 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:30.999761 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:31.005391 systemd[1]: Started session-9.scope. Feb 12 20:02:31.005845 systemd-logind[1304]: New session 9 of user core. Feb 12 20:02:31.510818 sshd[3789]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:31.513920 systemd[1]: sshd@6-10.200.8.16:22-10.200.12.6:47914.service: Deactivated successfully. Feb 12 20:02:31.514874 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:02:31.515571 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:02:31.516374 systemd-logind[1304]: Removed session 9. Feb 12 20:02:36.615138 systemd[1]: Started sshd@7-10.200.8.16:22-10.200.12.6:47916.service. Feb 12 20:02:37.229603 sshd[3844]: Accepted publickey for core from 10.200.12.6 port 47916 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:37.230960 sshd[3844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:37.235910 systemd-logind[1304]: New session 10 of user core. Feb 12 20:02:37.236483 systemd[1]: Started session-10.scope. Feb 12 20:02:37.736510 sshd[3844]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:37.739857 systemd[1]: sshd@7-10.200.8.16:22-10.200.12.6:47916.service: Deactivated successfully. Feb 12 20:02:37.740896 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:02:37.741738 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit. 
Feb 12 20:02:37.742569 systemd-logind[1304]: Removed session 10. Feb 12 20:02:37.842942 systemd[1]: Started sshd@8-10.200.8.16:22-10.200.12.6:43768.service. Feb 12 20:02:38.461598 sshd[3856]: Accepted publickey for core from 10.200.12.6 port 43768 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:38.463192 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:38.468084 systemd[1]: Started session-11.scope. Feb 12 20:02:38.468676 systemd-logind[1304]: New session 11 of user core. Feb 12 20:02:39.058220 sshd[3856]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:39.061254 systemd[1]: sshd@8-10.200.8.16:22-10.200.12.6:43768.service: Deactivated successfully. Feb 12 20:02:39.062513 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:02:39.062621 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:02:39.063749 systemd-logind[1304]: Removed session 11. Feb 12 20:02:39.161926 systemd[1]: Started sshd@9-10.200.8.16:22-10.200.12.6:43774.service. Feb 12 20:02:39.782146 sshd[3866]: Accepted publickey for core from 10.200.12.6 port 43774 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:39.783701 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:39.788584 systemd-logind[1304]: New session 12 of user core. Feb 12 20:02:39.789513 systemd[1]: Started session-12.scope. Feb 12 20:02:40.273158 sshd[3866]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:40.276369 systemd[1]: sshd@9-10.200.8.16:22-10.200.12.6:43774.service: Deactivated successfully. Feb 12 20:02:40.277531 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:02:40.278444 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:02:40.279508 systemd-logind[1304]: Removed session 12. Feb 12 20:02:45.388469 systemd[1]: Started sshd@10-10.200.8.16:22-10.200.12.6:43788.service. Feb 12 20:02:46.013052 sshd[3900]: Accepted publickey for core from 10.200.12.6 port 43788 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:46.014545 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:46.019455 systemd[1]: Started session-13.scope. Feb 12 20:02:46.019906 systemd-logind[1304]: New session 13 of user core. Feb 12 20:02:46.502598 sshd[3900]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:46.505622 systemd[1]: sshd@10-10.200.8.16:22-10.200.12.6:43788.service: Deactivated successfully. Feb 12 20:02:46.506626 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:02:46.507372 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:02:46.508229 systemd-logind[1304]: Removed session 13. Feb 12 20:02:46.606789 systemd[1]: Started sshd@11-10.200.8.16:22-10.200.12.6:43794.service. Feb 12 20:02:47.224262 sshd[3932]: Accepted publickey for core from 10.200.12.6 port 43794 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:47.225881 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:47.230904 systemd-logind[1304]: New session 14 of user core. Feb 12 20:02:47.231418 systemd[1]: Started session-14.scope. Feb 12 20:02:47.841524 sshd[3932]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:47.844749 systemd[1]: sshd@11-10.200.8.16:22-10.200.12.6:43794.service: Deactivated successfully. 
Feb 12 20:02:47.846190 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:02:47.846288 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:02:47.847726 systemd-logind[1304]: Removed session 14. Feb 12 20:02:47.946239 systemd[1]: Started sshd@12-10.200.8.16:22-10.200.12.6:56770.service. Feb 12 20:02:48.563485 sshd[3942]: Accepted publickey for core from 10.200.12.6 port 56770 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:48.564986 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:48.570709 systemd[1]: Started session-15.scope. Feb 12 20:02:48.571447 systemd-logind[1304]: New session 15 of user core. Feb 12 20:02:51.195595 sshd[3942]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:51.198941 systemd[1]: sshd@12-10.200.8.16:22-10.200.12.6:56770.service: Deactivated successfully. Feb 12 20:02:51.199945 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:02:51.200753 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:02:51.201607 systemd-logind[1304]: Removed session 15. Feb 12 20:02:51.302093 systemd[1]: Started sshd@13-10.200.8.16:22-10.200.12.6:56786.service. Feb 12 20:02:51.924456 sshd[3967]: Accepted publickey for core from 10.200.12.6 port 56786 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:51.925932 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:51.931177 systemd-logind[1304]: New session 16 of user core. Feb 12 20:02:51.931655 systemd[1]: Started session-16.scope. Feb 12 20:02:52.591686 sshd[3967]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:52.595265 systemd[1]: sshd@13-10.200.8.16:22-10.200.12.6:56786.service: Deactivated successfully. Feb 12 20:02:52.596370 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:02:52.597315 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:02:52.598446 systemd-logind[1304]: Removed session 16. Feb 12 20:02:52.695358 systemd[1]: Started sshd@14-10.200.8.16:22-10.200.12.6:56788.service. Feb 12 20:02:53.325349 sshd[3990]: Accepted publickey for core from 10.200.12.6 port 56788 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:53.326963 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:53.332948 systemd[1]: Started session-17.scope. Feb 12 20:02:53.334232 systemd-logind[1304]: New session 17 of user core. Feb 12 20:02:53.811655 sshd[3990]: pam_unix(sshd:session): session closed for user core Feb 12 20:02:53.815516 systemd[1]: sshd@14-10.200.8.16:22-10.200.12.6:56788.service: Deactivated successfully. Feb 12 20:02:53.816384 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:02:53.816915 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:02:53.817732 systemd-logind[1304]: Removed session 17. Feb 12 20:02:58.916931 systemd[1]: Started sshd@15-10.200.8.16:22-10.200.12.6:57432.service. Feb 12 20:02:59.533407 sshd[4028]: Accepted publickey for core from 10.200.12.6 port 57432 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:02:59.535012 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:02:59.540359 systemd[1]: Started session-18.scope. Feb 12 20:02:59.540944 systemd-logind[1304]: New session 18 of user core. 
Feb 12 20:03:00.022459 sshd[4028]: pam_unix(sshd:session): session closed for user core Feb 12 20:03:00.025674 systemd[1]: sshd@15-10.200.8.16:22-10.200.12.6:57432.service: Deactivated successfully. Feb 12 20:03:00.026722 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:03:00.027454 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:03:00.028244 systemd-logind[1304]: Removed session 18. Feb 12 20:03:05.125986 systemd[1]: Started sshd@16-10.200.8.16:22-10.200.12.6:57446.service. Feb 12 20:03:05.739165 sshd[4061]: Accepted publickey for core from 10.200.12.6 port 57446 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:03:05.740827 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:03:05.745381 systemd-logind[1304]: New session 19 of user core. Feb 12 20:03:05.747350 systemd[1]: Started session-19.scope. Feb 12 20:03:06.229887 sshd[4061]: pam_unix(sshd:session): session closed for user core Feb 12 20:03:06.233229 systemd[1]: sshd@16-10.200.8.16:22-10.200.12.6:57446.service: Deactivated successfully. Feb 12 20:03:06.234336 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:03:06.235045 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:03:06.235802 systemd-logind[1304]: Removed session 19. Feb 12 20:03:11.339428 systemd[1]: Started sshd@17-10.200.8.16:22-10.200.12.6:49402.service. Feb 12 20:03:11.961462 sshd[4103]: Accepted publickey for core from 10.200.12.6 port 49402 ssh2: RSA SHA256:O9yTG6PKtgxWL/0m3BGiwi35nSo8w6cK1RNins02K7A Feb 12 20:03:11.963139 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:03:11.968938 systemd[1]: Started session-20.scope. Feb 12 20:03:11.970047 systemd-logind[1304]: New session 20 of user core. Feb 12 20:03:12.453399 sshd[4103]: pam_unix(sshd:session): session closed for user core Feb 12 20:03:12.456917 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:03:12.457200 systemd[1]: sshd@17-10.200.8.16:22-10.200.12.6:49402.service: Deactivated successfully. Feb 12 20:03:12.458197 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:03:12.459076 systemd-logind[1304]: Removed session 20. Feb 12 20:03:29.186525 systemd[1]: cri-containerd-45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e.scope: Deactivated successfully. Feb 12 20:03:29.186888 systemd[1]: cri-containerd-45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e.scope: Consumed 3.231s CPU time. Feb 12 20:03:29.207238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e-rootfs.mount: Deactivated successfully. 
Feb 12 20:03:29.251268 env[1317]: time="2024-02-12T20:03:29.251203850Z" level=info msg="shim disconnected" id=45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e Feb 12 20:03:29.251268 env[1317]: time="2024-02-12T20:03:29.251266351Z" level=warning msg="cleaning up after shim disconnected" id=45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e namespace=k8s.io Feb 12 20:03:29.251794 env[1317]: time="2024-02-12T20:03:29.251279351Z" level=info msg="cleaning up dead shim" Feb 12 20:03:29.258909 env[1317]: time="2024-02-12T20:03:29.258870327Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:03:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4205 runtime=io.containerd.runc.v2\n" Feb 12 20:03:29.696192 kubelet[2438]: E0212 20:03:29.695817 2438 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.16:41636->10.200.8.12:2379: read: connection timed out" Feb 12 20:03:29.697074 systemd[1]: cri-containerd-76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805.scope: Deactivated successfully. Feb 12 20:03:29.697449 systemd[1]: cri-containerd-76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805.scope: Consumed 1.269s CPU time. Feb 12 20:03:29.720082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805-rootfs.mount: Deactivated successfully. Feb 12 20:03:29.737266 env[1317]: time="2024-02-12T20:03:29.737217645Z" level=info msg="shim disconnected" id=76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805 Feb 12 20:03:29.737469 env[1317]: time="2024-02-12T20:03:29.737287746Z" level=warning msg="cleaning up after shim disconnected" id=76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805 namespace=k8s.io Feb 12 20:03:29.737469 env[1317]: time="2024-02-12T20:03:29.737302646Z" level=info msg="cleaning up dead shim" Feb 12 20:03:29.744728 env[1317]: time="2024-02-12T20:03:29.744692620Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:03:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4231 runtime=io.containerd.runc.v2\n" Feb 12 20:03:29.785850 kubelet[2438]: I0212 20:03:29.785820 2438 scope.go:117] "RemoveContainer" containerID="76abd55fbc24b9d4f62a764a6d71774a823c6c93eb51e1bea590e1f8943d0805" Feb 12 20:03:29.787970 kubelet[2438]: I0212 20:03:29.787937 2438 scope.go:117] "RemoveContainer" containerID="45c256e93f58d20d0f8d95629fbee6d5ac9b1c719d8686326ce252fac11a403e" Feb 12 20:03:29.789217 env[1317]: time="2024-02-12T20:03:29.789162068Z" level=info msg="CreateContainer within sandbox \"2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 20:03:29.790815 env[1317]: time="2024-02-12T20:03:29.790778485Z" level=info msg="CreateContainer within sandbox \"8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 20:03:29.829917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265085668.mount: Deactivated successfully. 
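The sequence above is the kubelet restarting two static control-plane containers in place: the kube-scheduler and kube-controller-manager containers exit (their scopes are deactivated after consuming CPU time), scope.go removes the dead containers, and CreateContainer is issued again in the same sandboxes with Attempt:1. That attempt counter is visible through the CRI; a sketch that lists containers over containerd's CRI socket and prints name, attempt and state, where the socket path is the conventional one rather than something stated in the log:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd serves the CRI on its main socket; adjust if this host differs.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            // A container restarted in place, such as kube-scheduler here,
            // shows Attempt > 0.
            fmt.Printf("%-28s attempt=%d state=%s\n",
                c.Metadata.Name, c.Metadata.Attempt, c.State)
        }
    }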
Feb 12 20:03:29.852819 env[1317]: time="2024-02-12T20:03:29.852774609Z" level=info msg="CreateContainer within sandbox \"2b4c1e15a78bea8c8a288e6d7e13b3eef3f14c8c03c45fd5bb5a2964055f6e49\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f14edca0fdd5cb1e87ba428c01ab8cb2cf939972c33eea750ba898f31f0f227a\"" Feb 12 20:03:29.853385 env[1317]: time="2024-02-12T20:03:29.853357315Z" level=info msg="StartContainer for \"f14edca0fdd5cb1e87ba428c01ab8cb2cf939972c33eea750ba898f31f0f227a\"" Feb 12 20:03:29.857091 env[1317]: time="2024-02-12T20:03:29.857052552Z" level=info msg="CreateContainer within sandbox \"8741562b5381c208196da21de8202d81ffcd5f0b8a9a2de4a831f62383f8934d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"81f399b626c66a22e289396440d8b5a7fb846708fc8492c81b137aa6d1c6028b\"" Feb 12 20:03:29.857758 env[1317]: time="2024-02-12T20:03:29.857723359Z" level=info msg="StartContainer for \"81f399b626c66a22e289396440d8b5a7fb846708fc8492c81b137aa6d1c6028b\"" Feb 12 20:03:29.878296 systemd[1]: Started cri-containerd-f14edca0fdd5cb1e87ba428c01ab8cb2cf939972c33eea750ba898f31f0f227a.scope. Feb 12 20:03:29.888957 systemd[1]: Started cri-containerd-81f399b626c66a22e289396440d8b5a7fb846708fc8492c81b137aa6d1c6028b.scope. Feb 12 20:03:29.956784 env[1317]: time="2024-02-12T20:03:29.954168630Z" level=info msg="StartContainer for \"f14edca0fdd5cb1e87ba428c01ab8cb2cf939972c33eea750ba898f31f0f227a\" returns successfully" Feb 12 20:03:29.963742 env[1317]: time="2024-02-12T20:03:29.963700826Z" level=info msg="StartContainer for \"81f399b626c66a22e289396440d8b5a7fb846708fc8492c81b137aa6d1c6028b\" returns successfully" Feb 12 20:03:30.209439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161149951.mount: Deactivated successfully. Feb 12 20:03:31.934204 kubelet[2438]: E0212 20:03:31.934077 2438 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-d5221102be.17b336218fe51d24", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-d5221102be", UID:"53f7b1bb9ec480e6427a8496a2b10fda", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d5221102be"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 3, 21, 479527716, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 3, 21, 479527716, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-d5221102be"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.16:41448->10.200.8.12:2379: read: connection timed out' (will not retry!) 
Feb 12 20:03:39.697091 kubelet[2438]: E0212 20:03:39.697040 2438 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:03:49.698242 kubelet[2438]: E0212 20:03:49.698189 2438 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:03:59.698866 kubelet[2438]: E0212 20:03:59.698821 2438 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d5221102be?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 20:04:05.937627 kubelet[2438]: E0212 20:04:05.937510 2438 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-d5221102be.17b336218fe51d24", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-d5221102be", UID:"53f7b1bb9ec480e6427a8496a2b10fda", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d5221102be"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 3, 21, 479527716, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 3, 25, 486387577, time.Local), Count:2, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-d5221102be"}': 'Timeout: request did not complete within requested timeout - context deadline exceeded' (will not retry!)
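The repeated "Failed to update lease" errors and the rejected Events are the kubelet timing out against the API server, which is itself timing out reading from etcd at 10.200.8.12:2379, rather than a fault in the kubelet. The object it cannot refresh is the node's Lease in the kube-node-lease namespace, named after the node (ci-3510.3.2-a-d5221102be in the failing PUT above). A minimal client-go sketch of the same renewal, run from a machine with API access; the kubeconfig path is an assumption, the namespace and lease name come from the log:

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a local kubeconfig (path is an assumption).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        ctx := context.Background()
        leases := cs.CoordinationV1().Leases("kube-node-lease")

        // The node lease shares its name with the node, as in the failing PUT above.
        lease, err := leases.Get(ctx, "ci-3510.3.2-a-d5221102be", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // Renewal is just bumping spec.renewTime and writing the object back;
        // this is the update the kubelet's lease controller could not complete
        // within its 10s timeout.
        now := metav1.NewMicroTime(time.Now())
        lease.Spec.RenewTime = &now
        if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Printf("renewed lease %s at %s", lease.Name, now.Time)
    }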