Dec 13 01:46:17.040250 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:46:17.040274 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:46:17.040284 kernel: BIOS-provided physical RAM map:
Dec 13 01:46:17.040292 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:46:17.040297 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 01:46:17.040303 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 01:46:17.040314 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Dec 13 01:46:17.040322 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 01:46:17.040329 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 01:46:17.040334 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 01:46:17.040342 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 01:46:17.040349 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 01:46:17.040356 kernel: NX (Execute Disable) protection: active
Dec 13 01:46:17.040363 kernel: efi: EFI v2.70 by Microsoft
Dec 13 01:46:17.040373 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Dec 13 01:46:17.040382 kernel: random: crng init done
Dec 13 01:46:17.040389 kernel: SMBIOS 3.1.0 present.
Dec 13 01:46:17.040398 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 01:46:17.040404 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 01:46:17.040412 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 01:46:17.040420 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Dec 13 01:46:17.040428 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 01:46:17.040437 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 01:46:17.040443 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 01:46:17.040453 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:46:17.040460 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 01:46:17.040469 kernel: tsc: Detected 2593.909 MHz processor
Dec 13 01:46:17.040476 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:46:17.040484 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:46:17.040492 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 01:46:17.040500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:46:17.040508 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 01:46:17.040517 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 01:46:17.040526 kernel: Using GB pages for direct mapping
Dec 13 01:46:17.040533 kernel: Secure boot disabled
Dec 13 01:46:17.040542 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:46:17.040548 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 01:46:17.040556 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040564 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040573 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 01:46:17.040585 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 01:46:17.040594 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040602 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040612 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040619 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040627 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040638 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040648 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:46:17.040655 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 01:46:17.040662 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 01:46:17.040671 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 01:46:17.040680 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 01:46:17.040688 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 01:46:17.040695 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 01:46:17.040706 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 01:46:17.040714 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 01:46:17.040723 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 01:46:17.040729 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 01:46:17.040738 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:46:17.040746 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:46:17.040755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 01:46:17.040762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 01:46:17.040770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 01:46:17.040781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 01:46:17.040791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 01:46:17.040808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 01:46:17.040816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 01:46:17.040826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 01:46:17.040833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 01:46:17.040840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 01:46:17.040846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 01:46:17.040853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 01:46:17.040862 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 01:46:17.040868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 01:46:17.040875 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 01:46:17.040881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 01:46:17.040888 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 01:46:17.040895 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 01:46:17.040902 kernel: Zone ranges:
Dec 13 01:46:17.040908 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:46:17.040915 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:46:17.040923 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:46:17.040930 kernel: Movable zone start for each node
Dec 13 01:46:17.040937 kernel: Early memory node ranges
Dec 13 01:46:17.040947 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:46:17.040954 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 01:46:17.040960 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 01:46:17.040967 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:46:17.040974 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 01:46:17.040981 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:46:17.040994 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:46:17.041001 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 01:46:17.041007 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 01:46:17.041015 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 01:46:17.041024 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:46:17.041034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:46:17.041041 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:46:17.041048 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 01:46:17.041058 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:46:17.041068 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 01:46:17.041077 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 01:46:17.041084 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:46:17.041095 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:46:17.041103 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 01:46:17.041112 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 01:46:17.041119 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:46:17.041127 kernel: Hyper-V: PV spinlocks enabled
Dec 13 01:46:17.041135 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:46:17.041146 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 01:46:17.041153 kernel: Policy zone: Normal
Dec 13 01:46:17.041163 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:46:17.041171 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:46:17.041181 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:46:17.041188 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:46:17.041196 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:46:17.041205 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 308056K reserved, 0K cma-reserved)
Dec 13 01:46:17.041217 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:46:17.041224 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:46:17.041240 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:46:17.041254 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:46:17.041262 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:46:17.041269 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:46:17.041276 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:46:17.041284 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:46:17.041291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:46:17.041302 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:46:17.041309 kernel: Using NULL legacy PIC
Dec 13 01:46:17.041321 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 01:46:17.041329 kernel: Console: colour dummy device 80x25
Dec 13 01:46:17.041336 kernel: printk: console [tty1] enabled
Dec 13 01:46:17.041346 kernel: printk: console [ttyS0] enabled
Dec 13 01:46:17.041356 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 01:46:17.041366 kernel: ACPI: Core revision 20210730
Dec 13 01:46:17.041375 kernel: Failed to register legacy timer interrupt
Dec 13 01:46:17.041383 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:46:17.041393 kernel: Hyper-V: Using IPI hypercalls
Dec 13 01:46:17.041401 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909)
Dec 13 01:46:17.041410 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:46:17.041418 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:46:17.041429 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:46:17.041436 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:46:17.041445 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:46:17.041455 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:46:17.041465 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:46:17.041472 kernel: RETBleed: Vulnerable
Dec 13 01:46:17.041480 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:46:17.041490 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:46:17.041498 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:46:17.041507 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:46:17.041514 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:46:17.041523 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:46:17.041531 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:46:17.041543 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:46:17.041550 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:46:17.041558 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:46:17.041568 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:46:17.041577 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 01:46:17.041585 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 01:46:17.041593 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 01:46:17.041604 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 01:46:17.041612 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:46:17.041621 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:46:17.041628 kernel: LSM: Security Framework initializing
Dec 13 01:46:17.041637 kernel: SELinux: Initializing.
Dec 13 01:46:17.041647 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:46:17.041655 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:46:17.041665 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:46:17.041672 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:46:17.041681 kernel: signal: max sigframe size: 3632
Dec 13 01:46:17.041690 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:46:17.041700 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:46:17.041707 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:46:17.041716 kernel: x86: Booting SMP configuration:
Dec 13 01:46:17.041725 kernel: .... node #0, CPUs: #1
Dec 13 01:46:17.041738 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 01:46:17.041746 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:46:17.041756 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:46:17.041764 kernel: smpboot: Max logical packages: 1
Dec 13 01:46:17.041774 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS)
Dec 13 01:46:17.041781 kernel: devtmpfs: initialized
Dec 13 01:46:17.041791 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:46:17.041807 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 01:46:17.041816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:46:17.041827 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:46:17.041835 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:46:17.041845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:46:17.041852 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:46:17.041863 kernel: audit: type=2000 audit(1734054375.023:1): state=initialized audit_enabled=0 res=1
Dec 13 01:46:17.041871 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:46:17.041878 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:46:17.041885 kernel: cpuidle: using governor menu
Dec 13 01:46:17.041894 kernel: ACPI: bus type PCI registered
Dec 13 01:46:17.041901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:46:17.041908 kernel: dca service started, version 1.12.1
Dec 13 01:46:17.041915 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:46:17.041923 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:46:17.041930 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:46:17.041937 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:46:17.041944 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:46:17.041952 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:46:17.041972 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:46:17.041982 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:46:17.041991 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:46:17.042001 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:46:17.042008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:46:17.042017 kernel: ACPI: Interpreter enabled
Dec 13 01:46:17.042027 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:46:17.042037 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:46:17.042045 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:46:17.042055 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 01:46:17.042064 kernel: iommu: Default domain type: Translated
Dec 13 01:46:17.042076 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:46:17.042083 kernel: vgaarb: loaded
Dec 13 01:46:17.042092 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:46:17.042101 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:46:17.042112 kernel: PTP clock support registered
Dec 13 01:46:17.042119 kernel: Registered efivars operations
Dec 13 01:46:17.042128 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:46:17.042137 kernel: PCI: System does not support PCI
Dec 13 01:46:17.042149 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 01:46:17.042156 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:46:17.042165 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:46:17.042174 kernel: pnp: PnP ACPI init
Dec 13 01:46:17.042184 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 01:46:17.042191 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:46:17.042198 kernel: NET: Registered PF_INET protocol family
Dec 13 01:46:17.042208 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:46:17.042223 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:46:17.042231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:46:17.042240 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:46:17.042249 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:46:17.042259 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:46:17.042266 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:46:17.042273 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:46:17.042283 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:46:17.042292 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:46:17.042302 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:46:17.042310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:46:17.042320 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Dec 13 01:46:17.042329 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:46:17.042337 kernel: Initialise system trusted keyrings
Dec 13 01:46:17.042344 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:46:17.042354 kernel: Key type asymmetric registered
Dec 13 01:46:17.042362 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:46:17.042371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:46:17.042381 kernel: io scheduler mq-deadline registered
Dec 13 01:46:17.042391 kernel: io scheduler kyber registered
Dec 13 01:46:17.042399 kernel: io scheduler bfq registered
Dec 13 01:46:17.042408 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:46:17.042415 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:46:17.042425 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:46:17.042433 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:46:17.042443 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:46:17.042567 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 01:46:17.042654 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:46:16 UTC (1734054376)
Dec 13 01:46:17.042733 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 01:46:17.042745 kernel: fail to initialize ptp_kvm
Dec 13 01:46:17.042752 kernel: intel_pstate: CPU model not supported
Dec 13 01:46:17.042762 kernel: efifb: probing for efifb
Dec 13 01:46:17.042770 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:46:17.042780 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:46:17.042787 kernel: efifb: scrolling: redraw
Dec 13 01:46:17.042805 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:46:17.042815 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:46:17.042823 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:46:17.042832 kernel: pstore: Registered efi as persistent store backend
Dec 13 01:46:17.042840 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:46:17.042850 kernel: Segment Routing with IPv6
Dec 13 01:46:17.042858 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:46:17.042866 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:46:17.042875 kernel: Key type dns_resolver registered
Dec 13 01:46:17.042886 kernel: IPI shorthand broadcast: enabled
Dec 13 01:46:17.042894 kernel: sched_clock: Marking stable (820364100, 24407600)->(1037940200, -193168500)
Dec 13 01:46:17.042902 kernel: registered taskstats version 1
Dec 13 01:46:17.042911 kernel: Loading compiled-in X.509 certificates
Dec 13 01:46:17.042921 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:46:17.042928 kernel: Key type .fscrypt registered
Dec 13 01:46:17.042936 kernel: Key type fscrypt-provisioning registered
Dec 13 01:46:17.042946 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:46:17.042958 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:46:17.042965 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:46:17.042973 kernel: ima: No architecture policies found
Dec 13 01:46:17.042982 kernel: clk: Disabling unused clocks
Dec 13 01:46:17.042992 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:46:17.042999 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:46:17.043006 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:46:17.043017 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:46:17.043026 kernel: Run /init as init process
Dec 13 01:46:17.043034 kernel: with arguments:
Dec 13 01:46:17.043044 kernel: /init
Dec 13 01:46:17.043054 kernel: with environment:
Dec 13 01:46:17.043062 kernel: HOME=/
Dec 13 01:46:17.043070 kernel: TERM=linux
Dec 13 01:46:17.043078 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:46:17.043090 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:46:17.043102 systemd[1]: Detected virtualization microsoft.
Dec 13 01:46:17.043111 systemd[1]: Detected architecture x86-64.
Dec 13 01:46:17.043122 systemd[1]: Running in initrd.
Dec 13 01:46:17.043130 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:46:17.043140 systemd[1]: Hostname set to .
Dec 13 01:46:17.043148 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:46:17.043158 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:46:17.043166 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:46:17.043176 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:46:17.043184 systemd[1]: Reached target paths.target.
Dec 13 01:46:17.043196 systemd[1]: Reached target slices.target.
Dec 13 01:46:17.043205 systemd[1]: Reached target swap.target.
Dec 13 01:46:17.043214 systemd[1]: Reached target timers.target.
Dec 13 01:46:17.043222 systemd[1]: Listening on iscsid.socket.
Dec 13 01:46:17.043232 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:46:17.043241 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:46:17.043250 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:46:17.043261 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:46:17.043271 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:46:17.043281 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:46:17.043289 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:46:17.043297 systemd[1]: Reached target sockets.target.
Dec 13 01:46:17.043307 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:46:17.043317 systemd[1]: Finished network-cleanup.service.
Dec 13 01:46:17.043325 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:46:17.043333 systemd[1]: Starting systemd-journald.service...
Dec 13 01:46:17.043345 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:46:17.043356 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:46:17.043363 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:46:17.043373 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:46:17.043381 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:46:17.043392 kernel: audit: type=1130 audit(1734054377.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.043403 systemd-journald[183]: Journal started
Dec 13 01:46:17.043450 systemd-journald[183]: Runtime Journal (/run/log/journal/a86601de729e44c1be775ce7d828aaa8) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:46:17.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.019052 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 01:46:17.055811 systemd[1]: Started systemd-journald.service.
Dec 13 01:46:17.060335 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:46:17.097039 kernel: audit: type=1130 audit(1734054377.059:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.097071 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:46:17.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.070136 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:46:17.101954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:46:17.131510 kernel: audit: type=1130 audit(1734054377.069:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.125740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:46:17.126326 systemd-resolved[185]: Positive Trust Anchors:
Dec 13 01:46:17.126337 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:46:17.141692 kernel: Bridge firewalling registered
Dec 13 01:46:17.126390 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:46:17.130121 systemd-resolved[185]: Defaulting to hostname 'linux'.
Dec 13 01:46:17.138208 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 01:46:17.160528 systemd[1]: Started systemd-resolved.service.
Dec 13 01:46:17.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.164747 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:46:17.213994 kernel: audit: type=1130 audit(1734054377.160:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.214026 kernel: audit: type=1130 audit(1734054377.164:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.214059 kernel: audit: type=1130 audit(1734054377.194:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.214081 kernel: SCSI subsystem initialized
Dec 13 01:46:17.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.192736 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:46:17.210045 systemd[1]: Starting dracut-cmdline.service...
Dec 13 01:46:17.228380 dracut-cmdline[200]: dracut-dracut-053
Dec 13 01:46:17.231834 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:46:17.259313 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:46:17.259343 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:46:17.259360 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 01:46:17.263103 systemd-modules-load[184]: Inserted module 'dm_multipath'
Dec 13 01:46:17.263903 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:46:17.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.280823 kernel: audit: type=1130 audit(1734054377.267:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.281169 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:46:17.292384 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:46:17.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.312927 kernel: audit: type=1130 audit(1734054377.294:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.347819 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:46:17.366825 kernel: iscsi: registered transport (tcp)
Dec 13 01:46:17.393411 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:46:17.393490 kernel: QLogic iSCSI HBA Driver
Dec 13 01:46:17.422860 systemd[1]: Finished dracut-cmdline.service.
Dec 13 01:46:17.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:17.427872 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 01:46:17.444070 kernel: audit: type=1130 audit(1734054377.426:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 01:46:17.484822 kernel: raid6: avx512x4 gen() 18300 MB/s Dec 13 01:46:17.505805 kernel: raid6: avx512x4 xor() 7852 MB/s Dec 13 01:46:17.525810 kernel: raid6: avx512x2 gen() 18552 MB/s Dec 13 01:46:17.545814 kernel: raid6: avx512x2 xor() 29466 MB/s Dec 13 01:46:17.565816 kernel: raid6: avx512x1 gen() 18253 MB/s Dec 13 01:46:17.585810 kernel: raid6: avx512x1 xor() 25290 MB/s Dec 13 01:46:17.605818 kernel: raid6: avx2x4 gen() 18275 MB/s Dec 13 01:46:17.625807 kernel: raid6: avx2x4 xor() 7576 MB/s Dec 13 01:46:17.644819 kernel: raid6: avx2x2 gen() 18417 MB/s Dec 13 01:46:17.664811 kernel: raid6: avx2x2 xor() 22088 MB/s Dec 13 01:46:17.683808 kernel: raid6: avx2x1 gen() 14075 MB/s Dec 13 01:46:17.703808 kernel: raid6: avx2x1 xor() 19487 MB/s Dec 13 01:46:17.724813 kernel: raid6: sse2x4 gen() 11749 MB/s Dec 13 01:46:17.744813 kernel: raid6: sse2x4 xor() 7253 MB/s Dec 13 01:46:17.764807 kernel: raid6: sse2x2 gen() 12933 MB/s Dec 13 01:46:17.784811 kernel: raid6: sse2x2 xor() 7459 MB/s Dec 13 01:46:17.804806 kernel: raid6: sse2x1 gen() 11600 MB/s Dec 13 01:46:17.827544 kernel: raid6: sse2x1 xor() 5897 MB/s Dec 13 01:46:17.827581 kernel: raid6: using algorithm avx512x2 gen() 18552 MB/s Dec 13 01:46:17.827593 kernel: raid6: .... xor() 29466 MB/s, rmw enabled Dec 13 01:46:17.835356 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:46:17.850821 kernel: xor: automatically using best checksumming function avx Dec 13 01:46:17.944824 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:46:17.953444 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:46:17.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:17.956000 audit: BPF prog-id=7 op=LOAD Dec 13 01:46:17.956000 audit: BPF prog-id=8 op=LOAD Dec 13 01:46:17.958091 systemd[1]: Starting systemd-udevd.service... 
Dec 13 01:46:17.972889 systemd-udevd[383]: Using default interface naming scheme 'v252'. Dec 13 01:46:17.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:17.977608 systemd[1]: Started systemd-udevd.service. Dec 13 01:46:17.980635 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:46:17.997257 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Dec 13 01:46:18.028654 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:46:18.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:18.033848 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:46:18.067171 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:46:18.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:18.113816 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:46:18.141813 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 01:46:18.147917 kernel: AES CTR mode by8 optimization enabled Dec 13 01:46:18.147959 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 01:46:18.166817 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:46:18.183210 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:46:18.183262 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:46:18.192820 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 01:46:18.205267 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:46:18.205321 kernel: scsi host1: storvsc_host_t Dec 13 01:46:18.205372 kernel: scsi host0: storvsc_host_t Dec 13 01:46:18.213175 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:46:18.213238 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:46:18.226818 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:46:18.237539 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 01:46:18.237593 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:46:18.253311 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:46:18.259722 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:46:18.259743 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:46:18.279744 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:46:18.279926 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:46:18.280090 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:46:18.280241 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:46:18.280398 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:46:18.280549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:46:18.280568 kernel: sd 0:0:0:0: [sda] Attached SCSI disk 
Dec 13 01:46:18.314821 kernel: hv_netvsc 6045bddd-d61a-6045-bddd-d61a6045bddd eth0: VF slot 1 added Dec 13 01:46:18.324813 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:46:18.332441 kernel: hv_pci eb2886b4-2ece-4509-8fc1-e99d0a25e40f: PCI VMBus probing: Using version 0x10004 Dec 13 01:46:18.403314 kernel: hv_pci eb2886b4-2ece-4509-8fc1-e99d0a25e40f: PCI host bridge to bus 2ece:00 Dec 13 01:46:18.403515 kernel: pci_bus 2ece:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 01:46:18.403681 kernel: pci_bus 2ece:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:46:18.403841 kernel: pci 2ece:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 01:46:18.404018 kernel: pci 2ece:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:46:18.404173 kernel: pci 2ece:00:02.0: enabling Extended Tags Dec 13 01:46:18.404326 kernel: pci 2ece:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2ece:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 01:46:18.404482 kernel: pci_bus 2ece:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:46:18.404628 kernel: pci 2ece:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 01:46:18.496824 kernel: mlx5_core 2ece:00:02.0: firmware version: 14.30.5000 Dec 13 01:46:18.762236 kernel: mlx5_core 2ece:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 01:46:18.762425 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449) Dec 13 01:46:18.762445 kernel: mlx5_core 2ece:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 01:46:18.762592 kernel: mlx5_core 2ece:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing Dec 13 01:46:18.762710 kernel: hv_netvsc 6045bddd-d61a-6045-bddd-d61a6045bddd eth0: VF registering: eth1 Dec 13 01:46:18.762829 kernel: mlx5_core 2ece:00:02.0 eth1: joined to eth0 Dec 13 01:46:18.603585 systemd[1]: Found 
device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:46:18.672099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:46:18.774820 kernel: mlx5_core 2ece:00:02.0 enP11982s1: renamed from eth1 Dec 13 01:46:18.787609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:46:18.794376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:46:18.805672 systemd[1]: Starting disk-uuid.service... Dec 13 01:46:18.858630 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:46:19.825820 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:46:19.826210 disk-uuid[560]: The operation has completed successfully. Dec 13 01:46:19.911447 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:46:19.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:19.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:19.911550 systemd[1]: Finished disk-uuid.service. Dec 13 01:46:19.919531 systemd[1]: Starting verity-setup.service... Dec 13 01:46:19.948823 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:46:20.292125 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:46:20.298147 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:46:20.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.300351 systemd[1]: Finished verity-setup.service. Dec 13 01:46:20.376816 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Dec 13 01:46:20.377374 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:46:20.381313 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:46:20.385554 systemd[1]: Starting ignition-setup.service... Dec 13 01:46:20.392308 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:46:20.414751 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:46:20.414811 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:46:20.414830 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:46:20.460557 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:46:20.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.465000 audit: BPF prog-id=9 op=LOAD Dec 13 01:46:20.466729 systemd[1]: Starting systemd-networkd.service... Dec 13 01:46:20.493008 systemd-networkd[830]: lo: Link UP Dec 13 01:46:20.494238 systemd-networkd[830]: lo: Gained carrier Dec 13 01:46:20.494867 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:46:20.496113 systemd-networkd[830]: Enumeration completed Dec 13 01:46:20.500372 systemd-networkd[830]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:46:20.507330 systemd[1]: Started systemd-networkd.service. Dec 13 01:46:20.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.511470 systemd[1]: Reached target network.target. Dec 13 01:46:20.517653 systemd[1]: Starting iscsiuio.service... Dec 13 01:46:20.524937 systemd[1]: Started iscsiuio.service. 
Dec 13 01:46:20.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.527437 systemd[1]: Starting iscsid.service... Dec 13 01:46:20.535923 iscsid[839]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:46:20.535923 iscsid[839]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 01:46:20.535923 iscsid[839]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 01:46:20.535923 iscsid[839]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:46:20.535923 iscsid[839]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:46:20.535923 iscsid[839]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:46:20.535923 iscsid[839]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:46:20.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.534698 systemd[1]: Started iscsid.service. Dec 13 01:46:20.556534 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:46:20.576617 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:46:20.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.578890 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 01:46:20.587213 kernel: mlx5_core 2ece:00:02.0 enP11982s1: Link up Dec 13 01:46:20.587234 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:46:20.591290 systemd[1]: Reached target remote-fs.target. Dec 13 01:46:20.596035 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:46:20.608494 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:46:20.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.619812 kernel: hv_netvsc 6045bddd-d61a-6045-bddd-d61a6045bddd eth0: Data path switched to VF: enP11982s1 Dec 13 01:46:20.625152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:46:20.624637 systemd-networkd[830]: enP11982s1: Link UP Dec 13 01:46:20.624770 systemd-networkd[830]: eth0: Link UP Dec 13 01:46:20.625016 systemd-networkd[830]: eth0: Gained carrier Dec 13 01:46:20.629153 systemd-networkd[830]: enP11982s1: Gained carrier Dec 13 01:46:20.644856 systemd-networkd[830]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:46:20.738966 systemd[1]: Finished ignition-setup.service. Dec 13 01:46:20.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:20.744056 systemd[1]: Starting ignition-fetch-offline.service... 
Dec 13 01:46:22.373941 systemd-networkd[830]: eth0: Gained IPv6LL Dec 13 01:46:23.741842 ignition[854]: Ignition 2.14.0 Dec 13 01:46:23.741863 ignition[854]: Stage: fetch-offline Dec 13 01:46:23.741961 ignition[854]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:46:23.742019 ignition[854]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:46:23.841117 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:46:23.841333 ignition[854]: parsed url from cmdline: "" Dec 13 01:46:23.844309 ignition[854]: no config URL provided Dec 13 01:46:23.844326 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:46:23.844341 ignition[854]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:46:23.844349 ignition[854]: failed to fetch config: resource requires networking Dec 13 01:46:23.845771 ignition[854]: Ignition finished successfully Dec 13 01:46:23.855110 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:46:23.866313 kernel: kauditd_printk_skb: 17 callbacks suppressed Dec 13 01:46:23.866352 kernel: audit: type=1130 audit(1734054383.860:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:23.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:23.862317 systemd[1]: Starting ignition-fetch.service... 
Dec 13 01:46:23.870787 ignition[860]: Ignition 2.14.0 Dec 13 01:46:23.870793 ignition[860]: Stage: fetch Dec 13 01:46:23.870904 ignition[860]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:46:23.870929 ignition[860]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:46:23.874137 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:46:23.876254 ignition[860]: parsed url from cmdline: "" Dec 13 01:46:23.876268 ignition[860]: no config URL provided Dec 13 01:46:23.876293 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:46:23.876310 ignition[860]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:46:23.876394 ignition[860]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:46:23.986275 ignition[860]: GET result: OK Dec 13 01:46:23.986364 ignition[860]: config has been read from IMDS userdata Dec 13 01:46:23.986393 ignition[860]: parsing config with SHA512: b5e94048658415c64fe0ae9106f977c6986fc11470b37344664639ed648a173c4e9e435ff212d0a444382ba6e1fbb8198a3c14c717470eea55a17aa07e939bb3 Dec 13 01:46:23.993417 unknown[860]: fetched base config from "system" Dec 13 01:46:23.994556 unknown[860]: fetched base config from "system" Dec 13 01:46:23.995096 ignition[860]: fetch: fetch complete Dec 13 01:46:23.994563 unknown[860]: fetched user config from "azure" Dec 13 01:46:24.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:23.995102 ignition[860]: fetch: fetch passed Dec 13 01:46:24.017791 kernel: audit: type=1130 audit(1734054384.000:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:23.999084 systemd[1]: Finished ignition-fetch.service. Dec 13 01:46:23.995142 ignition[860]: Ignition finished successfully Dec 13 01:46:24.002301 systemd[1]: Starting ignition-kargs.service... Dec 13 01:46:24.026185 ignition[866]: Ignition 2.14.0 Dec 13 01:46:24.026195 ignition[866]: Stage: kargs Dec 13 01:46:24.026323 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:46:24.026350 ignition[866]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:46:24.028881 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:46:24.034556 ignition[866]: kargs: kargs passed Dec 13 01:46:24.034601 ignition[866]: Ignition finished successfully Dec 13 01:46:24.036562 systemd[1]: Finished ignition-kargs.service. Dec 13 01:46:24.053982 kernel: audit: type=1130 audit(1734054384.039:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.054295 systemd[1]: Starting ignition-disks.service... 
Dec 13 01:46:24.060393 ignition[872]: Ignition 2.14.0 Dec 13 01:46:24.060403 ignition[872]: Stage: disks Dec 13 01:46:24.060531 ignition[872]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:46:24.060564 ignition[872]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:46:24.066871 systemd[1]: Finished ignition-disks.service. Dec 13 01:46:24.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.063872 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:46:24.088874 kernel: audit: type=1130 audit(1734054384.068:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.069000 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:46:24.066015 ignition[872]: disks: disks passed Dec 13 01:46:24.084180 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:46:24.066075 ignition[872]: Ignition finished successfully Dec 13 01:46:24.088844 systemd[1]: Reached target local-fs.target. Dec 13 01:46:24.090693 systemd[1]: Reached target sysinit.target. Dec 13 01:46:24.092583 systemd[1]: Reached target basic.target. Dec 13 01:46:24.097288 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:46:24.146579 systemd-fsck[880]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 01:46:24.156952 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:46:24.173478 kernel: audit: type=1130 audit(1734054384.159:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:24.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.171542 systemd[1]: Mounting sysroot.mount... Dec 13 01:46:24.188815 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:46:24.189119 systemd[1]: Mounted sysroot.mount. Dec 13 01:46:24.192576 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:46:24.246432 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:46:24.252266 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 01:46:24.257621 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:46:24.257663 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:46:24.266871 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:46:24.304637 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:46:24.311160 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:46:24.325814 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (891) Dec 13 01:46:24.334333 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:46:24.341367 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:46:24.341395 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:46:24.341412 kernel: BTRFS info (device sda6): has skinny extents Dec 13 01:46:24.343236 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 01:46:24.357630 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:46:24.372493 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:46:24.377076 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:46:24.897196 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:46:24.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.912258 systemd[1]: Starting ignition-mount.service... Dec 13 01:46:24.916597 kernel: audit: type=1130 audit(1734054384.899:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.919476 systemd[1]: Starting sysroot-boot.service... Dec 13 01:46:24.927653 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 01:46:24.927775 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 01:46:24.950488 systemd[1]: Finished sysroot-boot.service. Dec 13 01:46:24.954634 ignition[958]: INFO : Ignition 2.14.0 Dec 13 01:46:24.954634 ignition[958]: INFO : Stage: mount Dec 13 01:46:24.954634 ignition[958]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 01:46:24.954634 ignition[958]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 01:46:24.990204 kernel: audit: type=1130 audit(1734054384.954:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:24.990229 kernel: audit: type=1130 audit(1734054384.975:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:24.973236 systemd[1]: Finished ignition-mount.service. Dec 13 01:46:24.992142 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:46:24.992142 ignition[958]: INFO : mount: mount passed Dec 13 01:46:24.992142 ignition[958]: INFO : Ignition finished successfully Dec 13 01:46:25.707215 coreos-metadata[890]: Dec 13 01:46:25.707 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:46:25.722340 coreos-metadata[890]: Dec 13 01:46:25.722 INFO Fetch successful Dec 13 01:46:25.759376 coreos-metadata[890]: Dec 13 01:46:25.759 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:46:25.775935 coreos-metadata[890]: Dec 13 01:46:25.775 INFO Fetch successful Dec 13 01:46:25.791500 coreos-metadata[890]: Dec 13 01:46:25.791 INFO wrote hostname ci-3510.3.6-a-d3376cd0d9 to /sysroot/etc/hostname Dec 13 01:46:25.793309 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 01:46:25.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:25.798719 systemd[1]: Starting ignition-files.service... 
Dec 13 01:46:25.818766 kernel: audit: type=1130 audit(1734054385.797:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:25.823468 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:46:25.837818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (969)
Dec 13 01:46:25.850255 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:46:25.850287 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:46:25.850298 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 01:46:25.855026 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:46:25.868201 ignition[988]: INFO : Ignition 2.14.0
Dec 13 01:46:25.868201 ignition[988]: INFO : Stage: files
Dec 13 01:46:25.872116 ignition[988]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:46:25.872116 ignition[988]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:46:25.886831 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:46:25.898315 ignition[988]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:46:25.901570 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:46:25.901570 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:46:25.966432 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:46:25.970496 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:46:25.978862 unknown[988]: wrote ssh authorized keys file for user: core
Dec 13 01:46:25.981524 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:46:25.996298 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:46:26.002433 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:46:26.266375 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:46:26.374466 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:46:26.379541 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:46:26.449726 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (993)
Dec 13 01:46:26.399024 systemd[1]: mnt-oem2937928763.mount: Deactivated successfully.
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937928763"
Dec 13 01:46:26.452046 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937928763": device or resource busy
Dec 13 01:46:26.452046 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2937928763", trying btrfs: device or resource busy
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937928763"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2937928763"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2937928763"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2937928763"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2658239557"
Dec 13 01:46:26.452046 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2658239557": device or resource busy
Dec 13 01:46:26.452046 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2658239557", trying btrfs: device or resource busy
Dec 13 01:46:26.452046 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2658239557"
Dec 13 01:46:26.412413 systemd[1]: mnt-oem2658239557.mount: Deactivated successfully.
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2658239557"
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem2658239557"
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem2658239557"
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:46:26.522332 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:46:26.961624 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK
Dec 13 01:46:27.375634 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:46:27.375634 ignition[988]: INFO : files: op(13): [started] processing unit "waagent.service"
Dec 13 01:46:27.375634 ignition[988]: INFO : files: op(13): [finished] processing unit "waagent.service"
Dec 13 01:46:27.375634 ignition[988]: INFO : files: op(14): [started] processing unit "nvidia.service"
Dec 13 01:46:27.375634 ignition[988]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Dec 13 01:46:27.375634 ignition[988]: INFO : files: op(15): [started] processing unit "prepare-helm.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(15): [finished] processing unit "prepare-helm.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:46:27.396188 ignition[988]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:46:27.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.442197 ignition[988]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:46:27.442197 ignition[988]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:46:27.442197 ignition[988]: INFO : files: files passed
Dec 13 01:46:27.442197 ignition[988]: INFO : Ignition finished successfully
Dec 13 01:46:27.452363 kernel: audit: type=1130 audit(1734054387.429:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.424492 systemd[1]: Finished ignition-files.service.
Dec 13 01:46:27.445040 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:46:27.460925 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:46:27.462783 systemd[1]: Starting ignition-quench.service...
Dec 13 01:46:27.470429 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:46:27.472720 systemd[1]: Finished ignition-quench.service.
Dec 13 01:46:27.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.483326 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:46:27.487411 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:46:27.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.489893 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:46:27.494781 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:46:27.509460 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:46:27.509542 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:46:27.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.515761 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:46:27.519593 systemd[1]: Reached target initrd.target.
Dec 13 01:46:27.523135 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:46:27.526773 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:46:27.537102 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:46:27.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.541893 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:46:27.551988 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:46:27.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.621075 iscsid[839]: iscsid shutting down.
Dec 13 01:46:27.553118 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:46:27.553513 systemd[1]: Stopped target timers.target.
Dec 13 01:46:27.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.628938 ignition[1027]: INFO : Ignition 2.14.0
Dec 13 01:46:27.628938 ignition[1027]: INFO : Stage: umount
Dec 13 01:46:27.628938 ignition[1027]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 01:46:27.628938 ignition[1027]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Dec 13 01:46:27.628938 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:46:27.628938 ignition[1027]: INFO : umount: umount passed
Dec 13 01:46:27.628938 ignition[1027]: INFO : Ignition finished successfully
Dec 13 01:46:27.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.554340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:46:27.554439 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:46:27.555033 systemd[1]: Stopped target initrd.target.
Dec 13 01:46:27.555292 systemd[1]: Stopped target basic.target.
Dec 13 01:46:27.555698 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:46:27.556111 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:46:27.556509 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:46:27.557370 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:46:27.557793 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:46:27.558379 systemd[1]: Stopped target sysinit.target.
Dec 13 01:46:27.558791 systemd[1]: Stopped target local-fs.target.
Dec 13 01:46:27.559200 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:46:27.559611 systemd[1]: Stopped target swap.target.
Dec 13 01:46:27.560002 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:46:27.560130 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:46:27.560546 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:46:27.561050 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:46:27.561158 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:46:27.561637 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:46:27.561747 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:46:27.562084 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:46:27.562199 systemd[1]: Stopped ignition-files.service.
Dec 13 01:46:27.562514 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:46:27.562622 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 01:46:27.563899 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:46:27.566548 systemd[1]: Stopping iscsid.service...
Dec 13 01:46:27.595196 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:46:27.598948 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:46:27.599173 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:46:27.601603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:46:27.601753 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:46:27.605900 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:46:27.606024 systemd[1]: Stopped iscsid.service.
Dec 13 01:46:27.608507 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:46:27.612585 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:46:27.612690 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:46:27.618302 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:46:27.618411 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:46:27.623175 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:46:27.625300 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:46:27.628964 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:46:27.629020 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:46:27.631206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:46:27.631255 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:46:27.634912 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:46:27.639464 systemd[1]: Stopped ignition-fetch.service.
Dec 13 01:46:27.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.741040 systemd[1]: Stopped target network.target.
Dec 13 01:46:27.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.743052 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:46:27.743147 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:46:27.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.745304 systemd[1]: Stopped target paths.target.
Dec 13 01:46:27.747122 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:46:27.754185 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:46:27.754657 systemd[1]: Stopped target slices.target.
Dec 13 01:46:27.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.785000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:46:27.755167 systemd[1]: Stopped target sockets.target.
Dec 13 01:46:27.755836 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:46:27.755881 systemd[1]: Closed iscsid.socket.
Dec 13 01:46:27.756383 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:46:27.756412 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:46:27.756890 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:46:27.756933 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:46:27.757483 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:46:27.757818 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:46:27.760067 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:46:27.771188 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:46:27.771281 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:46:27.772916 systemd-networkd[830]: eth0: DHCPv6 lease lost
Dec 13 01:46:27.787000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:46:27.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.783645 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:46:27.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.783769 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:46:27.786099 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:46:27.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.786140 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:46:27.809720 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:46:27.811813 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:46:27.811894 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:46:27.815139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:46:27.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.815196 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:46:27.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.819106 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:46:27.819157 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:46:27.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.821364 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:46:27.827748 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:46:27.886846 kernel: hv_netvsc 6045bddd-d61a-6045-bddd-d61a6045bddd eth0: Data path switched from VF: enP11982s1
Dec 13 01:46:27.828298 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:46:27.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.828428 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:46:27.834346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:46:27.834390 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:46:27.839968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:46:27.840002 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:46:27.843780 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:46:27.843864 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:46:27.848119 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:46:27.848165 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:46:27.850090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:46:27.850130 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:46:27.856897 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:46:27.870902 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:46:27.870957 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 01:46:27.875155 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:46:27.875201 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:46:27.886963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:46:27.887014 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:46:27.927340 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:46:27.933101 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:46:27.935296 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:46:27.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.939249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:46:27.941836 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:46:27.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:27.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:28.277975 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:46:28.278111 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:46:28.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:28.284068 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:46:28.288257 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:46:28.288322 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:46:28.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:28.300706 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:46:28.311669 systemd[1]: Switching root.
Dec 13 01:46:28.336339 systemd-journald[183]: Journal stopped
Dec 13 01:46:42.370100 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:46:42.370127 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:46:42.370141 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:46:42.370150 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:46:42.370161 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:46:42.370171 kernel: SELinux: policy capability open_perms=1 Dec 13 01:46:42.370183 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:46:42.370195 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:46:42.370204 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:46:42.370215 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:46:42.370223 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:46:42.370234 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:46:42.370244 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 01:46:42.370254 kernel: audit: type=1403 audit(1734054390.729:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:46:42.370267 systemd[1]: Successfully loaded SELinux policy in 312.711ms. Dec 13 01:46:42.370280 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 52.190ms. Dec 13 01:46:42.370293 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:46:42.370303 systemd[1]: Detected virtualization microsoft. Dec 13 01:46:42.370317 systemd[1]: Detected architecture x86-64. Dec 13 01:46:42.370328 systemd[1]: Detected first boot. Dec 13 01:46:42.370339 systemd[1]: Hostname set to . Dec 13 01:46:42.370350 systemd[1]: Initializing machine ID from random generator. 
Dec 13 01:46:42.370361 kernel: audit: type=1400 audit(1734054391.653:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:46:42.370372 kernel: audit: type=1400 audit(1734054391.668:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:46:42.370383 kernel: audit: type=1400 audit(1734054391.668:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:46:42.370395 kernel: audit: type=1334 audit(1734054391.691:85): prog-id=10 op=LOAD Dec 13 01:46:42.370409 kernel: audit: type=1334 audit(1734054391.691:86): prog-id=10 op=UNLOAD Dec 13 01:46:42.370419 kernel: audit: type=1334 audit(1734054391.696:87): prog-id=11 op=LOAD Dec 13 01:46:42.370430 kernel: audit: type=1334 audit(1734054391.696:88): prog-id=11 op=UNLOAD Dec 13 01:46:42.370441 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 01:46:42.370451 kernel: audit: type=1400 audit(1734054393.305:89): avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:46:42.370464 kernel: audit: type=1300 audit(1734054393.305:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:46:42.370477 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:46:42.370488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:46:42.370500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:46:42.370512 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:46:42.370523 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 13 01:46:42.370533 kernel: audit: type=1334 audit(1734054401.824:91): prog-id=12 op=LOAD
Dec 13 01:46:42.370543 kernel: audit: type=1334 audit(1734054401.824:92): prog-id=3 op=UNLOAD
Dec 13 01:46:42.370557 kernel: audit: type=1334 audit(1734054401.828:93): prog-id=13 op=LOAD
Dec 13 01:46:42.370572 kernel: audit: type=1334 audit(1734054401.832:94): prog-id=14 op=LOAD
Dec 13 01:46:42.370582 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:46:42.370595 kernel: audit: type=1334 audit(1734054401.833:95): prog-id=4 op=UNLOAD
Dec 13 01:46:42.370604 kernel: audit: type=1334 audit(1734054401.833:96): prog-id=5 op=UNLOAD
Dec 13 01:46:42.370616 kernel: audit: type=1131 audit(1734054401.834:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.370627 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 01:46:42.370638 kernel: audit: type=1334 audit(1734054401.877:98): prog-id=12 op=UNLOAD
Dec 13 01:46:42.370652 kernel: audit: type=1130 audit(1734054401.884:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.370662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:46:42.370674 kernel: audit: type=1131 audit(1734054401.884:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.370684 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:46:42.370696 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:46:42.370709 systemd[1]: Created slice system-getty.slice.
Dec 13 01:46:42.370722 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:46:42.370743 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:46:42.370763 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:46:42.370784 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:46:42.370826 systemd[1]: Created slice user.slice.
Dec 13 01:46:42.370845 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:46:42.370864 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:46:42.370884 systemd[1]: Set up automount boot.automount.
Dec 13 01:46:42.370909 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:46:42.370927 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 01:46:42.370951 systemd[1]: Stopped target initrd-fs.target.
Dec 13 01:46:42.370971 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 01:46:42.370992 systemd[1]: Reached target integritysetup.target.
Dec 13 01:46:42.371010 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:46:42.371028 systemd[1]: Reached target remote-fs.target.
Dec 13 01:46:42.371049 systemd[1]: Reached target slices.target.
Dec 13 01:46:42.371067 systemd[1]: Reached target swap.target.
Dec 13 01:46:42.371085 systemd[1]: Reached target torcx.target.
Dec 13 01:46:42.371108 systemd[1]: Reached target veritysetup.target.
Dec 13 01:46:42.371125 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:46:42.371143 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:46:42.371161 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:46:42.371181 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:46:42.371203 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:46:42.371220 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:46:42.371240 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:46:42.371259 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:46:42.371277 systemd[1]: Mounting media.mount...
Dec 13 01:46:42.371295 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:46:42.371318 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:46:42.371338 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:46:42.371357 systemd[1]: Mounting tmp.mount...
Dec 13 01:46:42.371382 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:46:42.371400 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:46:42.371420 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:46:42.371439 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:46:42.371461 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:46:42.371481 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:46:42.371502 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:46:42.371521 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:46:42.371540 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:46:42.371564 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:46:42.371585 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:46:42.371602 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 01:46:42.371618 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:46:42.371632 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:46:42.371647 systemd[1]: Stopped systemd-journald.service.
Dec 13 01:46:42.371665 systemd[1]: Starting systemd-journald.service...
Dec 13 01:46:42.371679 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:46:42.371695 systemd[1]: Starting systemd-network-generator.service...
Dec 13 01:46:42.371709 kernel: loop: module loaded
Dec 13 01:46:42.371719 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 01:46:42.371732 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:46:42.371742 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:46:42.371755 systemd[1]: Stopped verity-setup.service.
Dec 13 01:46:42.371765 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:46:42.371776 kernel: fuse: init (API version 7.34)
Dec 13 01:46:42.371785 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 01:46:42.371914 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 01:46:42.371928 systemd[1]: Mounted media.mount.
Dec 13 01:46:42.371939 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 01:46:42.371950 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 01:46:42.371962 systemd[1]: Mounted tmp.mount.
Dec 13 01:46:42.371973 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:46:42.371985 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:46:42.372005 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:46:42.372017 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 01:46:42.372033 systemd-journald[1142]: Journal started
Dec 13 01:46:42.372084 systemd-journald[1142]: Runtime Journal (/run/log/journal/683a02a2ee4746a79d93b63f6a40ca47) is 8.0M, max 159.0M, 151.0M free.
Dec 13 01:46:30.729000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:46:31.653000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:46:31.668000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:46:31.668000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:46:31.691000 audit: BPF prog-id=10 op=LOAD
Dec 13 01:46:31.691000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 01:46:31.696000 audit: BPF prog-id=11 op=LOAD
Dec 13 01:46:31.696000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 01:46:33.305000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 01:46:33.305000 audit[1061]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:46:33.305000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:46:33.312000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 01:46:33.312000 audit[1061]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:46:33.312000 audit: CWD cwd="/"
Dec 13 01:46:33.312000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:46:33.312000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:46:33.312000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 01:46:41.824000 audit: BPF prog-id=12 op=LOAD
Dec 13 01:46:41.824000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:46:41.828000 audit: BPF prog-id=13 op=LOAD
Dec 13 01:46:41.832000 audit: BPF prog-id=14 op=LOAD
Dec 13 01:46:41.833000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:46:41.833000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:46:41.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:41.877000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 01:46:41.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:41.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.235000 audit: BPF prog-id=15 op=LOAD
Dec 13 01:46:42.235000 audit: BPF prog-id=16 op=LOAD
Dec 13 01:46:42.235000 audit: BPF prog-id=17 op=LOAD
Dec 13 01:46:42.235000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 01:46:42.235000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 01:46:42.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.366000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 01:46:42.366000 audit[1142]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd9d0bcd90 a2=4000 a3=7ffd9d0bce2c items=0 ppid=1 pid=1142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:46:42.366000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 01:46:41.823105 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:46:33.269394 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:46:41.834711 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:46:33.269906 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:46:33.269929 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:46:33.269967 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 01:46:33.269980 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 01:46:33.270027 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 01:46:33.270041 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 01:46:42.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:33.270273 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 01:46:33.270315 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 01:46:42.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:33.270329 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 01:46:33.293350 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 01:46:33.293398 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 01:46:33.293418 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 01:46:33.293432 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 01:46:33.293451 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 01:46:33.293464 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 01:46:40.897417 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:46:40.897650 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:46:40.897765 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:46:40.897940 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 01:46:40.897986 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 01:46:40.898036 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T01:46:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 01:46:42.380819 systemd[1]: Started systemd-journald.service.
Dec 13 01:46:42.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.381664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:46:42.382972 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 01:46:42.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.385426 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:46:42.385566 systemd[1]: Finished modprobe@drm.service.
Dec 13 01:46:42.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.387873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:46:42.388071 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 01:46:42.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.390740 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:46:42.391071 systemd[1]: Finished modprobe@fuse.service.
Dec 13 01:46:42.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.393377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:46:42.393761 systemd[1]: Finished modprobe@loop.service.
Dec 13 01:46:42.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.396349 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:46:42.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.399272 systemd[1]: Finished systemd-network-generator.service.
Dec 13 01:46:42.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.401930 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 01:46:42.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.404718 systemd[1]: Reached target network-pre.target.
Dec 13 01:46:42.409445 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 01:46:42.417421 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 01:46:42.419265 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:46:42.433364 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 01:46:42.437090 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 01:46:42.439642 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:46:42.441101 systemd[1]: Starting systemd-random-seed.service...
Dec 13 01:46:42.443176 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 01:46:42.444480 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:46:42.448543 systemd[1]: Starting systemd-sysusers.service...
Dec 13 01:46:42.459195 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 01:46:42.463415 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 01:46:42.467671 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:46:42.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.472196 systemd[1]: Finished systemd-random-seed.service.
Dec 13 01:46:42.473141 systemd-journald[1142]: Time spent on flushing to /var/log/journal/683a02a2ee4746a79d93b63f6a40ca47 is 23.845ms for 1153 entries.
Dec 13 01:46:42.473141 systemd-journald[1142]: System Journal (/var/log/journal/683a02a2ee4746a79d93b63f6a40ca47) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:46:42.558258 systemd-journald[1142]: Received client request to flush runtime journal.
Dec 13 01:46:42.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.478043 systemd[1]: Reached target first-boot-complete.target.
Dec 13 01:46:42.562391 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:46:42.481922 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 01:46:42.506303 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:46:42.559361 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 01:46:42.962635 systemd[1]: Finished systemd-sysusers.service.
Dec 13 01:46:42.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:42.967548 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:46:43.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:43.288128 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:46:43.555139 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 01:46:43.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:43.557000 audit: BPF prog-id=18 op=LOAD
Dec 13 01:46:43.557000 audit: BPF prog-id=19 op=LOAD
Dec 13 01:46:43.557000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:46:43.557000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:46:43.559075 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:46:43.577290 systemd-udevd[1189]: Using default interface naming scheme 'v252'.
Dec 13 01:46:43.782009 systemd[1]: Started systemd-udevd.service.
Dec 13 01:46:43.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:46:43.785000 audit: BPF prog-id=20 op=LOAD
Dec 13 01:46:43.786886 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:46:43.824559 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 01:46:43.877000 audit[1201]: AVC avc: denied { confidentiality } for pid=1201 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:46:43.899426 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:46:43.899548 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:46:43.899583 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:46:43.903958 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:46:43.902000 audit: BPF prog-id=21 op=LOAD
Dec 13 01:46:43.902000 audit: BPF prog-id=22 op=LOAD
Dec 13 01:46:43.902000 audit: BPF prog-id=23 op=LOAD
Dec 13 01:46:43.904590 systemd[1]: Starting systemd-userdbd.service...
Dec 13 01:46:43.922902 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:46:43.934026 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:46:43.934119 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:46:43.934150 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:46:43.812664 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:46:43.859736 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:46:43.859777 systemd-journald[1142]: Time jumped backwards, rotating.
Dec 13 01:46:43.859858 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:46:43.859919 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:46:43.859945 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:46:43.877000 audit[1201]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555ce086f9a0 a1=f884 a2=7f7c93235bc5 a3=5 items=12 ppid=1189 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:46:43.877000 audit: CWD cwd="/" Dec 13 01:46:43.877000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=1 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=2 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=3 name=(null) inode=15450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=4 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=5 name=(null) inode=15451 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=6 name=(null) 
inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=7 name=(null) inode=15452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=8 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=9 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=10 name=(null) inode=15449 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PATH item=11 name=(null) inode=15454 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:46:43.877000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:46:43.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:43.874787 systemd[1]: Started systemd-userdbd.service. Dec 13 01:46:44.021286 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1205) Dec 13 01:46:44.099942 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 01:46:44.109343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:46:44.179426 systemd[1]: Finished systemd-udev-settle.service. 
Dec 13 01:46:44.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:44.183868 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:46:44.224179 systemd-networkd[1195]: lo: Link UP Dec 13 01:46:44.224189 systemd-networkd[1195]: lo: Gained carrier Dec 13 01:46:44.224760 systemd-networkd[1195]: Enumeration completed Dec 13 01:46:44.224896 systemd[1]: Started systemd-networkd.service. Dec 13 01:46:44.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:44.228862 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:46:44.263586 systemd-networkd[1195]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:46:44.316941 kernel: mlx5_core 2ece:00:02.0 enP11982s1: Link up Dec 13 01:46:44.337195 kernel: hv_netvsc 6045bddd-d61a-6045-bddd-d61a6045bddd eth0: Data path switched to VF: enP11982s1 Dec 13 01:46:44.337047 systemd-networkd[1195]: enP11982s1: Link UP Dec 13 01:46:44.337207 systemd-networkd[1195]: eth0: Link UP Dec 13 01:46:44.337213 systemd-networkd[1195]: eth0: Gained carrier Dec 13 01:46:44.343192 systemd-networkd[1195]: enP11982s1: Gained carrier Dec 13 01:46:44.373052 systemd-networkd[1195]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 01:46:44.454785 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:46:44.480999 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 01:46:44.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:44.483588 systemd[1]: Reached target cryptsetup.target. Dec 13 01:46:44.487027 systemd[1]: Starting lvm2-activation.service... Dec 13 01:46:44.491444 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:46:44.512800 systemd[1]: Finished lvm2-activation.service. Dec 13 01:46:44.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:44.515164 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:46:44.517306 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:46:44.517342 systemd[1]: Reached target local-fs.target. Dec 13 01:46:44.519571 systemd[1]: Reached target machines.target. Dec 13 01:46:44.522921 systemd[1]: Starting ldconfig.service... Dec 13 01:46:44.525055 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:46:44.525156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:44.526344 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:46:44.529352 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:46:44.532904 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:46:44.536326 systemd[1]: Starting systemd-sysext.service... 
Dec 13 01:46:44.576172 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1271 (bootctl) Dec 13 01:46:44.577500 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:46:45.023343 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:46:45.055526 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:46:45.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.078645 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:46:45.078878 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:46:45.113916 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:46:45.136365 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:46:45.138908 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:46:45.139271 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:46:45.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.157914 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:46:45.162539 (sd-sysext)[1283]: Using extensions 'kubernetes'. Dec 13 01:46:45.163001 (sd-sysext)[1283]: Merged extensions into '/usr'. Dec 13 01:46:45.178239 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:46:45.184525 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:46:45.185785 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.187409 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 01:46:45.194709 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:46:45.196942 systemd[1]: Starting modprobe@loop.service... Dec 13 01:46:45.197916 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.198069 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:45.198213 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:46:45.202123 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:46:45.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:45.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.203338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:46:45.203515 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:46:45.205330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:46:45.205483 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:46:45.206989 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:46:45.207105 systemd[1]: Finished modprobe@loop.service. Dec 13 01:46:45.208524 systemd[1]: Finished systemd-sysext.service. Dec 13 01:46:45.212328 systemd[1]: Starting ensure-sysext.service... Dec 13 01:46:45.214990 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:46:45.215060 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.216342 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:46:45.223569 systemd[1]: Reloading. Dec 13 01:46:45.235677 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:46:45.268440 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 01:46:45.286470 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2024-12-13T01:46:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:46:45.286507 /usr/lib/systemd/system-generators/torcx-generator[1310]: time="2024-12-13T01:46:45Z" level=info msg="torcx already run" Dec 13 01:46:45.305669 systemd-tmpfiles[1290]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:46:45.383406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:46:45.383427 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:46:45.400206 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 01:46:45.462000 audit: BPF prog-id=24 op=LOAD Dec 13 01:46:45.462000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:46:45.462000 audit: BPF prog-id=25 op=LOAD Dec 13 01:46:45.462000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:46:45.462000 audit: BPF prog-id=26 op=LOAD Dec 13 01:46:45.462000 audit: BPF prog-id=27 op=LOAD Dec 13 01:46:45.462000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:46:45.462000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:46:45.465000 audit: BPF prog-id=28 op=LOAD Dec 13 01:46:45.465000 audit: BPF prog-id=29 op=LOAD Dec 13 01:46:45.465000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:46:45.465000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:46:45.466000 audit: BPF prog-id=30 op=LOAD Dec 13 01:46:45.466000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:46:45.466000 audit: BPF prog-id=31 op=LOAD Dec 13 01:46:45.466000 audit: BPF prog-id=32 op=LOAD Dec 13 01:46:45.466000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:46:45.467000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:46:45.481025 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:46:45.481389 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.482760 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:46:45.486382 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:46:45.489381 systemd[1]: Starting modprobe@loop.service... Dec 13 01:46:45.490666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.490932 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:45.491198 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:46:45.493858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:46:45.494192 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:46:45.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.496123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:46:45.496248 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:46:45.500050 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:46:45.500186 systemd[1]: Finished modprobe@loop.service. Dec 13 01:46:45.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:45.504188 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:46:45.504514 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.506200 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:46:45.508470 systemd[1]: Starting modprobe@drm.service... Dec 13 01:46:45.511284 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:46:45.514158 systemd[1]: Starting modprobe@loop.service... Dec 13 01:46:45.515703 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.515940 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:45.516183 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:46:45.519675 systemd[1]: Finished ensure-sysext.service. Dec 13 01:46:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.521195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:46:45.521304 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:46:45.521916 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:46:45.522015 systemd[1]: Finished modprobe@drm.service. Dec 13 01:46:45.522292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:46:45.522390 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:46:45.522726 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 01:46:45.522823 systemd[1]: Finished modprobe@loop.service. Dec 13 01:46:45.523874 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:46:45.523919 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:46:45.778333 systemd-fsck[1279]: fsck.fat 4.2 (2021-01-31) Dec 13 01:46:45.778333 systemd-fsck[1279]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 01:46:45.780310 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:46:45.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.785353 systemd[1]: Mounting boot.mount... Dec 13 01:46:45.798574 systemd[1]: Mounted boot.mount. Dec 13 01:46:45.811894 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:46:45.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:45.859057 systemd-networkd[1195]: eth0: Gained IPv6LL Dec 13 01:46:45.864768 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:46:45.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.530067 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:46:46.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:46:46.534168 systemd[1]: Starting audit-rules.service... Dec 13 01:46:46.537404 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:46:46.541476 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:46:46.545000 audit: BPF prog-id=33 op=LOAD Dec 13 01:46:46.549000 audit: BPF prog-id=34 op=LOAD Dec 13 01:46:46.548358 systemd[1]: Starting systemd-resolved.service... Dec 13 01:46:46.552757 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:46:46.556758 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:46:46.580000 audit[1390]: SYSTEM_BOOT pid=1390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.588437 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:46:46.590985 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:46:46.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.593111 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:46:46.641321 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:46:46.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:46:46.668107 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:46:46.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.670519 systemd[1]: Reached target time-set.target. Dec 13 01:46:46.757575 systemd-resolved[1387]: Positive Trust Anchors: Dec 13 01:46:46.757649 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:46:46.757702 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:46:46.806492 systemd-resolved[1387]: Using system hostname 'ci-3510.3.6-a-d3376cd0d9'. Dec 13 01:46:46.808189 systemd[1]: Started systemd-resolved.service. Dec 13 01:46:46.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.810945 systemd[1]: Reached target network.target. Dec 13 01:46:46.813827 kernel: kauditd_printk_skb: 121 callbacks suppressed Dec 13 01:46:46.813901 kernel: audit: type=1130 audit(1734054406.809:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:46:46.826310 systemd[1]: Reached target network-online.target. 
Dec 13 01:46:46.828324 systemd[1]: Reached target nss-lookup.target. Dec 13 01:46:46.871000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:46:46.873990 augenrules[1405]: No rules Dec 13 01:46:46.875247 systemd[1]: Finished audit-rules.service. Dec 13 01:46:46.880913 kernel: audit: type=1305 audit(1734054406.871:206): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:46:46.880984 kernel: audit: type=1300 audit(1734054406.871:206): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ab9e360 a2=420 a3=0 items=0 ppid=1384 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:46:46.881013 kernel: audit: type=1327 audit(1734054406.871:206): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:46:46.871000 audit[1405]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9ab9e360 a2=420 a3=0 items=0 ppid=1384 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:46:46.871000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:46:46.945686 systemd-timesyncd[1388]: Contacted time server 188.125.64.6:123 (0.flatcar.pool.ntp.org). Dec 13 01:46:46.945804 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2024-12-13 01:46:46.934130 UTC. Dec 13 01:46:51.211965 ldconfig[1270]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:46:51.224966 systemd[1]: Finished ldconfig.service. 
Dec 13 01:46:51.228676 systemd[1]: Starting systemd-update-done.service... Dec 13 01:46:51.248603 systemd[1]: Finished systemd-update-done.service. Dec 13 01:46:51.251116 systemd[1]: Reached target sysinit.target. Dec 13 01:46:51.253118 systemd[1]: Started motdgen.path. Dec 13 01:46:51.254759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:46:51.257529 systemd[1]: Started logrotate.timer. Dec 13 01:46:51.259313 systemd[1]: Started mdadm.timer. Dec 13 01:46:51.261016 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:46:51.263000 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:46:51.263030 systemd[1]: Reached target paths.target. Dec 13 01:46:51.264799 systemd[1]: Reached target timers.target. Dec 13 01:46:51.267197 systemd[1]: Listening on dbus.socket. Dec 13 01:46:51.270136 systemd[1]: Starting docker.socket... Dec 13 01:46:51.274836 systemd[1]: Listening on sshd.socket. Dec 13 01:46:51.276723 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:51.277225 systemd[1]: Listening on docker.socket. Dec 13 01:46:51.279360 systemd[1]: Reached target sockets.target. Dec 13 01:46:51.281335 systemd[1]: Reached target basic.target. Dec 13 01:46:51.283165 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:46:51.283199 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:46:51.284328 systemd[1]: Starting containerd.service... Dec 13 01:46:51.287383 systemd[1]: Starting dbus.service... Dec 13 01:46:51.290343 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:46:51.293782 systemd[1]: Starting extend-filesystems.service... 
Dec 13 01:46:51.295988 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:46:51.308761 systemd[1]: Starting kubelet.service... Dec 13 01:46:51.312908 systemd[1]: Starting motdgen.service... Dec 13 01:46:51.318597 systemd[1]: Started nvidia.service. Dec 13 01:46:51.323938 systemd[1]: Starting prepare-helm.service... Dec 13 01:46:51.327207 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:46:51.330678 systemd[1]: Starting sshd-keygen.service... Dec 13 01:46:51.335695 systemd[1]: Starting systemd-logind.service... Dec 13 01:46:51.339170 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:46:51.347978 jq[1415]: false Dec 13 01:46:51.339279 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:46:51.339825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:46:51.342942 systemd[1]: Starting update-engine.service... Dec 13 01:46:51.346511 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:46:51.351807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:46:51.352057 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:46:51.356610 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:46:51.356818 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:46:51.365992 jq[1432]: true Dec 13 01:46:51.390761 jq[1440]: true Dec 13 01:46:51.392877 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:46:51.393133 systemd[1]: Finished motdgen.service. 
Dec 13 01:46:51.401375 extend-filesystems[1416]: Found loop1
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda1
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda2
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda3
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found usr
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda4
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda6
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda7
Dec 13 01:46:51.403920 extend-filesystems[1416]: Found sda9
Dec 13 01:46:51.403920 extend-filesystems[1416]: Checking size of /dev/sda9
Dec 13 01:46:51.465988 tar[1439]: linux-amd64/helm
Dec 13 01:46:51.479799 extend-filesystems[1416]: Old size kept for /dev/sda9
Dec 13 01:46:51.479799 extend-filesystems[1416]: Found sr0
Dec 13 01:46:51.483366 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:46:51.490196 env[1441]: time="2024-12-13T01:46:51.479825954Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 01:46:51.483549 systemd[1]: Finished extend-filesystems.service.
Dec 13 01:46:51.528396 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:46:51.533157 systemd-logind[1428]: New seat seat0.
Dec 13 01:46:51.573635 env[1441]: time="2024-12-13T01:46:51.572900385Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:46:51.573635 env[1441]: time="2024-12-13T01:46:51.573068692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585000 env[1441]: time="2024-12-13T01:46:51.584949548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585000 env[1441]: time="2024-12-13T01:46:51.584997921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585306 env[1441]: time="2024-12-13T01:46:51.585272070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585373 env[1441]: time="2024-12-13T01:46:51.585307751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585373 env[1441]: time="2024-12-13T01:46:51.585325541Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 01:46:51.585373 env[1441]: time="2024-12-13T01:46:51.585337834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585486 env[1441]: time="2024-12-13T01:46:51.585441877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585737 env[1441]: time="2024-12-13T01:46:51.585706131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:46:51.585995 env[1441]: time="2024-12-13T01:46:51.585963190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:46:51.586061 env[1441]: time="2024-12-13T01:46:51.585995172Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:46:51.586108 env[1441]: time="2024-12-13T01:46:51.586061136Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 01:46:51.586108 env[1441]: time="2024-12-13T01:46:51.586079326Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611831141Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611877815Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611913495Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611953873Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611974562Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.611992951Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612011841Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612030531Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612047621Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612067810Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612086500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.612153 env[1441]: time="2024-12-13T01:46:51.612104290Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.613586973Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.613699511Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614017136Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614056115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614075005Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614138669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614156660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614174850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614190041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614205932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614221724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614236715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614251707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.614526 env[1441]: time="2024-12-13T01:46:51.614272796Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:46:51.615046 bash[1469]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:46:51.615219 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615307526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615326915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615339608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615355499Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615373289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615392979Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615415666Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 01:46:51.617007 env[1441]: time="2024-12-13T01:46:51.615452346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.615625251Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.615675023Z" level=info msg="Connect containerd service"
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.615706906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.616259301Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.616489574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.616526554Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:46:51.617217 env[1441]: time="2024-12-13T01:46:51.616563434Z" level=info msg="containerd successfully booted in 0.137591s"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620274789Z" level=info msg="Start subscribing containerd event"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620332658Z" level=info msg="Start recovering state"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620397622Z" level=info msg="Start event monitor"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620421309Z" level=info msg="Start snapshots syncer"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620435001Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:46:51.644744 env[1441]: time="2024-12-13T01:46:51.620445495Z" level=info msg="Start streaming server"
Dec 13 01:46:51.621090 dbus-daemon[1414]: [system] SELinux support is enabled
Dec 13 01:46:51.618224 systemd[1]: Started containerd.service.
Dec 13 01:46:51.621482 systemd[1]: Started dbus.service.
Dec 13 01:46:51.626171 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:46:51.626201 systemd[1]: Reached target system-config.target.
Dec 13 01:46:51.629439 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:46:51.629460 systemd[1]: Reached target user-config.target.
Dec 13 01:46:51.633131 systemd[1]: Started systemd-logind.service.
Dec 13 01:46:51.685707 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 01:46:52.124911 update_engine[1431]: I1213 01:46:52.123559 1431 main.cc:92] Flatcar Update Engine starting
Dec 13 01:46:52.162969 systemd[1]: Started update-engine.service.
Dec 13 01:46:52.168426 systemd[1]: Started locksmithd.service.
Dec 13 01:46:52.170954 update_engine[1431]: I1213 01:46:52.170919 1431 update_check_scheduler.cc:74] Next update check in 3m14s
Dec 13 01:46:52.297538 tar[1439]: linux-amd64/LICENSE
Dec 13 01:46:52.297711 tar[1439]: linux-amd64/README.md
Dec 13 01:46:52.304008 systemd[1]: Finished prepare-helm.service.
Dec 13 01:46:52.841718 systemd[1]: Started kubelet.service.
Dec 13 01:46:53.073552 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:46:53.100009 systemd[1]: Finished sshd-keygen.service.
Dec 13 01:46:53.104287 systemd[1]: Starting issuegen.service...
Dec 13 01:46:53.107834 systemd[1]: Started waagent.service.
Dec 13 01:46:53.118868 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:46:53.119070 systemd[1]: Finished issuegen.service.
Dec 13 01:46:53.122850 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 01:46:53.132074 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 01:46:53.136377 systemd[1]: Started getty@tty1.service.
Dec 13 01:46:53.140354 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 01:46:53.143656 systemd[1]: Reached target getty.target.
Dec 13 01:46:53.146710 systemd[1]: Reached target multi-user.target.
Dec 13 01:46:53.151464 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 01:46:53.163097 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 01:46:53.163235 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 01:46:53.165944 systemd[1]: Startup finished in 720ms (firmware) + 24.378s (loader) + 980ms (kernel) + 13.439s (initrd) + 23.144s (userspace) = 1min 2.663s.
Dec 13 01:46:53.430882 login[1543]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:46:53.433227 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:46:53.455682 systemd[1]: Created slice user-500.slice.
Dec 13 01:46:53.457214 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 01:46:53.462684 systemd-logind[1428]: New session 1 of user core.
Dec 13 01:46:53.466910 systemd-logind[1428]: New session 2 of user core.
Dec 13 01:46:53.472035 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 01:46:53.473754 systemd[1]: Starting user@500.service...
Dec 13 01:46:53.489858 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:46:53.700481 systemd[1547]: Queued start job for default target default.target.
Dec 13 01:46:53.701165 systemd[1547]: Reached target paths.target.
Dec 13 01:46:53.701199 systemd[1547]: Reached target sockets.target.
Dec 13 01:46:53.701219 systemd[1547]: Reached target timers.target.
Dec 13 01:46:53.701235 systemd[1547]: Reached target basic.target.
Dec 13 01:46:53.701356 systemd[1]: Started user@500.service.
Dec 13 01:46:53.702573 systemd[1]: Started session-1.scope.
Dec 13 01:46:53.703409 systemd[1]: Started session-2.scope.
Dec 13 01:46:53.704618 systemd[1547]: Reached target default.target.
Dec 13 01:46:53.704682 systemd[1547]: Startup finished in 205ms.
Dec 13 01:46:53.714953 kubelet[1524]: E1213 01:46:53.714864 1524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:46:53.718142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:46:53.718296 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:46:53.718558 systemd[1]: kubelet.service: Consumed 1.116s CPU time.
Dec 13 01:46:53.731585 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:46:59.582119 waagent[1538]: 2024-12-13T01:46:59.581994Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Dec 13 01:46:59.586035 waagent[1538]: 2024-12-13T01:46:59.585953Z INFO Daemon Daemon OS: flatcar 3510.3.6
Dec 13 01:46:59.588705 waagent[1538]: 2024-12-13T01:46:59.588638Z INFO Daemon Daemon Python: 3.9.16
Dec 13 01:46:59.591243 waagent[1538]: 2024-12-13T01:46:59.591169Z INFO Daemon Daemon Run daemon
Dec 13 01:46:59.593696 waagent[1538]: 2024-12-13T01:46:59.593633Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6'
Dec 13 01:46:59.606809 waagent[1538]: 2024-12-13T01:46:59.606699Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:46:59.613570 waagent[1538]: 2024-12-13T01:46:59.613463Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.614755Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.615529Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.616917Z INFO Daemon Daemon Activate resource disk
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.617699Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.625690Z INFO Daemon Daemon Found device: None
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.626906Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.627739Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.629408Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.630412Z INFO Daemon Daemon Running default provisioning handler
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.639957Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.642582Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.643527Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:46:59.656377 waagent[1538]: 2024-12-13T01:46:59.644347Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 01:46:59.753206 waagent[1538]: 2024-12-13T01:46:59.749021Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 01:46:59.821543 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 01:46:59.842411 waagent[1538]: 2024-12-13T01:46:59.842221Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 01:46:59.857495 waagent[1538]: 2024-12-13T01:46:59.843826Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:46:59.857495 waagent[1538]: 2024-12-13T01:46:59.845097Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 01:46:59.857495 waagent[1538]: 2024-12-13T01:46:59.846040Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 01:46:59.857495 waagent[1538]: 2024-12-13T01:46:59.847185Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 01:46:59.857495 waagent[1538]: 2024-12-13T01:46:59.847839Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 01:47:00.045479 waagent[1538]: 2024-12-13T01:47:00.045384Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 01:47:00.051432 waagent[1538]: 2024-12-13T01:47:00.051372Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 01:47:00.054780 waagent[1538]: 2024-12-13T01:47:00.054702Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 01:47:00.677665 waagent[1538]: 2024-12-13T01:47:00.677510Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 01:47:00.687715 waagent[1538]: 2024-12-13T01:47:00.687638Z INFO Daemon Daemon Forcing an update of the goal state..
Dec 13 01:47:00.692680 waagent[1538]: 2024-12-13T01:47:00.688764Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Dec 13 01:47:00.766425 waagent[1538]: 2024-12-13T01:47:00.766294Z INFO Daemon Daemon Found private key matching thumbprint AB872016C03CCF4EB1D82BA5E5EA7851FA21F4E3
Dec 13 01:47:00.776697 waagent[1538]: 2024-12-13T01:47:00.767755Z INFO Daemon Daemon Certificate with thumbprint 1354B9DF1A7636F5309583753E5F153D5B29286C has no matching private key.
Dec 13 01:47:00.776697 waagent[1538]: 2024-12-13T01:47:00.768750Z INFO Daemon Daemon Fetch goal state completed
Dec 13 01:47:00.812633 waagent[1538]: 2024-12-13T01:47:00.812548Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 331950ce-43f2-4977-8b85-ceb8d43a8b9b New eTag: 328598177552531434]
Dec 13 01:47:00.821466 waagent[1538]: 2024-12-13T01:47:00.814546Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:47:00.828728 waagent[1538]: 2024-12-13T01:47:00.828646Z INFO Daemon Daemon Starting provisioning
Dec 13 01:47:00.835900 waagent[1538]: 2024-12-13T01:47:00.830003Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 01:47:00.835900 waagent[1538]: 2024-12-13T01:47:00.830923Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-d3376cd0d9]
Dec 13 01:47:00.847122 waagent[1538]: 2024-12-13T01:47:00.847004Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-d3376cd0d9]
Dec 13 01:47:00.855023 waagent[1538]: 2024-12-13T01:47:00.848600Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 01:47:00.855023 waagent[1538]: 2024-12-13T01:47:00.850168Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 01:47:00.863542 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Dec 13 01:47:00.863793 systemd[1]: Stopped systemd-networkd-wait-online.service.
Dec 13 01:47:00.863876 systemd[1]: Stopping systemd-networkd-wait-online.service...
Dec 13 01:47:00.864244 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:47:00.869932 systemd-networkd[1195]: eth0: DHCPv6 lease lost
Dec 13 01:47:00.871257 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:47:00.871446 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:47:00.873735 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:47:00.905576 systemd-networkd[1592]: enP11982s1: Link UP
Dec 13 01:47:00.905587 systemd-networkd[1592]: enP11982s1: Gained carrier
Dec 13 01:47:00.906839 systemd-networkd[1592]: eth0: Link UP
Dec 13 01:47:00.906849 systemd-networkd[1592]: eth0: Gained carrier
Dec 13 01:47:00.907362 systemd-networkd[1592]: lo: Link UP
Dec 13 01:47:00.907372 systemd-networkd[1592]: lo: Gained carrier
Dec 13 01:47:00.907674 systemd-networkd[1592]: eth0: Gained IPv6LL
Dec 13 01:47:00.907955 systemd-networkd[1592]: Enumeration completed
Dec 13 01:47:00.908064 systemd[1]: Started systemd-networkd.service.
Dec 13 01:47:00.908748 systemd-networkd[1592]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:47:00.910313 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 01:47:00.914464 waagent[1538]: 2024-12-13T01:47:00.914299Z INFO Daemon Daemon Create user account if not exists
Dec 13 01:47:00.917162 waagent[1538]: 2024-12-13T01:47:00.917090Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 01:47:00.918349 waagent[1538]: 2024-12-13T01:47:00.918292Z INFO Daemon Daemon Configure sudoer
Dec 13 01:47:00.923364 waagent[1538]: 2024-12-13T01:47:00.923294Z INFO Daemon Daemon Configure sshd
Dec 13 01:47:00.924539 waagent[1538]: 2024-12-13T01:47:00.924482Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 01:47:00.937972 systemd-networkd[1592]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:47:00.940730 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 01:47:02.064021 waagent[1538]: 2024-12-13T01:47:02.063923Z INFO Daemon Daemon Provisioning complete
Dec 13 01:47:02.079292 waagent[1538]: 2024-12-13T01:47:02.079215Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 01:47:02.086213 waagent[1538]: 2024-12-13T01:47:02.080475Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 01:47:02.086213 waagent[1538]: 2024-12-13T01:47:02.082281Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Dec 13 01:47:02.346882 waagent[1601]: 2024-12-13T01:47:02.346704Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Dec 13 01:47:02.388571 waagent[1601]: 2024-12-13T01:47:02.388469Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:02.388775 waagent[1601]: 2024-12-13T01:47:02.388708Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:02.402045 waagent[1601]: 2024-12-13T01:47:02.401976Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Dec 13 01:47:02.402209 waagent[1601]: 2024-12-13T01:47:02.402159Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Dec 13 01:47:02.468631 waagent[1601]: 2024-12-13T01:47:02.468504Z INFO ExtHandler ExtHandler Found private key matching thumbprint AB872016C03CCF4EB1D82BA5E5EA7851FA21F4E3
Dec 13 01:47:02.468857 waagent[1601]: 2024-12-13T01:47:02.468796Z INFO ExtHandler ExtHandler Certificate with thumbprint 1354B9DF1A7636F5309583753E5F153D5B29286C has no matching private key.
Dec 13 01:47:02.469124 waagent[1601]: 2024-12-13T01:47:02.469070Z INFO ExtHandler ExtHandler Fetch goal state completed
Dec 13 01:47:02.482734 waagent[1601]: 2024-12-13T01:47:02.482660Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: ee559d73-a799-4e42-b46c-77fdaf64bca0 New eTag: 328598177552531434]
Dec 13 01:47:02.483274 waagent[1601]: 2024-12-13T01:47:02.483215Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Dec 13 01:47:02.569320 waagent[1601]: 2024-12-13T01:47:02.569190Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:47:02.589917 waagent[1601]: 2024-12-13T01:47:02.589803Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1601
Dec 13 01:47:02.593332 waagent[1601]: 2024-12-13T01:47:02.593262Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:47:02.594550 waagent[1601]: 2024-12-13T01:47:02.594492Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:47:02.696534 waagent[1601]: 2024-12-13T01:47:02.696416Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:47:02.696907 waagent[1601]: 2024-12-13T01:47:02.696831Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:47:02.704906 waagent[1601]: 2024-12-13T01:47:02.704836Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:47:02.705375 waagent[1601]: 2024-12-13T01:47:02.705316Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:47:02.706421 waagent[1601]: 2024-12-13T01:47:02.706353Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Dec 13 01:47:02.707709 waagent[1601]: 2024-12-13T01:47:02.707650Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:47:02.708329 waagent[1601]: 2024-12-13T01:47:02.708272Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:47:02.708965 waagent[1601]: 2024-12-13T01:47:02.708900Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:47:02.709139 waagent[1601]: 2024-12-13T01:47:02.709088Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:02.709337 waagent[1601]: 2024-12-13T01:47:02.709289Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:47:02.709547 waagent[1601]: 2024-12-13T01:47:02.709501Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:02.710285 waagent[1601]: 2024-12-13T01:47:02.710228Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:47:02.710591 waagent[1601]: 2024-12-13T01:47:02.710539Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:02.710813 waagent[1601]: 2024-12-13T01:47:02.710760Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:47:02.711007 waagent[1601]: 2024-12-13T01:47:02.710943Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:47:02.711340 waagent[1601]: 2024-12-13T01:47:02.711289Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:47:02.711500 waagent[1601]: 2024-12-13T01:47:02.711434Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:02.711819 waagent[1601]: 2024-12-13T01:47:02.711768Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:47:02.712011 waagent[1601]: 2024-12-13T01:47:02.711963Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:47:02.713415 waagent[1601]: 2024-12-13T01:47:02.713357Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:47:02.717647 waagent[1601]: 2024-12-13T01:47:02.717544Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:47:02.717647 waagent[1601]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 01:47:02.717647 waagent[1601]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 01:47:02.717647 waagent[1601]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 01:47:02.717647 waagent[1601]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:02.717647 waagent[1601]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:02.717647 waagent[1601]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:02.737324 waagent[1601]: 2024-12-13T01:47:02.737264Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Dec 13 01:47:02.737991 waagent[1601]: 2024-12-13T01:47:02.737949Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:47:02.738842 waagent[1601]: 2024-12-13T01:47:02.738796Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Dec 13 01:47:02.760286 waagent[1601]: 2024-12-13T01:47:02.760212Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1592'
Dec 13 01:47:02.780810 waagent[1601]: 2024-12-13T01:47:02.780735Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Dec 13 01:47:02.845398 waagent[1601]: 2024-12-13T01:47:02.845284Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:47:02.845398 waagent[1601]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:47:02.845398 waagent[1601]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:47:02.845398 waagent[1601]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:d6:1a brd ff:ff:ff:ff:ff:ff
Dec 13 01:47:02.845398 waagent[1601]: 3: enP11982s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:d6:1a brd ff:ff:ff:ff:ff:ff\ altname enP11982p0s2
Dec 13 01:47:02.845398 waagent[1601]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:47:02.845398 waagent[1601]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 01:47:02.845398 waagent[1601]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 01:47:02.845398 waagent[1601]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:47:02.845398 waagent[1601]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 01:47:02.845398 waagent[1601]: 2: eth0 inet6 fe80::6245:bdff:fedd:d61a/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 01:47:03.059621 waagent[1601]: 2024-12-13T01:47:03.059492Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting
Dec 13 01:47:03.969184
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:47:03.969430 systemd[1]: Stopped kubelet.service. Dec 13 01:47:03.969488 systemd[1]: kubelet.service: Consumed 1.116s CPU time. Dec 13 01:47:03.973032 systemd[1]: Starting kubelet.service... Dec 13 01:47:04.052916 systemd[1]: Started kubelet.service. Dec 13 01:47:04.087948 waagent[1538]: 2024-12-13T01:47:04.087062Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 01:47:04.092352 waagent[1538]: 2024-12-13T01:47:04.092296Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 01:47:04.661101 kubelet[1641]: E1213 01:47:04.661045 1641 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:47:04.666153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:47:04.666312 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
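Editor's note: the EnvHandler error above, "invalid literal for int() with base 10: 'MainPID=1592'", is what Python's `int()` raises when fed the whole `MainPID=1592` line that `systemctl show -p MainPID <unit>` prints, instead of just the value after the `=`. A minimal sketch of the parsing this error implies; `parse_main_pid` is an illustrative name, not the agent's actual code:

```python
def parse_main_pid(systemctl_output: str) -> int:
    """Parse a line like 'MainPID=1592' from `systemctl show -p MainPID <unit>`.

    int('MainPID=1592') raises ValueError -- the 'MainPID=' prefix must be
    split off before converting the numeric part.
    """
    key, _, value = systemctl_output.strip().partition("=")
    if key != "MainPID":
        raise ValueError(f"unexpected systemctl output: {systemctl_output!r}")
    return int(value)
```

With that split in place, `parse_main_pid("MainPID=1592")` yields the integer PID `1592` rather than raising.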
Dec 13 01:47:05.615438 waagent[1647]: 2024-12-13T01:47:05.615325Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2)
Dec 13 01:47:05.616164 waagent[1647]: 2024-12-13T01:47:05.616096Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6
Dec 13 01:47:05.616306 waagent[1647]: 2024-12-13T01:47:05.616253Z INFO ExtHandler ExtHandler Python: 3.9.16
Dec 13 01:47:05.616448 waagent[1647]: 2024-12-13T01:47:05.616402Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Dec 13 01:47:05.626196 waagent[1647]: 2024-12-13T01:47:05.626092Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:47:05.626582 waagent[1647]: 2024-12-13T01:47:05.626525Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:05.626746 waagent[1647]: 2024-12-13T01:47:05.626695Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:05.638690 waagent[1647]: 2024-12-13T01:47:05.638613Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 01:47:05.650801 waagent[1647]: 2024-12-13T01:47:05.650738Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 01:47:05.651739 waagent[1647]: 2024-12-13T01:47:05.651675Z INFO ExtHandler
Dec 13 01:47:05.651898 waagent[1647]: 2024-12-13T01:47:05.651840Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 72d6f673-3682-4c2c-844d-ce53572054a1 eTag: 328598177552531434 source: Fabric]
Dec 13 01:47:05.652602 waagent[1647]: 2024-12-13T01:47:05.652545Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 01:47:05.653689 waagent[1647]: 2024-12-13T01:47:05.653628Z INFO ExtHandler
Dec 13 01:47:05.653823 waagent[1647]: 2024-12-13T01:47:05.653775Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 01:47:05.660769 waagent[1647]: 2024-12-13T01:47:05.660714Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 01:47:05.661213 waagent[1647]: 2024-12-13T01:47:05.661165Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Dec 13 01:47:05.682583 waagent[1647]: 2024-12-13T01:47:05.682499Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Dec 13 01:47:05.752054 waagent[1647]: 2024-12-13T01:47:05.751922Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AB872016C03CCF4EB1D82BA5E5EA7851FA21F4E3', 'hasPrivateKey': True}
Dec 13 01:47:05.753185 waagent[1647]: 2024-12-13T01:47:05.753114Z INFO ExtHandler Downloaded certificate {'thumbprint': '1354B9DF1A7636F5309583753E5F153D5B29286C', 'hasPrivateKey': False}
Dec 13 01:47:05.754164 waagent[1647]: 2024-12-13T01:47:05.754103Z INFO ExtHandler Fetch goal state completed
Dec 13 01:47:05.773272 waagent[1647]: 2024-12-13T01:47:05.773173Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
Dec 13 01:47:05.784710 waagent[1647]: 2024-12-13T01:47:05.784609Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1647
Dec 13 01:47:05.787783 waagent[1647]: 2024-12-13T01:47:05.787711Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:47:05.788799 waagent[1647]: 2024-12-13T01:47:05.788738Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 13 01:47:05.789107 waagent[1647]: 2024-12-13T01:47:05.789051Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 13 01:47:05.791109 waagent[1647]: 2024-12-13T01:47:05.791051Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:47:05.796136 waagent[1647]: 2024-12-13T01:47:05.796081Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:47:05.796498 waagent[1647]: 2024-12-13T01:47:05.796442Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:47:05.804689 waagent[1647]: 2024-12-13T01:47:05.804630Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:47:05.805182 waagent[1647]: 2024-12-13T01:47:05.805126Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Dec 13 01:47:05.811230 waagent[1647]: 2024-12-13T01:47:05.811135Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 01:47:05.812281 waagent[1647]: 2024-12-13T01:47:05.812215Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 13 01:47:05.814117 waagent[1647]: 2024-12-13T01:47:05.814055Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:47:05.814521 waagent[1647]: 2024-12-13T01:47:05.814464Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:05.814682 waagent[1647]: 2024-12-13T01:47:05.814633Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:05.815273 waagent[1647]: 2024-12-13T01:47:05.815215Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:47:05.815551 waagent[1647]: 2024-12-13T01:47:05.815498Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 13 01:47:05.815551 waagent[1647]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 13 01:47:05.815551 waagent[1647]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Dec 13 01:47:05.815551 waagent[1647]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 13 01:47:05.815551 waagent[1647]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:05.815551 waagent[1647]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:05.815551 waagent[1647]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 13 01:47:05.819267 waagent[1647]: 2024-12-13T01:47:05.819123Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
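Editor's note: the Destination/Gateway/Mask columns in the `/proc/net/route` dump above are 32-bit values printed as hex in host byte order, so on a little-endian x86_64 guest like this one they read byte-reversed: `0108C80A` is the gateway 10.200.8.1, `10813FA8` is the Azure WireServer 168.63.129.16, and `FEA9FEA9` is the IMDS endpoint 169.254.169.254. A small self-contained decoder (illustrative, not part of the agent):

```python
import socket
import struct

def decode_route_hex(hex_addr: str) -> str:
    """Convert a /proc/net/route hex field (little-endian host order,
    as on x86_64) to a dotted-quad IPv4 string."""
    # "<I" packs the integer least-significant byte first, undoing the
    # byte reversal introduced when the kernel printed the value.
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Destinations from the routing table above:
for field in ("0108C80A", "00FFFFFF", "10813FA8", "FEA9FEA9"):
    print(field, "->", decode_route_hex(field))
```

Decoded this way, the last two host routes are the WireServer and IMDS addresses that the agent's firewall rules later reference.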
Dec 13 01:47:05.819454 waagent[1647]: 2024-12-13T01:47:05.819398Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:47:05.819734 waagent[1647]: 2024-12-13T01:47:05.819683Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:47:05.821284 waagent[1647]: 2024-12-13T01:47:05.821222Z INFO EnvHandler ExtHandler Configure routes
Dec 13 01:47:05.821434 waagent[1647]: 2024-12-13T01:47:05.821384Z INFO EnvHandler ExtHandler Gateway:None
Dec 13 01:47:05.821569 waagent[1647]: 2024-12-13T01:47:05.821524Z INFO EnvHandler ExtHandler Routes:None
Dec 13 01:47:05.822213 waagent[1647]: 2024-12-13T01:47:05.822152Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:47:05.824841 waagent[1647]: 2024-12-13T01:47:05.824545Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:47:05.826273 waagent[1647]: 2024-12-13T01:47:05.826192Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 13 01:47:05.826371 waagent[1647]: 2024-12-13T01:47:05.826306Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 13 01:47:05.831023 waagent[1647]: 2024-12-13T01:47:05.830812Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 13 01:47:05.845862 waagent[1647]: 2024-12-13T01:47:05.845789Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 13 01:47:05.845862 waagent[1647]: Executing ['ip', '-a', '-o', 'link']:
Dec 13 01:47:05.845862 waagent[1647]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 13 01:47:05.845862 waagent[1647]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:d6:1a brd ff:ff:ff:ff:ff:ff
Dec 13 01:47:05.845862 waagent[1647]: 3: enP11982s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:d6:1a brd ff:ff:ff:ff:ff:ff\ altname enP11982p0s2
Dec 13 01:47:05.845862 waagent[1647]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 13 01:47:05.845862 waagent[1647]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 13 01:47:05.845862 waagent[1647]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 13 01:47:05.845862 waagent[1647]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 13 01:47:05.845862 waagent[1647]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Dec 13 01:47:05.845862 waagent[1647]: 2: eth0 inet6 fe80::6245:bdff:fedd:d61a/64 scope link \ valid_lft forever preferred_lft forever
Dec 13 01:47:05.846566 waagent[1647]: 2024-12-13T01:47:05.846508Z INFO ExtHandler ExtHandler Downloading agent manifest
Dec 13 01:47:05.893280 waagent[1647]: 2024-12-13T01:47:05.893137Z INFO ExtHandler ExtHandler
Dec 13 01:47:05.893445 waagent[1647]: 2024-12-13T01:47:05.893371Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ef34e33b-d236-4365-82d1-aa072a2184de correlation 66b3ba67-6b9f-41c4-aac6-f292ab3c1e17 created: 2024-12-13T01:45:38.489220Z]
Dec 13 01:47:05.894517 waagent[1647]: 2024-12-13T01:47:05.894451Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 13 01:47:05.896443 waagent[1647]: 2024-12-13T01:47:05.896383Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Dec 13 01:47:05.930202 waagent[1647]: 2024-12-13T01:47:05.930117Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Dec 13 01:47:05.951542 waagent[1647]: 2024-12-13T01:47:05.951402Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0862D3BC-642E-4FD9-A6E7-796A14705F94;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Dec 13 01:47:05.984816 waagent[1647]: 2024-12-13T01:47:05.984694Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Dec 13 01:47:05.984816 waagent[1647]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:47:05.984816 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.984816 waagent[1647]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:47:05.984816 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.984816 waagent[1647]: Chain OUTPUT (policy ACCEPT 3 packets, 156 bytes)
Dec 13 01:47:05.984816 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.984816 waagent[1647]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 01:47:05.984816 waagent[1647]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 01:47:05.984816 waagent[1647]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 01:47:05.992346 waagent[1647]: 2024-12-13T01:47:05.992234Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 13 01:47:05.992346 waagent[1647]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:47:05.992346 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.992346 waagent[1647]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 13 01:47:05.992346 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.992346 waagent[1647]: Chain OUTPUT (policy ACCEPT 3 packets, 156 bytes)
Dec 13 01:47:05.992346 waagent[1647]: pkts bytes target prot opt in out source destination
Dec 13 01:47:05.992346 waagent[1647]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 13 01:47:05.992346 waagent[1647]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 13 01:47:05.992346 waagent[1647]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 13 01:47:05.992922 waagent[1647]: 2024-12-13T01:47:05.992855Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 13 01:47:14.787309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:47:14.787615 systemd[1]: Stopped kubelet.service.
Dec 13 01:47:14.789653 systemd[1]: Starting kubelet.service...
Dec 13 01:47:14.868364 systemd[1]: Started kubelet.service.
Dec 13 01:47:14.913916 kubelet[1699]: E1213 01:47:14.913854 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:14.915871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:14.916043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:25.037398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:47:25.037718 systemd[1]: Stopped kubelet.service.
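Editor's note: the "Created firewall rules for the Azure Fabric" iptables listing above permits DNS (tcp dpt:53) and root-owned (UID 0) TCP traffic to the WireServer at 168.63.129.16, and drops any other NEW connection to it. A hypothetical reconstruction of those three OUTPUT-chain rules as iptables argument vectors; the names `WIRESERVER` and `FIREWALL_RULES` are illustrative, not the agent's actual code:

```python
# Azure WireServer address, as shown in the log's iptables output.
WIRESERVER = "168.63.129.16"

FIREWALL_RULES = [
    # Allow DNS queries to the WireServer (matches "tcp dpt:53").
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--destination-port", "53", "-j", "ACCEPT"],
    # Allow traffic owned by root (matches "owner UID match 0");
    # the agent itself runs as UID 0.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop any other new or invalid connection to the WireServer
    # (matches "ctstate INVALID,NEW").
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
```

Rule order matters here: the two ACCEPT rules must precede the DROP so that DNS and the root-owned agent traffic are matched before everything else is blocked.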
Dec 13 01:47:25.039754 systemd[1]: Starting kubelet.service...
Dec 13 01:47:25.372371 systemd[1]: Started kubelet.service.
Dec 13 01:47:25.651852 kubelet[1709]: E1213 01:47:25.651732 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:25.653653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:25.653811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:31.988247 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Dec 13 01:47:35.787341 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:47:35.787651 systemd[1]: Stopped kubelet.service.
Dec 13 01:47:35.789510 systemd[1]: Starting kubelet.service...
Dec 13 01:47:35.947743 systemd[1]: Started kubelet.service.
Dec 13 01:47:36.398639 kubelet[1719]: E1213 01:47:36.398580 1719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:36.400561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:36.400718 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:37.630740 update_engine[1431]: I1213 01:47:37.630681 1431 update_attempter.cc:509] Updating boot flags...
Dec 13 01:47:45.813145 systemd[1]: Created slice system-sshd.slice.
Dec 13 01:47:45.814955 systemd[1]: Started sshd@0-10.200.8.37:22-10.200.16.10:47278.service.
Dec 13 01:47:46.463620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:47:46.463964 systemd[1]: Stopped kubelet.service.
Dec 13 01:47:46.465711 systemd[1]: Starting kubelet.service...
Dec 13 01:47:46.573422 sshd[1768]: Accepted publickey for core from 10.200.16.10 port 47278 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:47:46.575029 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:46.579641 systemd-logind[1428]: New session 3 of user core.
Dec 13 01:47:46.580271 systemd[1]: Started session-3.scope.
Dec 13 01:47:46.803215 systemd[1]: Started kubelet.service.
Dec 13 01:47:46.847184 kubelet[1775]: E1213 01:47:46.847129 1775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:46.849020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:46.849191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:47.119413 systemd[1]: Started sshd@1-10.200.8.37:22-10.200.16.10:47290.service.
Dec 13 01:47:47.743735 sshd[1783]: Accepted publickey for core from 10.200.16.10 port 47290 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:47:47.745185 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:47.749846 systemd[1]: Started session-4.scope.
Dec 13 01:47:47.750464 systemd-logind[1428]: New session 4 of user core.
Dec 13 01:47:48.190988 sshd[1783]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:48.194379 systemd[1]: sshd@1-10.200.8.37:22-10.200.16.10:47290.service: Deactivated successfully.
Dec 13 01:47:48.195426 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:47:48.196211 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:47:48.197146 systemd-logind[1428]: Removed session 4.
Dec 13 01:47:48.295272 systemd[1]: Started sshd@2-10.200.8.37:22-10.200.16.10:47296.service.
Dec 13 01:47:48.920070 sshd[1789]: Accepted publickey for core from 10.200.16.10 port 47296 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:47:48.921726 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:48.927364 systemd[1]: Started session-5.scope.
Dec 13 01:47:48.927808 systemd-logind[1428]: New session 5 of user core.
Dec 13 01:47:49.366515 sshd[1789]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:49.369345 systemd[1]: sshd@2-10.200.8.37:22-10.200.16.10:47296.service: Deactivated successfully.
Dec 13 01:47:49.370209 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:47:49.370860 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:47:49.371655 systemd-logind[1428]: Removed session 5.
Dec 13 01:47:49.470389 systemd[1]: Started sshd@3-10.200.8.37:22-10.200.16.10:50586.service.
Dec 13 01:47:50.095297 sshd[1795]: Accepted publickey for core from 10.200.16.10 port 50586 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:47:50.097018 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:50.102675 systemd[1]: Started session-6.scope.
Dec 13 01:47:50.103424 systemd-logind[1428]: New session 6 of user core.
Dec 13 01:47:50.544209 sshd[1795]: pam_unix(sshd:session): session closed for user core
Dec 13 01:47:50.547115 systemd[1]: sshd@3-10.200.8.37:22-10.200.16.10:50586.service: Deactivated successfully.
Dec 13 01:47:50.547938 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:47:50.548546 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:47:50.549276 systemd-logind[1428]: Removed session 6.
Dec 13 01:47:50.648255 systemd[1]: Started sshd@4-10.200.8.37:22-10.200.16.10:50598.service.
Dec 13 01:47:51.275613 sshd[1801]: Accepted publickey for core from 10.200.16.10 port 50598 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:47:51.277343 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:47:51.282938 systemd[1]: Started session-7.scope.
Dec 13 01:47:51.283521 systemd-logind[1428]: New session 7 of user core.
Dec 13 01:47:51.820977 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:47:51.821290 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 01:47:51.855612 systemd[1]: Starting docker.service...
Dec 13 01:47:51.903334 env[1814]: time="2024-12-13T01:47:51.903292002Z" level=info msg="Starting up"
Dec 13 01:47:51.904773 env[1814]: time="2024-12-13T01:47:51.904736393Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 01:47:51.904773 env[1814]: time="2024-12-13T01:47:51.904756993Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 01:47:51.904957 env[1814]: time="2024-12-13T01:47:51.904778193Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 01:47:51.904957 env[1814]: time="2024-12-13T01:47:51.904791193Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 01:47:51.906338 env[1814]: time="2024-12-13T01:47:51.906317384Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 01:47:51.906429 env[1814]: time="2024-12-13T01:47:51.906419184Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 01:47:51.906478 env[1814]: time="2024-12-13T01:47:51.906468683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 01:47:51.906566 env[1814]: time="2024-12-13T01:47:51.906557683Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 01:47:51.913400 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4181154078-merged.mount: Deactivated successfully.
Dec 13 01:47:52.000779 env[1814]: time="2024-12-13T01:47:52.000728036Z" level=info msg="Loading containers: start."
Dec 13 01:47:52.130911 kernel: Initializing XFRM netlink socket
Dec 13 01:47:52.160772 env[1814]: time="2024-12-13T01:47:52.160729766Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 01:47:52.273711 systemd-networkd[1592]: docker0: Link UP
Dec 13 01:47:52.312102 env[1814]: time="2024-12-13T01:47:52.312053463Z" level=info msg="Loading containers: done."
Dec 13 01:47:52.324179 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1554619534-merged.mount: Deactivated successfully.
Dec 13 01:47:52.340829 env[1814]: time="2024-12-13T01:47:52.340780725Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:47:52.341064 env[1814]: time="2024-12-13T01:47:52.341009036Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 01:47:52.341144 env[1814]: time="2024-12-13T01:47:52.341124242Z" level=info msg="Daemon has completed initialization"
Dec 13 01:47:52.372214 systemd[1]: Started docker.service.
Dec 13 01:47:52.378942 env[1814]: time="2024-12-13T01:47:52.378863962Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:47:56.860982 env[1441]: time="2024-12-13T01:47:56.860917877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:47:57.037165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:47:57.037404 systemd[1]: Stopped kubelet.service.
Dec 13 01:47:57.039433 systemd[1]: Starting kubelet.service...
Dec 13 01:47:57.205627 systemd[1]: Started kubelet.service.
Dec 13 01:47:57.252499 kubelet[1938]: E1213 01:47:57.252444 1938 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:47:57.254361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:47:57.254519 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:47:58.036973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount308597707.mount: Deactivated successfully.
Dec 13 01:48:00.412908 env[1441]: time="2024-12-13T01:48:00.412835388Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:00.419464 env[1441]: time="2024-12-13T01:48:00.419421256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:00.435823 env[1441]: time="2024-12-13T01:48:00.435772022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:00.441152 env[1441]: time="2024-12-13T01:48:00.441107039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:00.441957 env[1441]: time="2024-12-13T01:48:00.441919172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:48:00.452350 env[1441]: time="2024-12-13T01:48:00.452308795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:48:02.657299 env[1441]: time="2024-12-13T01:48:02.657242683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:02.663486 env[1441]: time="2024-12-13T01:48:02.663447422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:02.667876 env[1441]: time="2024-12-13T01:48:02.667819991Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:02.673161 env[1441]: time="2024-12-13T01:48:02.673120795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:02.674121 env[1441]: time="2024-12-13T01:48:02.674085932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:48:02.685023 env[1441]: time="2024-12-13T01:48:02.684991853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:48:04.318372 env[1441]: time="2024-12-13T01:48:04.318303276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:04.324972 env[1441]: time="2024-12-13T01:48:04.324928018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:04.333236 env[1441]: time="2024-12-13T01:48:04.333192320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:04.338454 env[1441]: time="2024-12-13T01:48:04.338418911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:04.339076 env[1441]: time="2024-12-13T01:48:04.339040934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:48:04.349321 env[1441]: time="2024-12-13T01:48:04.349285208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:48:05.567520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119065924.mount: Deactivated successfully.
Dec 13 01:48:06.141093 env[1441]: time="2024-12-13T01:48:06.141034852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:06.145542 env[1441]: time="2024-12-13T01:48:06.145502607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:06.149766 env[1441]: time="2024-12-13T01:48:06.149728453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:06.154035 env[1441]: time="2024-12-13T01:48:06.154002201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:48:06.154471 env[1441]: time="2024-12-13T01:48:06.154440917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:48:06.164532 env[1441]: time="2024-12-13T01:48:06.164503365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:48:06.704272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58270757.mount: Deactivated successfully.
Dec 13 01:48:07.287173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 01:48:07.287423 systemd[1]: Stopped kubelet.service.
Dec 13 01:48:07.289240 systemd[1]: Starting kubelet.service...
Dec 13 01:48:07.369376 systemd[1]: Started kubelet.service.
Dec 13 01:48:07.413507 kubelet[1970]: E1213 01:48:07.413451 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:48:07.415767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:48:07.415878 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:48:08.603987 env[1441]: time="2024-12-13T01:48:08.603931392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:08.612255 env[1441]: time="2024-12-13T01:48:08.612190263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:08.618252 env[1441]: time="2024-12-13T01:48:08.618214261Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:08.623535 env[1441]: time="2024-12-13T01:48:08.623502135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:08.624169 env[1441]: time="2024-12-13T01:48:08.624134756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:48:08.634209 env[1441]: time="2024-12-13T01:48:08.634178186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:48:09.174992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458154950.mount: Deactivated successfully. 
Dec 13 01:48:09.198256 env[1441]: time="2024-12-13T01:48:09.198203052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:09.208462 env[1441]: time="2024-12-13T01:48:09.208417879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:09.214865 env[1441]: time="2024-12-13T01:48:09.214827084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:09.220145 env[1441]: time="2024-12-13T01:48:09.220108453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:09.220744 env[1441]: time="2024-12-13T01:48:09.220711872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:48:09.231271 env[1441]: time="2024-12-13T01:48:09.231242210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:48:09.850867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050972338.mount: Deactivated successfully. 
Dec 13 01:48:13.096084 env[1441]: time="2024-12-13T01:48:13.095957476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:13.103074 env[1441]: time="2024-12-13T01:48:13.103014780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:13.107070 env[1441]: time="2024-12-13T01:48:13.107030896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:13.110517 env[1441]: time="2024-12-13T01:48:13.110481795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:13.111220 env[1441]: time="2024-12-13T01:48:13.111185216Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:48:16.438061 systemd[1]: Stopped kubelet.service. Dec 13 01:48:16.440687 systemd[1]: Starting kubelet.service... Dec 13 01:48:16.468767 systemd[1]: Reloading. 
Dec 13 01:48:16.572152 /usr/lib/systemd/system-generators/torcx-generator[2070]: time="2024-12-13T01:48:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:48:16.572189 /usr/lib/systemd/system-generators/torcx-generator[2070]: time="2024-12-13T01:48:16Z" level=info msg="torcx already run" Dec 13 01:48:16.665614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:48:16.665640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:48:16.688422 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:16.794511 systemd[1]: Started kubelet.service. Dec 13 01:48:16.797420 systemd[1]: Stopping kubelet.service... Dec 13 01:48:16.798114 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:48:16.798312 systemd[1]: Stopped kubelet.service. Dec 13 01:48:16.800179 systemd[1]: Starting kubelet.service... Dec 13 01:48:17.066706 systemd[1]: Started kubelet.service. Dec 13 01:48:17.699489 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:17.699489 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:48:17.699489 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:17.700166 kubelet[2139]: I1213 01:48:17.699701 2139 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:48:18.211227 kubelet[2139]: I1213 01:48:18.211187 2139 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:48:18.211227 kubelet[2139]: I1213 01:48:18.211217 2139 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:48:18.211513 kubelet[2139]: I1213 01:48:18.211488 2139 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:48:18.582426 kubelet[2139]: E1213 01:48:18.582391 2139 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:18.584175 kubelet[2139]: I1213 01:48:18.584145 2139 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:18.594454 kubelet[2139]: I1213 01:48:18.594433 2139 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:48:18.644721 kubelet[2139]: I1213 01:48:18.644656 2139 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:48:18.645809 kubelet[2139]: I1213 01:48:18.645767 2139 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:48:18.646521 kubelet[2139]: I1213 01:48:18.646463 2139 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:48:18.646608 kubelet[2139]: I1213 01:48:18.646532 2139 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:48:18.646716 kubelet[2139]: I1213 
01:48:18.646676 2139 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:18.646831 kubelet[2139]: I1213 01:48:18.646814 2139 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:48:18.646939 kubelet[2139]: I1213 01:48:18.646845 2139 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:48:18.646939 kubelet[2139]: I1213 01:48:18.646930 2139 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:48:18.647026 kubelet[2139]: I1213 01:48:18.646956 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:48:18.689802 kubelet[2139]: W1213 01:48:18.689737 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:18.689992 kubelet[2139]: E1213 01:48:18.689818 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:18.689992 kubelet[2139]: W1213 01:48:18.689922 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-d3376cd0d9&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:18.689992 kubelet[2139]: E1213 01:48:18.689967 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-d3376cd0d9&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:18.690133 kubelet[2139]: I1213 01:48:18.690081 2139 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:48:18.835380 kubelet[2139]: I1213 01:48:18.834781 2139 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:48:18.943965 kubelet[2139]: W1213 01:48:18.943868 2139 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:48:18.951273 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.944813 2139 server.go:1256] "Started kubelet" Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.946714 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.947495 2139 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.947553 2139 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.948559 2139 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:48:18.951385 kubelet[2139]: I1213 01:48:18.951355 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:48:18.958124 kubelet[2139]: I1213 01:48:18.958099 2139 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:48:18.958831 kubelet[2139]: I1213 01:48:18.958805 2139 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:48:18.958948 kubelet[2139]: I1213 01:48:18.958873 2139 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:48:19.542408 kubelet[2139]: W1213 01:48:19.542331 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.542408 kubelet[2139]: E1213 01:48:19.542411 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.542666 kubelet[2139]: E1213 01:48:19.542542 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-d3376cd0d9?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="200ms" Dec 13 01:48:19.548337 kubelet[2139]: I1213 01:48:19.548302 2139 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:48:19.548496 kubelet[2139]: I1213 01:48:19.548437 2139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:48:19.564875 kubelet[2139]: I1213 01:48:19.564851 2139 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:48:19.591187 kubelet[2139]: I1213 01:48:19.591154 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.592376 kubelet[2139]: E1213 01:48:19.592353 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.592852 kubelet[2139]: I1213 01:48:19.592835 2139 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:48:19.593195 kubelet[2139]: I1213 01:48:19.593181 2139 cpu_manager.go:215] 
"Reconciling" reconcilePeriod="10s" Dec 13 01:48:19.593300 kubelet[2139]: I1213 01:48:19.593289 2139 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:19.594528 kubelet[2139]: E1213 01:48:19.594500 2139 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-d3376cd0d9.181099611ec53a8b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-d3376cd0d9,UID:ci-3510.3.6-a-d3376cd0d9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-d3376cd0d9,},FirstTimestamp:2024-12-13 01:48:18.944760459 +0000 UTC m=+1.871689343,LastTimestamp:2024-12-13 01:48:18.944760459 +0000 UTC m=+1.871689343,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-d3376cd0d9,}" Dec 13 01:48:19.611528 kubelet[2139]: I1213 01:48:19.611508 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:48:19.612445 kubelet[2139]: I1213 01:48:19.612421 2139 policy_none.go:49] "None policy: Start" Dec 13 01:48:19.613386 kubelet[2139]: I1213 01:48:19.613370 2139 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:48:19.613512 kubelet[2139]: I1213 01:48:19.613501 2139 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:48:19.614006 kubelet[2139]: I1213 01:48:19.613989 2139 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:48:19.614296 kubelet[2139]: I1213 01:48:19.614277 2139 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:48:19.614418 kubelet[2139]: I1213 01:48:19.614406 2139 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:48:19.614579 kubelet[2139]: E1213 01:48:19.614560 2139 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:48:19.617249 kubelet[2139]: W1213 01:48:19.617203 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.617338 kubelet[2139]: E1213 01:48:19.617280 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.646619 systemd[1]: Created slice kubepods.slice. Dec 13 01:48:19.651553 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 01:48:19.654328 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 01:48:19.662593 kubelet[2139]: I1213 01:48:19.662561 2139 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:48:19.662836 kubelet[2139]: I1213 01:48:19.662816 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:48:19.666019 kubelet[2139]: E1213 01:48:19.665999 2139 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-d3376cd0d9\" not found" Dec 13 01:48:19.713334 kubelet[2139]: W1213 01:48:19.713294 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.713334 kubelet[2139]: E1213 01:48:19.713340 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.715597 kubelet[2139]: I1213 01:48:19.715573 2139 topology_manager.go:215] "Topology Admit Handler" podUID="5ba162e925ee1be13fbe2abf5535e20b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.717191 kubelet[2139]: I1213 01:48:19.717161 2139 topology_manager.go:215] "Topology Admit Handler" podUID="8164eb9916f500bebaeba07b160aa1fe" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.718932 kubelet[2139]: I1213 01:48:19.718903 2139 topology_manager.go:215] "Topology Admit Handler" podUID="8d63dd564bc2f087d6526b64d817069d" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.724866 systemd[1]: Created slice kubepods-burstable-pod5ba162e925ee1be13fbe2abf5535e20b.slice. 
Dec 13 01:48:19.740822 systemd[1]: Created slice kubepods-burstable-pod8d63dd564bc2f087d6526b64d817069d.slice. Dec 13 01:48:19.745268 systemd[1]: Created slice kubepods-burstable-pod8164eb9916f500bebaeba07b160aa1fe.slice. Dec 13 01:48:19.746315 kubelet[2139]: E1213 01:48:19.746290 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-d3376cd0d9?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="400ms" Dec 13 01:48:19.764590 kubelet[2139]: I1213 01:48:19.764559 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764590 kubelet[2139]: I1213 01:48:19.764602 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764773 kubelet[2139]: I1213 01:48:19.764633 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764773 kubelet[2139]: I1213 01:48:19.764659 2139 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764773 kubelet[2139]: I1213 01:48:19.764684 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8164eb9916f500bebaeba07b160aa1fe-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8164eb9916f500bebaeba07b160aa1fe\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764773 kubelet[2139]: I1213 01:48:19.764708 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764773 kubelet[2139]: I1213 01:48:19.764734 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764998 kubelet[2139]: I1213 01:48:19.764771 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.764998 kubelet[2139]: I1213 01:48:19.764818 2139 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.794410 kubelet[2139]: I1213 01:48:19.794321 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.795868 kubelet[2139]: E1213 01:48:19.795842 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:19.994592 kubelet[2139]: W1213 01:48:19.994549 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-d3376cd0d9&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:19.994592 kubelet[2139]: E1213 01:48:19.994600 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-d3376cd0d9&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:20.039729 env[1441]: time="2024-12-13T01:48:20.039667969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-d3376cd0d9,Uid:5ba162e925ee1be13fbe2abf5535e20b,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:20.045356 env[1441]: time="2024-12-13T01:48:20.045263204Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-d3376cd0d9,Uid:8d63dd564bc2f087d6526b64d817069d,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:20.048115 env[1441]: time="2024-12-13T01:48:20.048082573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-d3376cd0d9,Uid:8164eb9916f500bebaeba07b160aa1fe,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:20.147600 kubelet[2139]: E1213 01:48:20.147559 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-d3376cd0d9?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="800ms" Dec 13 01:48:20.197869 kubelet[2139]: I1213 01:48:20.197837 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:20.198254 kubelet[2139]: E1213 01:48:20.198226 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:20.402979 kubelet[2139]: W1213 01:48:20.402865 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:20.402979 kubelet[2139]: E1213 01:48:20.402924 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:20.729012 kubelet[2139]: E1213 01:48:20.728902 2139 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the 
control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:20.948418 kubelet[2139]: E1213 01:48:20.948376 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-d3376cd0d9?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="1.6s" Dec 13 01:48:21.000649 kubelet[2139]: I1213 01:48:21.000541 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:21.001345 kubelet[2139]: E1213 01:48:21.001313 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:21.132859 kubelet[2139]: W1213 01:48:21.132819 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:21.132859 kubelet[2139]: E1213 01:48:21.132863 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:21.634765 kubelet[2139]: W1213 01:48:21.634694 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:21.634765 kubelet[2139]: E1213 01:48:21.634769 2139 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:22.083537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2722721738.mount: Deactivated successfully. Dec 13 01:48:22.116783 env[1441]: time="2024-12-13T01:48:22.116720003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.119275 env[1441]: time="2024-12-13T01:48:22.119233361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.130850 env[1441]: time="2024-12-13T01:48:22.130812128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.134575 env[1441]: time="2024-12-13T01:48:22.134536714Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.140146 env[1441]: time="2024-12-13T01:48:22.140108643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.143447 env[1441]: time="2024-12-13T01:48:22.143411119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.148618 env[1441]: time="2024-12-13T01:48:22.148561438Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.151616 env[1441]: time="2024-12-13T01:48:22.151576908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.160645 env[1441]: time="2024-12-13T01:48:22.160606716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.164415 env[1441]: time="2024-12-13T01:48:22.164380603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.182844 env[1441]: time="2024-12-13T01:48:22.182802129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.193871 env[1441]: time="2024-12-13T01:48:22.193829884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:22.216503 kubelet[2139]: W1213 01:48:22.216448 2139 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:22.216827 kubelet[2139]: E1213 01:48:22.216510 2139 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Dec 13 01:48:22.242741 env[1441]: time="2024-12-13T01:48:22.242647711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:22.242999 env[1441]: time="2024-12-13T01:48:22.242963618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:22.243174 env[1441]: time="2024-12-13T01:48:22.243139623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:22.243546 env[1441]: time="2024-12-13T01:48:22.243491131Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97a8400f700a5812287abf522fd756d9a76635da6943f24e9a299615af1ab85b pid=2179 runtime=io.containerd.runc.v2 Dec 13 01:48:22.252746 env[1441]: time="2024-12-13T01:48:22.252675043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:22.252931 env[1441]: time="2024-12-13T01:48:22.252744944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:22.252931 env[1441]: time="2024-12-13T01:48:22.252759645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:22.253075 env[1441]: time="2024-12-13T01:48:22.252935849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3412451a5d688bf108decf166717d7662dd59d3302bb1bf6d645b3658749d5de pid=2196 runtime=io.containerd.runc.v2 Dec 13 01:48:22.271289 systemd[1]: Started cri-containerd-97a8400f700a5812287abf522fd756d9a76635da6943f24e9a299615af1ab85b.scope. Dec 13 01:48:22.287729 systemd[1]: Started cri-containerd-3412451a5d688bf108decf166717d7662dd59d3302bb1bf6d645b3658749d5de.scope. Dec 13 01:48:22.293763 env[1441]: time="2024-12-13T01:48:22.290780523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:22.293763 env[1441]: time="2024-12-13T01:48:22.290835724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:22.293763 env[1441]: time="2024-12-13T01:48:22.290866425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:22.294106 env[1441]: time="2024-12-13T01:48:22.291079530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbabc3cccae4fcc7d566446e24cb037a25e285a539249c95b67ca9bb683d150c pid=2240 runtime=io.containerd.runc.v2 Dec 13 01:48:22.318316 systemd[1]: Started cri-containerd-bbabc3cccae4fcc7d566446e24cb037a25e285a539249c95b67ca9bb683d150c.scope. 
Dec 13 01:48:22.365037 env[1441]: time="2024-12-13T01:48:22.364291421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-d3376cd0d9,Uid:5ba162e925ee1be13fbe2abf5535e20b,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a8400f700a5812287abf522fd756d9a76635da6943f24e9a299615af1ab85b\"" Dec 13 01:48:22.371932 env[1441]: time="2024-12-13T01:48:22.371875196Z" level=info msg="CreateContainer within sandbox \"97a8400f700a5812287abf522fd756d9a76635da6943f24e9a299615af1ab85b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:48:22.374267 env[1441]: time="2024-12-13T01:48:22.374189949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-d3376cd0d9,Uid:8d63dd564bc2f087d6526b64d817069d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3412451a5d688bf108decf166717d7662dd59d3302bb1bf6d645b3658749d5de\"" Dec 13 01:48:22.377215 env[1441]: time="2024-12-13T01:48:22.377187218Z" level=info msg="CreateContainer within sandbox \"3412451a5d688bf108decf166717d7662dd59d3302bb1bf6d645b3658749d5de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:48:22.398055 env[1441]: time="2024-12-13T01:48:22.398009599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-d3376cd0d9,Uid:8164eb9916f500bebaeba07b160aa1fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbabc3cccae4fcc7d566446e24cb037a25e285a539249c95b67ca9bb683d150c\"" Dec 13 01:48:22.400982 env[1441]: time="2024-12-13T01:48:22.400944667Z" level=info msg="CreateContainer within sandbox \"bbabc3cccae4fcc7d566446e24cb037a25e285a539249c95b67ca9bb683d150c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:48:22.435776 env[1441]: time="2024-12-13T01:48:22.435730871Z" level=info msg="CreateContainer within sandbox \"97a8400f700a5812287abf522fd756d9a76635da6943f24e9a299615af1ab85b\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d3706e85ed9e62fa926aa9783a709a607105f9dc557b735b2a4cf482a515e93\"" Dec 13 01:48:22.436405 env[1441]: time="2024-12-13T01:48:22.436373285Z" level=info msg="StartContainer for \"4d3706e85ed9e62fa926aa9783a709a607105f9dc557b735b2a4cf482a515e93\"" Dec 13 01:48:22.453342 env[1441]: time="2024-12-13T01:48:22.453295376Z" level=info msg="CreateContainer within sandbox \"3412451a5d688bf108decf166717d7662dd59d3302bb1bf6d645b3658749d5de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f934a240defc31f00af22d5244736b07a756e8116ae695539f04dd46417356ed\"" Dec 13 01:48:22.453546 systemd[1]: Started cri-containerd-4d3706e85ed9e62fa926aa9783a709a607105f9dc557b735b2a4cf482a515e93.scope. Dec 13 01:48:22.458827 env[1441]: time="2024-12-13T01:48:22.457926183Z" level=info msg="StartContainer for \"f934a240defc31f00af22d5244736b07a756e8116ae695539f04dd46417356ed\"" Dec 13 01:48:22.472989 env[1441]: time="2024-12-13T01:48:22.472943730Z" level=info msg="CreateContainer within sandbox \"bbabc3cccae4fcc7d566446e24cb037a25e285a539249c95b67ca9bb683d150c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0ba2d99077bb180fc33df51007830b650e8c51161e9f9a3887b8478db27c030\"" Dec 13 01:48:22.473698 env[1441]: time="2024-12-13T01:48:22.473667747Z" level=info msg="StartContainer for \"a0ba2d99077bb180fc33df51007830b650e8c51161e9f9a3887b8478db27c030\"" Dec 13 01:48:22.492072 systemd[1]: Started cri-containerd-f934a240defc31f00af22d5244736b07a756e8116ae695539f04dd46417356ed.scope. Dec 13 01:48:22.517553 systemd[1]: Started cri-containerd-a0ba2d99077bb180fc33df51007830b650e8c51161e9f9a3887b8478db27c030.scope. 
Dec 13 01:48:22.549365 kubelet[2139]: E1213 01:48:22.549333 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-d3376cd0d9?timeout=10s\": dial tcp 10.200.8.37:6443: connect: connection refused" interval="3.2s" Dec 13 01:48:22.551557 env[1441]: time="2024-12-13T01:48:22.551428743Z" level=info msg="StartContainer for \"4d3706e85ed9e62fa926aa9783a709a607105f9dc557b735b2a4cf482a515e93\" returns successfully" Dec 13 01:48:22.580011 env[1441]: time="2024-12-13T01:48:22.579938401Z" level=info msg="StartContainer for \"f934a240defc31f00af22d5244736b07a756e8116ae695539f04dd46417356ed\" returns successfully" Dec 13 01:48:22.604229 kubelet[2139]: I1213 01:48:22.603780 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:22.604229 kubelet[2139]: E1213 01:48:22.604203 2139 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:22.655752 env[1441]: time="2024-12-13T01:48:22.655633450Z" level=info msg="StartContainer for \"a0ba2d99077bb180fc33df51007830b650e8c51161e9f9a3887b8478db27c030\" returns successfully" Dec 13 01:48:24.690557 kubelet[2139]: I1213 01:48:24.690523 2139 apiserver.go:52] "Watching apiserver" Dec 13 01:48:24.759125 kubelet[2139]: I1213 01:48:24.759081 2139 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:48:24.947027 kubelet[2139]: E1213 01:48:24.946915 2139 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-d3376cd0d9" not found Dec 13 01:48:25.314576 kubelet[2139]: E1213 01:48:25.314530 2139 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode 
annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-d3376cd0d9" not found Dec 13 01:48:25.746127 kubelet[2139]: E1213 01:48:25.746009 2139 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.6-a-d3376cd0d9" not found Dec 13 01:48:25.760936 kubelet[2139]: E1213 01:48:25.760906 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-d3376cd0d9\" not found" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:25.807065 kubelet[2139]: I1213 01:48:25.807032 2139 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:25.818190 kubelet[2139]: I1213 01:48:25.818150 2139 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:27.325050 systemd[1]: Reloading. Dec 13 01:48:27.433245 /usr/lib/systemd/system-generators/torcx-generator[2430]: time="2024-12-13T01:48:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:48:27.433763 /usr/lib/systemd/system-generators/torcx-generator[2430]: time="2024-12-13T01:48:27Z" level=info msg="torcx already run" Dec 13 01:48:27.529213 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:48:27.529233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 01:48:27.545955 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:27.662653 kubelet[2139]: I1213 01:48:27.662122 2139 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:27.662881 systemd[1]: Stopping kubelet.service... Dec 13 01:48:27.682364 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:48:27.682595 systemd[1]: Stopped kubelet.service. Dec 13 01:48:27.682656 systemd[1]: kubelet.service: Consumed 1.006s CPU time. Dec 13 01:48:27.684711 systemd[1]: Starting kubelet.service... Dec 13 01:48:27.861575 systemd[1]: Started kubelet.service. Dec 13 01:48:28.299391 kubelet[2496]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:28.299749 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:48:28.299749 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:48:28.299876 kubelet[2496]: I1213 01:48:28.299820 2496 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:48:28.304491 kubelet[2496]: I1213 01:48:28.304461 2496 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:48:28.304491 kubelet[2496]: I1213 01:48:28.304484 2496 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:48:28.304726 kubelet[2496]: I1213 01:48:28.304706 2496 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:48:28.306139 kubelet[2496]: I1213 01:48:28.306114 2496 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:48:28.307821 kubelet[2496]: I1213 01:48:28.307786 2496 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:28.317854 kubelet[2496]: I1213 01:48:28.317820 2496 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:48:28.318250 kubelet[2496]: I1213 01:48:28.318230 2496 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:48:28.318608 kubelet[2496]: I1213 01:48:28.318579 2496 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:48:28.318756 kubelet[2496]: I1213 01:48:28.318615 2496 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:48:28.318756 kubelet[2496]: I1213 01:48:28.318629 2496 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:48:28.318756 kubelet[2496]: I1213 
01:48:28.318672 2496 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:28.318909 kubelet[2496]: I1213 01:48:28.318793 2496 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:48:28.318909 kubelet[2496]: I1213 01:48:28.318811 2496 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:48:28.319355 kubelet[2496]: I1213 01:48:28.319265 2496 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:48:28.319948 kubelet[2496]: I1213 01:48:28.319931 2496 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:48:28.324697 kubelet[2496]: I1213 01:48:28.324677 2496 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:48:28.324925 kubelet[2496]: I1213 01:48:28.324909 2496 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:48:28.325361 kubelet[2496]: I1213 01:48:28.325342 2496 server.go:1256] "Started kubelet" Dec 13 01:48:28.328194 kubelet[2496]: I1213 01:48:28.327673 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:48:28.334939 kubelet[2496]: I1213 01:48:28.334648 2496 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:48:28.337001 kubelet[2496]: I1213 01:48:28.335604 2496 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:48:28.337001 kubelet[2496]: I1213 01:48:28.336986 2496 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:48:28.337186 kubelet[2496]: I1213 01:48:28.337166 2496 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:48:28.341407 kubelet[2496]: I1213 01:48:28.339420 2496 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:48:28.341407 kubelet[2496]: I1213 01:48:28.341015 2496 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Dec 13 01:48:28.341407 kubelet[2496]: I1213 01:48:28.341154 2496 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:48:28.344149 kubelet[2496]: I1213 01:48:28.343004 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:48:28.344246 kubelet[2496]: I1213 01:48:28.344160 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:48:28.344246 kubelet[2496]: I1213 01:48:28.344185 2496 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:48:28.344246 kubelet[2496]: I1213 01:48:28.344203 2496 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:48:28.344366 kubelet[2496]: E1213 01:48:28.344274 2496 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:48:28.355621 kubelet[2496]: I1213 01:48:28.354707 2496 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:48:28.355621 kubelet[2496]: I1213 01:48:28.354882 2496 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:48:28.356842 kubelet[2496]: I1213 01:48:28.356825 2496 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:48:28.410014 kubelet[2496]: I1213 01:48:28.409989 2496 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:48:28.410200 kubelet[2496]: I1213 01:48:28.410189 2496 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:48:28.410305 kubelet[2496]: I1213 01:48:28.410296 2496 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:28.410521 kubelet[2496]: I1213 01:48:28.410488 2496 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:48:28.410599 kubelet[2496]: I1213 01:48:28.410593 2496 
state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:48:28.410642 kubelet[2496]: I1213 01:48:28.410637 2496 policy_none.go:49] "None policy: Start" Dec 13 01:48:28.411328 kubelet[2496]: I1213 01:48:28.411312 2496 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:48:28.411430 kubelet[2496]: I1213 01:48:28.411424 2496 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:48:28.411635 kubelet[2496]: I1213 01:48:28.411624 2496 state_mem.go:75] "Updated machine memory state" Dec 13 01:48:28.417809 kubelet[2496]: I1213 01:48:28.417783 2496 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:48:28.418035 kubelet[2496]: I1213 01:48:28.418016 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:48:28.442155 kubelet[2496]: I1213 01:48:28.442135 2496 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.446045 kubelet[2496]: I1213 01:48:28.445293 2496 topology_manager.go:215] "Topology Admit Handler" podUID="5ba162e925ee1be13fbe2abf5535e20b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.446045 kubelet[2496]: I1213 01:48:28.445377 2496 topology_manager.go:215] "Topology Admit Handler" podUID="8164eb9916f500bebaeba07b160aa1fe" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.446045 kubelet[2496]: I1213 01:48:28.445417 2496 topology_manager.go:215] "Topology Admit Handler" podUID="8d63dd564bc2f087d6526b64d817069d" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.457850 kubelet[2496]: W1213 01:48:28.457808 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:48:28.460628 kubelet[2496]: W1213 01:48:28.460604 2496 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:48:28.460757 kubelet[2496]: W1213 01:48:28.460618 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:48:28.460981 kubelet[2496]: I1213 01:48:28.460961 2496 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.461063 kubelet[2496]: I1213 01:48:28.461049 2496 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642192 kubelet[2496]: I1213 01:48:28.642066 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642192 kubelet[2496]: I1213 01:48:28.642121 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8164eb9916f500bebaeba07b160aa1fe-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8164eb9916f500bebaeba07b160aa1fe\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642192 kubelet[2496]: I1213 01:48:28.642148 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642192 kubelet[2496]: I1213 
01:48:28.642176 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642485 kubelet[2496]: I1213 01:48:28.642203 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642485 kubelet[2496]: I1213 01:48:28.642227 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642485 kubelet[2496]: I1213 01:48:28.642273 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642485 kubelet[2496]: I1213 01:48:28.642307 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d63dd564bc2f087d6526b64d817069d-k8s-certs\") pod 
\"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" (UID: \"8d63dd564bc2f087d6526b64d817069d\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:28.642485 kubelet[2496]: I1213 01:48:28.642336 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ba162e925ee1be13fbe2abf5535e20b-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-d3376cd0d9\" (UID: \"5ba162e925ee1be13fbe2abf5535e20b\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:29.320619 kubelet[2496]: I1213 01:48:29.320561 2496 apiserver.go:52] "Watching apiserver" Dec 13 01:48:29.341440 kubelet[2496]: I1213 01:48:29.341403 2496 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:48:29.409570 kubelet[2496]: W1213 01:48:29.409545 2496 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:48:29.409866 kubelet[2496]: E1213 01:48:29.409847 2496 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-d3376cd0d9\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" Dec 13 01:48:29.430454 kubelet[2496]: I1213 01:48:29.430418 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-d3376cd0d9" podStartSLOduration=1.430351059 podStartE2EDuration="1.430351059s" podCreationTimestamp="2024-12-13 01:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:29.428662326 +0000 UTC m=+1.559548771" watchObservedRunningTime="2024-12-13 01:48:29.430351059 +0000 UTC m=+1.561237504" Dec 13 01:48:29.443447 kubelet[2496]: I1213 01:48:29.443418 2496 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-d3376cd0d9" podStartSLOduration=1.443359114 podStartE2EDuration="1.443359114s" podCreationTimestamp="2024-12-13 01:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:29.442847804 +0000 UTC m=+1.573734249" watchObservedRunningTime="2024-12-13 01:48:29.443359114 +0000 UTC m=+1.574245559" Dec 13 01:48:29.525506 sudo[1804]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:29.641948 sshd[1801]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:29.645559 systemd[1]: sshd@4-10.200.8.37:22-10.200.16.10:50598.service: Deactivated successfully. Dec 13 01:48:29.646456 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:48:29.646635 systemd[1]: session-7.scope: Consumed 3.878s CPU time. Dec 13 01:48:29.647281 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:48:29.648124 systemd-logind[1428]: Removed session 7. Dec 13 01:48:34.402023 kubelet[2496]: I1213 01:48:34.401956 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-d3376cd0d9" podStartSLOduration=6.40191995 podStartE2EDuration="6.40191995s" podCreationTimestamp="2024-12-13 01:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:29.455187446 +0000 UTC m=+1.586073891" watchObservedRunningTime="2024-12-13 01:48:34.40191995 +0000 UTC m=+6.532806395" Dec 13 01:48:40.096502 kubelet[2496]: I1213 01:48:40.096471 2496 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:48:40.096985 env[1441]: time="2024-12-13T01:48:40.096899352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:48:40.097304 kubelet[2496]: I1213 01:48:40.097127 2496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:48:41.094388 kubelet[2496]: I1213 01:48:41.094335 2496 topology_manager.go:215] "Topology Admit Handler" podUID="da7cd26f-809c-4c9c-a40d-76fcd52ebf05" podNamespace="kube-system" podName="kube-proxy-x6hvp" Dec 13 01:48:41.101838 systemd[1]: Created slice kubepods-besteffort-podda7cd26f_809c_4c9c_a40d_76fcd52ebf05.slice. Dec 13 01:48:41.104117 kubelet[2496]: I1213 01:48:41.104084 2496 topology_manager.go:215] "Topology Admit Handler" podUID="47a50435-d50a-4594-9b88-1ae531693401" podNamespace="kube-flannel" podName="kube-flannel-ds-qs78l" Dec 13 01:48:41.115509 systemd[1]: Created slice kubepods-burstable-pod47a50435_d50a_4594_9b88_1ae531693401.slice. Dec 13 01:48:41.223878 kubelet[2496]: I1213 01:48:41.223839 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da7cd26f-809c-4c9c-a40d-76fcd52ebf05-lib-modules\") pod \"kube-proxy-x6hvp\" (UID: \"da7cd26f-809c-4c9c-a40d-76fcd52ebf05\") " pod="kube-system/kube-proxy-x6hvp" Dec 13 01:48:41.224060 kubelet[2496]: I1213 01:48:41.223905 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/47a50435-d50a-4594-9b88-1ae531693401-run\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224060 kubelet[2496]: I1213 01:48:41.223936 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/47a50435-d50a-4594-9b88-1ae531693401-flannel-cfg\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224060 kubelet[2496]: 
I1213 01:48:41.223961 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/47a50435-d50a-4594-9b88-1ae531693401-cni-plugin\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224060 kubelet[2496]: I1213 01:48:41.223985 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a50435-d50a-4594-9b88-1ae531693401-xtables-lock\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224060 kubelet[2496]: I1213 01:48:41.224035 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da7cd26f-809c-4c9c-a40d-76fcd52ebf05-xtables-lock\") pod \"kube-proxy-x6hvp\" (UID: \"da7cd26f-809c-4c9c-a40d-76fcd52ebf05\") " pod="kube-system/kube-proxy-x6hvp" Dec 13 01:48:41.224283 kubelet[2496]: I1213 01:48:41.224070 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbgr\" (UniqueName: \"kubernetes.io/projected/da7cd26f-809c-4c9c-a40d-76fcd52ebf05-kube-api-access-dxbgr\") pod \"kube-proxy-x6hvp\" (UID: \"da7cd26f-809c-4c9c-a40d-76fcd52ebf05\") " pod="kube-system/kube-proxy-x6hvp" Dec 13 01:48:41.224283 kubelet[2496]: I1213 01:48:41.224097 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/47a50435-d50a-4594-9b88-1ae531693401-cni\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224283 kubelet[2496]: I1213 01:48:41.224131 2496 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnlg\" (UniqueName: \"kubernetes.io/projected/47a50435-d50a-4594-9b88-1ae531693401-kube-api-access-fgnlg\") pod \"kube-flannel-ds-qs78l\" (UID: \"47a50435-d50a-4594-9b88-1ae531693401\") " pod="kube-flannel/kube-flannel-ds-qs78l" Dec 13 01:48:41.224283 kubelet[2496]: I1213 01:48:41.224164 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da7cd26f-809c-4c9c-a40d-76fcd52ebf05-kube-proxy\") pod \"kube-proxy-x6hvp\" (UID: \"da7cd26f-809c-4c9c-a40d-76fcd52ebf05\") " pod="kube-system/kube-proxy-x6hvp" Dec 13 01:48:41.416322 env[1441]: time="2024-12-13T01:48:41.415506966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6hvp,Uid:da7cd26f-809c-4c9c-a40d-76fcd52ebf05,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:41.420762 env[1441]: time="2024-12-13T01:48:41.420727346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qs78l,Uid:47a50435-d50a-4594-9b88-1ae531693401,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:48:41.483743 env[1441]: time="2024-12-13T01:48:41.480630356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:41.483743 env[1441]: time="2024-12-13T01:48:41.480670756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:41.483743 env[1441]: time="2024-12-13T01:48:41.480685457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:41.483743 env[1441]: time="2024-12-13T01:48:41.480810058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68dfacbc55b1554a77d1794be1aed40ee745565bcf6701283ba70c4610f0b45e pid=2558 runtime=io.containerd.runc.v2 Dec 13 01:48:41.499250 env[1441]: time="2024-12-13T01:48:41.499165237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:41.499411 env[1441]: time="2024-12-13T01:48:41.499260139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:41.499411 env[1441]: time="2024-12-13T01:48:41.499299439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:41.499510 env[1441]: time="2024-12-13T01:48:41.499478142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a pid=2577 runtime=io.containerd.runc.v2 Dec 13 01:48:41.506402 systemd[1]: Started cri-containerd-68dfacbc55b1554a77d1794be1aed40ee745565bcf6701283ba70c4610f0b45e.scope. Dec 13 01:48:41.519982 systemd[1]: Started cri-containerd-f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a.scope. 
Dec 13 01:48:41.559413 env[1441]: time="2024-12-13T01:48:41.559364452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x6hvp,Uid:da7cd26f-809c-4c9c-a40d-76fcd52ebf05,Namespace:kube-system,Attempt:0,} returns sandbox id \"68dfacbc55b1554a77d1794be1aed40ee745565bcf6701283ba70c4610f0b45e\"" Dec 13 01:48:41.562517 env[1441]: time="2024-12-13T01:48:41.562481599Z" level=info msg="CreateContainer within sandbox \"68dfacbc55b1554a77d1794be1aed40ee745565bcf6701283ba70c4610f0b45e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:48:41.582340 env[1441]: time="2024-12-13T01:48:41.581647891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-qs78l,Uid:47a50435-d50a-4594-9b88-1ae531693401,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\"" Dec 13 01:48:41.584353 env[1441]: time="2024-12-13T01:48:41.584306131Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:48:41.608997 env[1441]: time="2024-12-13T01:48:41.608956306Z" level=info msg="CreateContainer within sandbox \"68dfacbc55b1554a77d1794be1aed40ee745565bcf6701283ba70c4610f0b45e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64d6ec8effcf74f6077d991c24e4c10210d3308c20f971e06b88815b32a3f485\"" Dec 13 01:48:41.610087 env[1441]: time="2024-12-13T01:48:41.609601915Z" level=info msg="StartContainer for \"64d6ec8effcf74f6077d991c24e4c10210d3308c20f971e06b88815b32a3f485\"" Dec 13 01:48:41.632630 systemd[1]: Started cri-containerd-64d6ec8effcf74f6077d991c24e4c10210d3308c20f971e06b88815b32a3f485.scope. Dec 13 01:48:41.668174 env[1441]: time="2024-12-13T01:48:41.666488780Z" level=info msg="StartContainer for \"64d6ec8effcf74f6077d991c24e4c10210d3308c20f971e06b88815b32a3f485\" returns successfully" Dec 13 01:48:43.485118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934048608.mount: Deactivated successfully. 
Dec 13 01:48:43.629575 env[1441]: time="2024-12-13T01:48:43.629517241Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:43.637382 env[1441]: time="2024-12-13T01:48:43.637335155Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:43.640803 env[1441]: time="2024-12-13T01:48:43.640770805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:43.644403 env[1441]: time="2024-12-13T01:48:43.644369658Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:43.644919 env[1441]: time="2024-12-13T01:48:43.644872565Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 01:48:43.647765 env[1441]: time="2024-12-13T01:48:43.647723707Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:48:43.683288 env[1441]: time="2024-12-13T01:48:43.683243525Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c\"" Dec 13 01:48:43.685245 env[1441]: 
time="2024-12-13T01:48:43.683955136Z" level=info msg="StartContainer for \"a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c\"" Dec 13 01:48:43.704175 systemd[1]: Started cri-containerd-a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c.scope. Dec 13 01:48:43.733883 systemd[1]: cri-containerd-a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c.scope: Deactivated successfully. Dec 13 01:48:43.740614 env[1441]: time="2024-12-13T01:48:43.738991840Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47a50435_d50a_4594_9b88_1ae531693401.slice/cri-containerd-a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c.scope/memory.events\": no such file or directory" Dec 13 01:48:43.742060 env[1441]: time="2024-12-13T01:48:43.742023984Z" level=info msg="StartContainer for \"a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c\" returns successfully" Dec 13 01:48:44.397465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c-rootfs.mount: Deactivated successfully. 
Dec 13 01:48:44.440647 kubelet[2496]: I1213 01:48:44.440579 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x6hvp" podStartSLOduration=3.440516964 podStartE2EDuration="3.440516964s" podCreationTimestamp="2024-12-13 01:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:48:42.433518506 +0000 UTC m=+14.564405051" watchObservedRunningTime="2024-12-13 01:48:44.440516964 +0000 UTC m=+16.571403509" Dec 13 01:48:44.454709 env[1441]: time="2024-12-13T01:48:44.454663667Z" level=info msg="shim disconnected" id=a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c Dec 13 01:48:44.454872 env[1441]: time="2024-12-13T01:48:44.454721668Z" level=warning msg="cleaning up after shim disconnected" id=a251b642ae35debd2c85f736bd198d71325de225e7a93f8bf20dd247ebf39e4c namespace=k8s.io Dec 13 01:48:44.454872 env[1441]: time="2024-12-13T01:48:44.454737568Z" level=info msg="cleaning up dead shim" Dec 13 01:48:44.462303 env[1441]: time="2024-12-13T01:48:44.462266576Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2831 runtime=io.containerd.runc.v2\n" Dec 13 01:48:45.430502 env[1441]: time="2024-12-13T01:48:45.430455032Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:48:47.428786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1208678843.mount: Deactivated successfully. 
Dec 13 01:48:48.581499 env[1441]: time="2024-12-13T01:48:48.581439401Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:48.588862 env[1441]: time="2024-12-13T01:48:48.588821600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:48.595001 env[1441]: time="2024-12-13T01:48:48.594966181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:48.600583 env[1441]: time="2024-12-13T01:48:48.600550156Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:48:48.601192 env[1441]: time="2024-12-13T01:48:48.601159564Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 01:48:48.604315 env[1441]: time="2024-12-13T01:48:48.604264105Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:48:48.636365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713122285.mount: Deactivated successfully. 
Dec 13 01:48:48.652351 env[1441]: time="2024-12-13T01:48:48.652307844Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125\"" Dec 13 01:48:48.654295 env[1441]: time="2024-12-13T01:48:48.653047153Z" level=info msg="StartContainer for \"574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125\"" Dec 13 01:48:48.678801 systemd[1]: Started cri-containerd-574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125.scope. Dec 13 01:48:48.707711 systemd[1]: cri-containerd-574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125.scope: Deactivated successfully. Dec 13 01:48:48.711662 env[1441]: time="2024-12-13T01:48:48.711617932Z" level=info msg="StartContainer for \"574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125\" returns successfully" Dec 13 01:48:48.763124 kubelet[2496]: I1213 01:48:48.761853 2496 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:48:48.798129 kubelet[2496]: I1213 01:48:48.791325 2496 topology_manager.go:215] "Topology Admit Handler" podUID="8a220a84-231c-4ff7-a448-6665834a2d5e" podNamespace="kube-system" podName="coredns-76f75df574-dpq2g" Dec 13 01:48:48.798129 kubelet[2496]: I1213 01:48:48.796075 2496 topology_manager.go:215] "Topology Admit Handler" podUID="291c73b8-b775-4abc-bba7-411995518ba3" podNamespace="kube-system" podName="coredns-76f75df574-btpfx" Dec 13 01:48:48.807084 systemd[1]: Created slice kubepods-burstable-pod8a220a84_231c_4ff7_a448_6665834a2d5e.slice. Dec 13 01:48:48.813722 systemd[1]: Created slice kubepods-burstable-pod291c73b8_b775_4abc_bba7_411995518ba3.slice. 
Dec 13 01:48:48.977242 kubelet[2496]: I1213 01:48:48.977007 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a220a84-231c-4ff7-a448-6665834a2d5e-config-volume\") pod \"coredns-76f75df574-dpq2g\" (UID: \"8a220a84-231c-4ff7-a448-6665834a2d5e\") " pod="kube-system/coredns-76f75df574-dpq2g" Dec 13 01:48:48.977242 kubelet[2496]: I1213 01:48:48.977065 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/291c73b8-b775-4abc-bba7-411995518ba3-config-volume\") pod \"coredns-76f75df574-btpfx\" (UID: \"291c73b8-b775-4abc-bba7-411995518ba3\") " pod="kube-system/coredns-76f75df574-btpfx" Dec 13 01:48:48.977242 kubelet[2496]: I1213 01:48:48.977100 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxz8g\" (UniqueName: \"kubernetes.io/projected/8a220a84-231c-4ff7-a448-6665834a2d5e-kube-api-access-cxz8g\") pod \"coredns-76f75df574-dpq2g\" (UID: \"8a220a84-231c-4ff7-a448-6665834a2d5e\") " pod="kube-system/coredns-76f75df574-dpq2g" Dec 13 01:48:48.977242 kubelet[2496]: I1213 01:48:48.977129 2496 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvdw\" (UniqueName: \"kubernetes.io/projected/291c73b8-b775-4abc-bba7-411995518ba3-kube-api-access-ntvdw\") pod \"coredns-76f75df574-btpfx\" (UID: \"291c73b8-b775-4abc-bba7-411995518ba3\") " pod="kube-system/coredns-76f75df574-btpfx" Dec 13 01:48:49.111919 env[1441]: time="2024-12-13T01:48:49.111839526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq2g,Uid:8a220a84-231c-4ff7-a448-6665834a2d5e,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:49.117556 env[1441]: time="2024-12-13T01:48:49.117513400Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-btpfx,Uid:291c73b8-b775-4abc-bba7-411995518ba3,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:49.275072 env[1441]: time="2024-12-13T01:48:49.274859554Z" level=info msg="shim disconnected" id=574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125 Dec 13 01:48:49.275072 env[1441]: time="2024-12-13T01:48:49.274920655Z" level=warning msg="cleaning up after shim disconnected" id=574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125 namespace=k8s.io Dec 13 01:48:49.275072 env[1441]: time="2024-12-13T01:48:49.274933155Z" level=info msg="cleaning up dead shim" Dec 13 01:48:49.283434 env[1441]: time="2024-12-13T01:48:49.283391965Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:48:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2894 runtime=io.containerd.runc.v2\n" Dec 13 01:48:49.333524 env[1441]: time="2024-12-13T01:48:49.333453719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq2g,Uid:8a220a84-231c-4ff7-a448-6665834a2d5e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2176a1451a505ec68383af3cf6881e47b570722b641b7fc7f32606d2d91439ce\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:48:49.334843 kubelet[2496]: E1213 01:48:49.334805 2496 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176a1451a505ec68383af3cf6881e47b570722b641b7fc7f32606d2d91439ce\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:48:49.334988 kubelet[2496]: E1213 01:48:49.334871 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2176a1451a505ec68383af3cf6881e47b570722b641b7fc7f32606d2d91439ce\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dpq2g" Dec 13 01:48:49.334988 kubelet[2496]: E1213 01:48:49.334916 2496 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2176a1451a505ec68383af3cf6881e47b570722b641b7fc7f32606d2d91439ce\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-dpq2g" Dec 13 01:48:49.335082 kubelet[2496]: E1213 01:48:49.334990 2496 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dpq2g_kube-system(8a220a84-231c-4ff7-a448-6665834a2d5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dpq2g_kube-system(8a220a84-231c-4ff7-a448-6665834a2d5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2176a1451a505ec68383af3cf6881e47b570722b641b7fc7f32606d2d91439ce\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-dpq2g" podUID="8a220a84-231c-4ff7-a448-6665834a2d5e" Dec 13 01:48:49.342429 env[1441]: time="2024-12-13T01:48:49.342382035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btpfx,Uid:291c73b8-b775-4abc-bba7-411995518ba3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"569b69ef504848ec9eccd7f0b5697dab778633dcd737bb5b613dfed654af619a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:48:49.342688 kubelet[2496]: E1213 01:48:49.342667 2496 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b69ef504848ec9eccd7f0b5697dab778633dcd737bb5b613dfed654af619a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:48:49.342785 kubelet[2496]: E1213 01:48:49.342717 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b69ef504848ec9eccd7f0b5697dab778633dcd737bb5b613dfed654af619a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-btpfx" Dec 13 01:48:49.342785 kubelet[2496]: E1213 01:48:49.342744 2496 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569b69ef504848ec9eccd7f0b5697dab778633dcd737bb5b613dfed654af619a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-btpfx" Dec 13 01:48:49.342877 kubelet[2496]: E1213 01:48:49.342806 2496 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-btpfx_kube-system(291c73b8-b775-4abc-bba7-411995518ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-btpfx_kube-system(291c73b8-b775-4abc-bba7-411995518ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"569b69ef504848ec9eccd7f0b5697dab778633dcd737bb5b613dfed654af619a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-btpfx" podUID="291c73b8-b775-4abc-bba7-411995518ba3" Dec 13 01:48:49.443313 env[1441]: 
time="2024-12-13T01:48:49.443266452Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:48:49.497157 env[1441]: time="2024-12-13T01:48:49.497104755Z" level=info msg="CreateContainer within sandbox \"f474ab5a40870f4c71b7bac9956641a0b02ba58087fe62bad968a61b2e8bdf4a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8c92671ba7ea2f2d568961175f8137c24307ccbc7418151d0768e311abacb5d0\"" Dec 13 01:48:49.499306 env[1441]: time="2024-12-13T01:48:49.498371672Z" level=info msg="StartContainer for \"8c92671ba7ea2f2d568961175f8137c24307ccbc7418151d0768e311abacb5d0\"" Dec 13 01:48:49.514668 systemd[1]: Started cri-containerd-8c92671ba7ea2f2d568961175f8137c24307ccbc7418151d0768e311abacb5d0.scope. Dec 13 01:48:49.550553 env[1441]: time="2024-12-13T01:48:49.550504652Z" level=info msg="StartContainer for \"8c92671ba7ea2f2d568961175f8137c24307ccbc7418151d0768e311abacb5d0\" returns successfully" Dec 13 01:48:49.637755 systemd[1]: run-containerd-runc-k8s.io-574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125-runc.PMGiS1.mount: Deactivated successfully. Dec 13 01:48:49.637871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-574244902c8a2d50f76e4964b9aed8f33c73bd1acd08b82daf6acc68d6d5a125-rootfs.mount: Deactivated successfully. 
Dec 13 01:48:50.708412 systemd-networkd[1592]: flannel.1: Link UP Dec 13 01:48:50.708423 systemd-networkd[1592]: flannel.1: Gained carrier Dec 13 01:48:51.747046 systemd-networkd[1592]: flannel.1: Gained IPv6LL Dec 13 01:49:01.345573 env[1441]: time="2024-12-13T01:49:01.345512938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btpfx,Uid:291c73b8-b775-4abc-bba7-411995518ba3,Namespace:kube-system,Attempt:0,}" Dec 13 01:49:01.385590 systemd-networkd[1592]: cni0: Link UP Dec 13 01:49:01.385599 systemd-networkd[1592]: cni0: Gained carrier Dec 13 01:49:01.389558 systemd-networkd[1592]: cni0: Lost carrier Dec 13 01:49:01.421038 systemd-networkd[1592]: vethf7e0818e: Link UP Dec 13 01:49:01.428628 kernel: cni0: port 1(vethf7e0818e) entered blocking state Dec 13 01:49:01.428733 kernel: cni0: port 1(vethf7e0818e) entered disabled state Dec 13 01:49:01.428761 kernel: device vethf7e0818e entered promiscuous mode Dec 13 01:49:01.440310 kernel: cni0: port 1(vethf7e0818e) entered blocking state Dec 13 01:49:01.440415 kernel: cni0: port 1(vethf7e0818e) entered forwarding state Dec 13 01:49:01.440447 kernel: cni0: port 1(vethf7e0818e) entered disabled state Dec 13 01:49:01.453615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf7e0818e: link becomes ready Dec 13 01:49:01.453720 kernel: cni0: port 1(vethf7e0818e) entered blocking state Dec 13 01:49:01.453759 kernel: cni0: port 1(vethf7e0818e) entered forwarding state Dec 13 01:49:01.453433 systemd-networkd[1592]: vethf7e0818e: Gained carrier Dec 13 01:49:01.454326 systemd-networkd[1592]: cni0: Gained carrier Dec 13 01:49:01.456367 env[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, 
"type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e928), "name":"cbr0", "type":"bridge"} Dec 13 01:49:01.456367 env[1441]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:49:01.470949 env[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:49:01.470858081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:49:01.470949 env[1441]: time="2024-12-13T01:49:01.470907181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:49:01.471194 env[1441]: time="2024-12-13T01:49:01.470922882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:01.471194 env[1441]: time="2024-12-13T01:49:01.471047583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a pid=3156 runtime=io.containerd.runc.v2 Dec 13 01:49:01.496500 systemd[1]: run-containerd-runc-k8s.io-86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a-runc.6FQTcl.mount: Deactivated successfully. Dec 13 01:49:01.501200 systemd[1]: Started cri-containerd-86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a.scope. 
Dec 13 01:49:01.538996 env[1441]: time="2024-12-13T01:49:01.538877009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btpfx,Uid:291c73b8-b775-4abc-bba7-411995518ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a\""
Dec 13 01:49:01.542709 env[1441]: time="2024-12-13T01:49:01.542668250Z" level=info msg="CreateContainer within sandbox \"86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:49:01.579955 env[1441]: time="2024-12-13T01:49:01.579908649Z" level=info msg="CreateContainer within sandbox \"86f0001b3bb1cc6c6fd5e35dbe87fd28a15349096e89bdff9edc57fc64a54b4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"737a2643cc5ee6d4b2de5614f5ab5e2de96c450810eb541203dc3e55bcfb8768\""
Dec 13 01:49:01.580700 env[1441]: time="2024-12-13T01:49:01.580672357Z" level=info msg="StartContainer for \"737a2643cc5ee6d4b2de5614f5ab5e2de96c450810eb541203dc3e55bcfb8768\""
Dec 13 01:49:01.597825 systemd[1]: Started cri-containerd-737a2643cc5ee6d4b2de5614f5ab5e2de96c450810eb541203dc3e55bcfb8768.scope.
Dec 13 01:49:01.636044 env[1441]: time="2024-12-13T01:49:01.636006949Z" level=info msg="StartContainer for \"737a2643cc5ee6d4b2de5614f5ab5e2de96c450810eb541203dc3e55bcfb8768\" returns successfully"
Dec 13 01:49:02.476490 kubelet[2496]: I1213 01:49:02.476453 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-qs78l" podStartSLOduration=14.457571312 podStartE2EDuration="21.476401374s" podCreationTimestamp="2024-12-13 01:48:41 +0000 UTC" firstStartedPulling="2024-12-13 01:48:41.582638606 +0000 UTC m=+13.713525051" lastFinishedPulling="2024-12-13 01:48:48.601468668 +0000 UTC m=+20.732355113" observedRunningTime="2024-12-13 01:48:50.454388847 +0000 UTC m=+22.585275292" watchObservedRunningTime="2024-12-13 01:49:02.476401374 +0000 UTC m=+34.607287819"
Dec 13 01:49:02.490045 kubelet[2496]: I1213 01:49:02.489864 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-btpfx" podStartSLOduration=21.489804316 podStartE2EDuration="21.489804316s" podCreationTimestamp="2024-12-13 01:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:49:02.477257983 +0000 UTC m=+34.608144428" watchObservedRunningTime="2024-12-13 01:49:02.489804316 +0000 UTC m=+34.620690861"
Dec 13 01:49:02.563082 systemd-networkd[1592]: cni0: Gained IPv6LL
Dec 13 01:49:02.883085 systemd-networkd[1592]: vethf7e0818e: Gained IPv6LL
Dec 13 01:49:04.346873 env[1441]: time="2024-12-13T01:49:04.346337847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq2g,Uid:8a220a84-231c-4ff7-a448-6665834a2d5e,Namespace:kube-system,Attempt:0,}"
Dec 13 01:49:04.415308 systemd-networkd[1592]: veth594a3064: Link UP
Dec 13 01:49:04.421407 kernel: cni0: port 2(veth594a3064) entered blocking state
Dec 13 01:49:04.421506 kernel: cni0: port 2(veth594a3064) entered disabled state
Dec 13 01:49:04.421537 kernel: device veth594a3064 entered promiscuous mode
Dec 13 01:49:04.434399 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:49:04.434479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth594a3064: link becomes ready
Dec 13 01:49:04.438145 kernel: cni0: port 2(veth594a3064) entered blocking state
Dec 13 01:49:04.438209 kernel: cni0: port 2(veth594a3064) entered forwarding state
Dec 13 01:49:04.440521 systemd-networkd[1592]: veth594a3064: Gained carrier
Dec 13 01:49:04.442200 env[1441]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:49:04.442200 env[1441]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:49:04.460553 env[1441]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:49:04.460484717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:49:04.460755 env[1441]: time="2024-12-13T01:49:04.460519118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:49:04.460755 env[1441]: time="2024-12-13T01:49:04.460532618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:49:04.460977 env[1441]: time="2024-12-13T01:49:04.460940722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f319dbecb7dfa589e3713b102a29221a01823966277e87b73c6429c0a18379fd pid=3269 runtime=io.containerd.runc.v2
Dec 13 01:49:04.479086 systemd[1]: Started cri-containerd-f319dbecb7dfa589e3713b102a29221a01823966277e87b73c6429c0a18379fd.scope.
Dec 13 01:49:04.523739 env[1441]: time="2024-12-13T01:49:04.523699665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dpq2g,Uid:8a220a84-231c-4ff7-a448-6665834a2d5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f319dbecb7dfa589e3713b102a29221a01823966277e87b73c6429c0a18379fd\""
Dec 13 01:49:04.527434 env[1441]: time="2024-12-13T01:49:04.527395903Z" level=info msg="CreateContainer within sandbox \"f319dbecb7dfa589e3713b102a29221a01823966277e87b73c6429c0a18379fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:49:04.582230 env[1441]: time="2024-12-13T01:49:04.582178565Z" level=info msg="CreateContainer within sandbox \"f319dbecb7dfa589e3713b102a29221a01823966277e87b73c6429c0a18379fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79603be54d70cd5e9beb7914c8aba5412800e2e8a72711d9b36f850c35459d21\""
Dec 13 01:49:04.584091 env[1441]: time="2024-12-13T01:49:04.582952573Z" level=info msg="StartContainer for \"79603be54d70cd5e9beb7914c8aba5412800e2e8a72711d9b36f850c35459d21\""
Dec 13 01:49:04.598843 systemd[1]: Started cri-containerd-79603be54d70cd5e9beb7914c8aba5412800e2e8a72711d9b36f850c35459d21.scope.
Dec 13 01:49:04.629589 env[1441]: time="2024-12-13T01:49:04.629541250Z" level=info msg="StartContainer for \"79603be54d70cd5e9beb7914c8aba5412800e2e8a72711d9b36f850c35459d21\" returns successfully"
Dec 13 01:49:05.489380 kubelet[2496]: I1213 01:49:05.489344 2496 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dpq2g" podStartSLOduration=24.489297792 podStartE2EDuration="24.489297792s" podCreationTimestamp="2024-12-13 01:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:49:05.488986589 +0000 UTC m=+37.619873034" watchObservedRunningTime="2024-12-13 01:49:05.489297792 +0000 UTC m=+37.620184237"
Dec 13 01:49:06.019027 systemd-networkd[1592]: veth594a3064: Gained IPv6LL
Dec 13 01:50:06.656854 update_engine[1431]: I1213 01:50:06.656811 1431 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 01:50:06.656854 update_engine[1431]: I1213 01:50:06.656848 1431 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 01:50:06.657442 update_engine[1431]: I1213 01:50:06.657033 1431 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 01:50:06.657564 update_engine[1431]: I1213 01:50:06.657541 1431 omaha_request_params.cc:62] Current group set to lts
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657714 1431 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657732 1431 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657750 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657786 1431 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657852 1431 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657858 1431 omaha_request_action.cc:271] Request:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]:
Dec 13 01:50:06.657871 update_engine[1431]: I1213 01:50:06.657864 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:50:06.659060 update_engine[1431]: I1213 01:50:06.659035 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:50:06.659249 update_engine[1431]: I1213 01:50:06.659229 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:50:06.659494 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Dec 13 01:50:06.692442 update_engine[1431]: E1213 01:50:06.692409 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:50:06.692580 update_engine[1431]: I1213 01:50:06.692528 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 01:50:16.605714 update_engine[1431]: I1213 01:50:16.605156 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:50:16.605714 update_engine[1431]: I1213 01:50:16.605444 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:50:16.605714 update_engine[1431]: I1213 01:50:16.605665 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:50:16.621843 update_engine[1431]: E1213 01:50:16.621806 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:50:16.622011 update_engine[1431]: I1213 01:50:16.621946 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 01:50:26.603010 update_engine[1431]: I1213 01:50:26.602957 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:50:26.603479 update_engine[1431]: I1213 01:50:26.603256 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:50:26.603536 update_engine[1431]: I1213 01:50:26.603490 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:50:26.624054 update_engine[1431]: E1213 01:50:26.624001 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:50:26.624218 update_engine[1431]: I1213 01:50:26.624135 1431 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 01:50:36.611273 update_engine[1431]: I1213 01:50:36.611202 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:50:36.611791 update_engine[1431]: I1213 01:50:36.611527 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:50:36.611863 update_engine[1431]: I1213 01:50:36.611788 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:50:36.616705 update_engine[1431]: E1213 01:50:36.616657 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:50:36.616878 update_engine[1431]: I1213 01:50:36.616787 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:50:36.616878 update_engine[1431]: I1213 01:50:36.616801 1431 omaha_request_action.cc:621] Omaha request response:
Dec 13 01:50:36.617023 update_engine[1431]: E1213 01:50:36.616921 1431 omaha_request_action.cc:640] Omaha request network transfer failed.
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616942 1431 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616948 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616955 1431 update_attempter.cc:306] Processing Done.
Dec 13 01:50:36.617023 update_engine[1431]: E1213 01:50:36.616972 1431 update_attempter.cc:619] Update failed.
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616978 1431 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616988 1431 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 01:50:36.617023 update_engine[1431]: I1213 01:50:36.616995 1431 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 01:50:36.617417 update_engine[1431]: I1213 01:50:36.617094 1431 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 01:50:36.617417 update_engine[1431]: I1213 01:50:36.617122 1431 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 01:50:36.617417 update_engine[1431]: I1213 01:50:36.617128 1431 omaha_request_action.cc:271] Request:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]:
Dec 13 01:50:36.617417 update_engine[1431]: I1213 01:50:36.617136 1431 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 01:50:36.617417 update_engine[1431]: I1213 01:50:36.617350 1431 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 01:50:36.617906 update_engine[1431]: I1213 01:50:36.617547 1431 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 01:50:36.618062 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 01:50:36.637826 update_engine[1431]: E1213 01:50:36.637775 1431 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637916 1431 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637929 1431 omaha_request_action.cc:621] Omaha request response:
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637936 1431 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637940 1431 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637944 1431 update_attempter.cc:306] Processing Done.
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637950 1431 update_attempter.cc:310] Error event sent.
Dec 13 01:50:36.638011 update_engine[1431]: I1213 01:50:36.637960 1431 update_check_scheduler.cc:74] Next update check in 44m45s
Dec 13 01:50:36.638467 locksmithd[1516]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Dec 13 01:51:25.244681 systemd[1]: Started sshd@5-10.200.8.37:22-10.200.16.10:33800.service.
Dec 13 01:51:25.869970 sshd[3951]: Accepted publickey for core from 10.200.16.10 port 33800 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:25.871924 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:25.877820 systemd[1]: Started session-8.scope.
Dec 13 01:51:25.878471 systemd-logind[1428]: New session 8 of user core.
Dec 13 01:51:26.386964 sshd[3951]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:26.390549 systemd[1]: sshd@5-10.200.8.37:22-10.200.16.10:33800.service: Deactivated successfully.
Dec 13 01:51:26.391442 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:51:26.392157 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:51:26.392925 systemd-logind[1428]: Removed session 8.
Dec 13 01:51:31.492054 systemd[1]: Started sshd@6-10.200.8.37:22-10.200.16.10:49866.service.
Dec 13 01:51:32.117522 sshd[4008]: Accepted publickey for core from 10.200.16.10 port 49866 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:32.119274 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:32.123956 systemd-logind[1428]: New session 9 of user core.
Dec 13 01:51:32.125135 systemd[1]: Started session-9.scope.
Dec 13 01:51:32.614716 sshd[4008]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:32.618100 systemd[1]: sshd@6-10.200.8.37:22-10.200.16.10:49866.service: Deactivated successfully.
Dec 13 01:51:32.619271 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:51:32.620208 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:51:32.621231 systemd-logind[1428]: Removed session 9.
Dec 13 01:51:37.720260 systemd[1]: Started sshd@7-10.200.8.37:22-10.200.16.10:49882.service.
Dec 13 01:51:38.343072 sshd[4041]: Accepted publickey for core from 10.200.16.10 port 49882 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:38.344913 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:38.352181 systemd[1]: Started session-10.scope.
Dec 13 01:51:38.352616 systemd-logind[1428]: New session 10 of user core.
Dec 13 01:51:38.852036 sshd[4041]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:38.855705 systemd[1]: sshd@7-10.200.8.37:22-10.200.16.10:49882.service: Deactivated successfully.
Dec 13 01:51:38.856734 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:51:38.857623 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:51:38.858579 systemd-logind[1428]: Removed session 10.
Dec 13 01:51:38.956555 systemd[1]: Started sshd@8-10.200.8.37:22-10.200.16.10:44100.service.
Dec 13 01:51:39.580480 sshd[4054]: Accepted publickey for core from 10.200.16.10 port 44100 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:39.582000 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:39.587548 systemd-logind[1428]: New session 11 of user core.
Dec 13 01:51:39.588086 systemd[1]: Started session-11.scope.
Dec 13 01:51:40.119412 sshd[4054]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:40.122875 systemd[1]: sshd@8-10.200.8.37:22-10.200.16.10:44100.service: Deactivated successfully.
Dec 13 01:51:40.123991 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:51:40.124955 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:51:40.125783 systemd-logind[1428]: Removed session 11.
Dec 13 01:51:40.224813 systemd[1]: Started sshd@9-10.200.8.37:22-10.200.16.10:44104.service.
Dec 13 01:51:40.848519 sshd[4063]: Accepted publickey for core from 10.200.16.10 port 44104 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:40.850321 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:40.855928 systemd[1]: Started session-12.scope.
Dec 13 01:51:40.856551 systemd-logind[1428]: New session 12 of user core.
Dec 13 01:51:41.351110 sshd[4063]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:41.354393 systemd[1]: sshd@9-10.200.8.37:22-10.200.16.10:44104.service: Deactivated successfully.
Dec 13 01:51:41.355498 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:51:41.356361 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:51:41.357190 systemd-logind[1428]: Removed session 12.
Dec 13 01:51:46.456382 systemd[1]: Started sshd@10-10.200.8.37:22-10.200.16.10:44106.service.
Dec 13 01:51:47.080912 sshd[4120]: Accepted publickey for core from 10.200.16.10 port 44106 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:47.082378 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:47.088546 systemd[1]: Started session-13.scope.
Dec 13 01:51:47.089950 systemd-logind[1428]: New session 13 of user core.
Dec 13 01:51:47.581359 sshd[4120]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:47.584559 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:51:47.584794 systemd[1]: sshd@10-10.200.8.37:22-10.200.16.10:44106.service: Deactivated successfully.
Dec 13 01:51:47.585731 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:51:47.586601 systemd-logind[1428]: Removed session 13.
Dec 13 01:51:47.685785 systemd[1]: Started sshd@11-10.200.8.37:22-10.200.16.10:44118.service.
Dec 13 01:51:48.308731 sshd[4135]: Accepted publickey for core from 10.200.16.10 port 44118 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:48.310504 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:48.315803 systemd[1]: Started session-14.scope.
Dec 13 01:51:48.316508 systemd-logind[1428]: New session 14 of user core.
Dec 13 01:51:48.885789 sshd[4135]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:48.888745 systemd[1]: sshd@11-10.200.8.37:22-10.200.16.10:44118.service: Deactivated successfully.
Dec 13 01:51:48.889689 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:51:48.890343 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:51:48.891187 systemd-logind[1428]: Removed session 14.
Dec 13 01:51:48.990545 systemd[1]: Started sshd@12-10.200.8.37:22-10.200.16.10:41090.service.
Dec 13 01:51:49.615626 sshd[4145]: Accepted publickey for core from 10.200.16.10 port 41090 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:49.617294 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:49.622850 systemd[1]: Started session-15.scope.
Dec 13 01:51:49.623683 systemd-logind[1428]: New session 15 of user core.
Dec 13 01:51:51.387241 sshd[4145]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:51.390714 systemd[1]: sshd@12-10.200.8.37:22-10.200.16.10:41090.service: Deactivated successfully.
Dec 13 01:51:51.391735 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:51:51.392603 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:51:51.393633 systemd-logind[1428]: Removed session 15.
Dec 13 01:51:51.490908 systemd[1]: Started sshd@13-10.200.8.37:22-10.200.16.10:41106.service.
Dec 13 01:51:52.113583 sshd[4183]: Accepted publickey for core from 10.200.16.10 port 41106 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:52.115312 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:52.120960 systemd-logind[1428]: New session 16 of user core.
Dec 13 01:51:52.121630 systemd[1]: Started session-16.scope.
Dec 13 01:51:52.706013 sshd[4183]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:52.709422 systemd[1]: sshd@13-10.200.8.37:22-10.200.16.10:41106.service: Deactivated successfully.
Dec 13 01:51:52.710358 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:51:52.711089 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:51:52.711902 systemd-logind[1428]: Removed session 16.
Dec 13 01:51:52.810290 systemd[1]: Started sshd@14-10.200.8.37:22-10.200.16.10:41112.service.
Dec 13 01:51:53.434425 sshd[4193]: Accepted publickey for core from 10.200.16.10 port 41112 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:53.435980 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:53.440916 systemd[1]: Started session-17.scope.
Dec 13 01:51:53.441529 systemd-logind[1428]: New session 17 of user core.
Dec 13 01:51:53.938559 sshd[4193]: pam_unix(sshd:session): session closed for user core
Dec 13 01:51:53.941476 systemd[1]: sshd@14-10.200.8.37:22-10.200.16.10:41112.service: Deactivated successfully.
Dec 13 01:51:53.942439 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:51:53.943124 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:51:53.943963 systemd-logind[1428]: Removed session 17.
Dec 13 01:51:59.043693 systemd[1]: Started sshd@15-10.200.8.37:22-10.200.16.10:39180.service.
Dec 13 01:51:59.668726 sshd[4229]: Accepted publickey for core from 10.200.16.10 port 39180 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:51:59.670229 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:51:59.674978 systemd-logind[1428]: New session 18 of user core.
Dec 13 01:51:59.675301 systemd[1]: Started session-18.scope.
Dec 13 01:52:00.167241 sshd[4229]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:00.170610 systemd[1]: sshd@15-10.200.8.37:22-10.200.16.10:39180.service: Deactivated successfully.
Dec 13 01:52:00.171717 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:52:00.172577 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:52:00.173806 systemd-logind[1428]: Removed session 18.
Dec 13 01:52:05.273871 systemd[1]: Started sshd@16-10.200.8.37:22-10.200.16.10:39196.service.
Dec 13 01:52:05.898318 sshd[4262]: Accepted publickey for core from 10.200.16.10 port 39196 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:05.899793 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:05.904626 systemd-logind[1428]: New session 19 of user core.
Dec 13 01:52:05.905265 systemd[1]: Started session-19.scope.
Dec 13 01:52:06.396826 sshd[4262]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:06.400207 systemd[1]: sshd@16-10.200.8.37:22-10.200.16.10:39196.service: Deactivated successfully.
Dec 13 01:52:06.401311 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:52:06.402203 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:52:06.403222 systemd-logind[1428]: Removed session 19.
Dec 13 01:52:11.504491 systemd[1]: Started sshd@17-10.200.8.37:22-10.200.16.10:58034.service.
Dec 13 01:52:12.130716 sshd[4320]: Accepted publickey for core from 10.200.16.10 port 58034 ssh2: RSA SHA256:t16aFHvQKfPoAwlQZqbEr00BgbjT/QwXGm40cf1AA4M
Dec 13 01:52:12.132306 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:52:12.137606 systemd[1]: Started session-20.scope.
Dec 13 01:52:12.138300 systemd-logind[1428]: New session 20 of user core.
Dec 13 01:52:12.624574 sshd[4320]: pam_unix(sshd:session): session closed for user core
Dec 13 01:52:12.627606 systemd[1]: sshd@17-10.200.8.37:22-10.200.16.10:58034.service: Deactivated successfully.
Dec 13 01:52:12.628474 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:52:12.628970 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:52:12.629758 systemd-logind[1428]: Removed session 20.