Dec 13 14:28:16.040771 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:28:16.040800 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:28:16.040813 kernel: BIOS-provided physical RAM map: Dec 13 14:28:16.040823 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:28:16.040832 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Dec 13 14:28:16.040841 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Dec 13 14:28:16.040856 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Dec 13 14:28:16.040866 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Dec 13 14:28:16.040876 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Dec 13 14:28:16.040886 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Dec 13 14:28:16.040897 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Dec 13 14:28:16.040908 kernel: printk: bootconsole [earlyser0] enabled Dec 13 14:28:16.040918 kernel: NX (Execute Disable) protection: active Dec 13 14:28:16.040929 kernel: efi: EFI v2.70 by Microsoft Dec 13 14:28:16.040947 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018 Dec 13 14:28:16.040959 kernel: random: crng init done Dec 13 14:28:16.040969 kernel: SMBIOS 3.1.0 present. 
Dec 13 14:28:16.040979 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Dec 13 14:28:16.040989 kernel: Hypervisor detected: Microsoft Hyper-V Dec 13 14:28:16.040998 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Dec 13 14:28:16.041008 kernel: Hyper-V Host Build:20348-10.0-1-0.1633 Dec 13 14:28:16.041019 kernel: Hyper-V: Nested features: 0x1e0101 Dec 13 14:28:16.041034 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Dec 13 14:28:16.041045 kernel: Hyper-V: Using hypercall for remote TLB flush Dec 13 14:28:16.041056 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Dec 13 14:28:16.041068 kernel: tsc: Marking TSC unstable due to running on Hyper-V Dec 13 14:28:16.041081 kernel: tsc: Detected 2593.906 MHz processor Dec 13 14:28:16.041094 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:28:16.041107 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:28:16.041119 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Dec 13 14:28:16.041130 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:28:16.041147 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Dec 13 14:28:16.041164 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Dec 13 14:28:16.041176 kernel: Using GB pages for direct mapping Dec 13 14:28:16.041188 kernel: Secure boot disabled Dec 13 14:28:16.041201 kernel: ACPI: Early table checksum verification disabled Dec 13 14:28:16.041213 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Dec 13 14:28:16.041223 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041235 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041248 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Dec 13 14:28:16.041268 kernel: ACPI: FACS 0x000000003FFFE000 000040 Dec 13 14:28:16.041279 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041290 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041302 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041313 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041325 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041340 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041351 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 14:28:16.041362 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Dec 13 14:28:16.041372 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Dec 13 14:28:16.041382 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Dec 13 14:28:16.041390 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Dec 13 14:28:16.041400 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 14:28:16.041407 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 14:28:16.041419 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] 
Dec 13 14:28:16.041427 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 14:28:16.041454 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 14:28:16.041462 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 14:28:16.041472 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:28:16.041479 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:28:16.041486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 14:28:16.041495 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 14:28:16.041502 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 14:28:16.041514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 14:28:16.041521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 14:28:16.041530 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 14:28:16.041538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 14:28:16.041547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 14:28:16.041555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 14:28:16.041562 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 14:28:16.041571 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 14:28:16.041579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 14:28:16.041591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 14:28:16.041598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 14:28:16.041606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 14:28:16.041615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 14:28:16.041623 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 14:28:16.041632 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 14:28:16.041639 kernel: Zone ranges: Dec 13 14:28:16.041647 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:28:16.041656 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 14:28:16.041668 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:28:16.041676 kernel: Movable zone start for each node Dec 13 14:28:16.041683 kernel: Early memory node ranges Dec 13 14:28:16.041692 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:28:16.041700 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 14:28:16.041709 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 14:28:16.041716 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 14:28:16.041724 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 14:28:16.041733 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:28:16.041744 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:28:16.041751 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Dec 13 14:28:16.041758 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 14:28:16.041768 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 14:28:16.041775 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 
14:28:16.041785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:28:16.041792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:28:16.041800 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 14:28:16.041809 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:28:16.041820 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 14:28:16.041828 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 14:28:16.041835 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:28:16.041846 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:28:16.041853 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:28:16.041862 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:28:16.041869 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:28:16.041876 kernel: Hyper-V: PV spinlocks enabled Dec 13 14:28:16.041886 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:28:16.041897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 14:28:16.041905 kernel: Policy zone: Normal Dec 13 14:28:16.041912 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:28:16.041923 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:28:16.041930 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 14:28:16.041940 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:28:16.041947 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:28:16.041954 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 308056K reserved, 0K cma-reserved) Dec 13 14:28:16.041966 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:28:16.041975 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:28:16.041990 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:28:16.042002 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:28:16.042011 kernel: rcu: RCU event tracing is enabled. Dec 13 14:28:16.042020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:28:16.042027 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:28:16.042037 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:28:16.042045 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:28:16.042056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:28:16.042063 kernel: Using NULL legacy PIC Dec 13 14:28:16.042074 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 14:28:16.042082 kernel: Console: colour dummy device 80x25 Dec 13 14:28:16.042092 kernel: printk: console [tty1] enabled Dec 13 14:28:16.042100 kernel: printk: console [ttyS0] enabled Dec 13 14:28:16.042107 kernel: printk: bootconsole [earlyser0] disabled Dec 13 14:28:16.042119 kernel: ACPI: Core revision 20210730 Dec 13 14:28:16.042128 kernel: Failed to register legacy timer interrupt Dec 13 14:28:16.042137 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:28:16.042144 kernel: Hyper-V: Using IPI hypercalls Dec 13 14:28:16.042155 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Dec 13 14:28:16.042162 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 14:28:16.042173 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 14:28:16.042180 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:28:16.042188 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:28:16.042198 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:28:16.042210 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:28:16.042217 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 14:28:16.042225 kernel: RETBleed: Vulnerable Dec 13 14:28:16.042235 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:28:16.042242 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:28:16.042252 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:28:16.042259 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 14:28:16.042268 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:28:16.042277 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:28:16.042286 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:28:16.042296 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 14:28:16.042304 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 14:28:16.042314 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 14:28:16.042322 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:28:16.042331 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 14:28:16.042338 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 14:28:16.042347 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 14:28:16.042356 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 14:28:16.042366 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:28:16.042373 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:28:16.042381 kernel: LSM: Security Framework initializing Dec 13 14:28:16.042391 kernel: SELinux: Initializing. 
Dec 13 14:28:16.042402 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:28:16.042410 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:28:16.042418 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 14:28:16.042428 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 14:28:16.042444 kernel: signal: max sigframe size: 3632 Dec 13 14:28:16.042451 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:28:16.042460 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:28:16.042469 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:28:16.042478 kernel: x86: Booting SMP configuration: Dec 13 14:28:16.042486 kernel: .... node #0, CPUs: #1 Dec 13 14:28:16.042496 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 14:28:16.042506 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 14:28:16.042514 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:28:16.042524 kernel: smpboot: Max logical packages: 1 Dec 13 14:28:16.042531 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 14:28:16.042540 kernel: devtmpfs: initialized Dec 13 14:28:16.042549 kernel: x86/mm: Memory block size: 128MB Dec 13 14:28:16.042559 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 14:28:16.042569 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:28:16.042577 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:28:16.042586 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:28:16.042595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:28:16.042604 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:28:16.042611 kernel: audit: type=2000 audit(1734100095.023:1): state=initialized audit_enabled=0 res=1 Dec 13 14:28:16.042622 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:28:16.042629 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:28:16.042640 kernel: cpuidle: using governor menu Dec 13 14:28:16.042648 kernel: ACPI: bus type PCI registered Dec 13 14:28:16.042658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:28:16.042666 kernel: dca service started, version 1.12.1 Dec 13 14:28:16.042676 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:28:16.042684 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:28:16.042691 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:28:16.042701 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:28:16.042709 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:28:16.042719 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:28:16.042728 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:28:16.042738 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:28:16.042745 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:28:16.042756 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:28:16.042763 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:28:16.042771 kernel: ACPI: Interpreter enabled Dec 13 14:28:16.042780 kernel: ACPI: PM: (supports S0 S5) Dec 13 14:28:16.042789 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:28:16.042798 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:28:16.042807 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 14:28:16.042817 kernel: iommu: Default domain type: Translated Dec 13 14:28:16.042825 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:28:16.042835 kernel: vgaarb: loaded Dec 13 14:28:16.042842 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:28:16.042851 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:28:16.042860 kernel: PTP clock support registered Dec 13 14:28:16.042870 kernel: Registered efivars operations Dec 13 14:28:16.042877 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:28:16.042884 kernel: PCI: System does not support PCI Dec 13 14:28:16.042896 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 14:28:16.042905 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:28:16.042914 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:28:16.042921 kernel: pnp: PnP ACPI init Dec 13 14:28:16.042930 kernel: pnp: PnP ACPI: found 3 devices Dec 13 14:28:16.042939 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:28:16.042949 kernel: NET: Registered PF_INET protocol family Dec 13 14:28:16.042957 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:28:16.042968 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 14:28:16.042976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:28:16.042987 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:28:16.042994 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:28:16.043001 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 14:28:16.043012 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:28:16.043020 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 14:28:16.043029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:28:16.043036 kernel: NET: Registered PF_XDP protocol family Dec 13 14:28:16.043048 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:28:16.043056 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:28:16.043066 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Dec 13 14:28:16.043073 kernel: 
RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:28:16.043082 kernel: Initialise system trusted keyrings Dec 13 14:28:16.043091 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 14:28:16.043100 kernel: Key type asymmetric registered Dec 13 14:28:16.043108 kernel: Asymmetric key parser 'x509' registered Dec 13 14:28:16.043115 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:28:16.043127 kernel: io scheduler mq-deadline registered Dec 13 14:28:16.043135 kernel: io scheduler kyber registered Dec 13 14:28:16.043145 kernel: io scheduler bfq registered Dec 13 14:28:16.043152 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:28:16.043161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:28:16.043170 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:28:16.043179 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 14:28:16.043187 kernel: i8042: PNP: No PS/2 controller found. Dec 13 14:28:16.043315 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 14:28:16.043405 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T14:28:15 UTC (1734100095) Dec 13 14:28:16.043498 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 14:28:16.043509 kernel: fail to initialize ptp_kvm Dec 13 14:28:16.043516 kernel: intel_pstate: CPU model not supported Dec 13 14:28:16.043526 kernel: efifb: probing for efifb Dec 13 14:28:16.043535 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 14:28:16.043546 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 14:28:16.043553 kernel: efifb: scrolling: redraw Dec 13 14:28:16.043565 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:28:16.043573 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:28:16.043584 kernel: fb0: EFI VGA frame buffer device Dec 13 14:28:16.043591 kernel: pstore: Registered efi as persistent store backend Dec 13 14:28:16.043599 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:28:16.043608 kernel: Segment Routing with IPv6 Dec 13 14:28:16.043617 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:28:16.043626 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:28:16.043633 kernel: Key type dns_resolver registered Dec 13 14:28:16.043646 kernel: IPI shorthand broadcast: enabled Dec 13 14:28:16.043654 kernel: sched_clock: Marking stable (780003400, 22877900)->(1001438600, -198557300) Dec 13 14:28:16.043664 kernel: registered taskstats version 1 Dec 13 14:28:16.043672 kernel: Loading compiled-in X.509 certificates Dec 13 14:28:16.043681 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:28:16.043690 kernel: Key type .fscrypt registered Dec 13 14:28:16.043699 kernel: Key type fscrypt-provisioning registered Dec 13 14:28:16.043706 kernel: pstore: Using crash dump compression: deflate Dec 13 14:28:16.043715 kernel: ima: No TPM chip found, activating TPM-bypass! 
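As an illustrative aside (not output captured in this boot log): the rtc_cmos entry above records the same instant twice, as a wall-clock time and as a Unix epoch value, "setting system clock to 2024-12-13T14:28:15 UTC (1734100095)". The audit timestamps elsewhere in the log (for example audit(1734100095.023:1) and audit(1734100096.053:2)) use the same epoch scale, so the correspondence can be checked with a couple of lines of Python:

    # Convert the epoch value printed by rtc_cmos back into a UTC timestamp.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1734100095, tz=timezone.utc).isoformat())
    # -> 2024-12-13T14:28:15+00:00, matching "2024-12-13T14:28:15 UTC (1734100095)"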
Dec 13 14:28:16.043722 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:28:16.043729 kernel: ima: No architecture policies found Dec 13 14:28:16.043737 kernel: clk: Disabling unused clocks Dec 13 14:28:16.043744 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:28:16.043751 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:28:16.043758 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:28:16.043765 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:28:16.043772 kernel: Run /init as init process Dec 13 14:28:16.043779 kernel: with arguments: Dec 13 14:28:16.043788 kernel: /init Dec 13 14:28:16.043795 kernel: with environment: Dec 13 14:28:16.043802 kernel: HOME=/ Dec 13 14:28:16.043812 kernel: TERM=linux Dec 13 14:28:16.043819 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:28:16.043829 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:28:16.043840 systemd[1]: Detected virtualization microsoft. Dec 13 14:28:16.043853 systemd[1]: Detected architecture x86-64. Dec 13 14:28:16.043860 systemd[1]: Running in initrd. Dec 13 14:28:16.043868 systemd[1]: No hostname configured, using default hostname. Dec 13 14:28:16.043878 systemd[1]: Hostname set to <localhost>. Dec 13 14:28:16.043888 systemd[1]: Initializing machine ID from random generator. Dec 13 14:28:16.043897 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:28:16.043905 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:28:16.043914 systemd[1]: Reached target cryptsetup.target. Dec 13 14:28:16.043923 systemd[1]: Reached target paths.target. Dec 13 14:28:16.043935 systemd[1]: Reached target slices.target. Dec 13 14:28:16.043943 systemd[1]: Reached target swap.target. Dec 13 14:28:16.043952 systemd[1]: Reached target timers.target. Dec 13 14:28:16.043962 systemd[1]: Listening on iscsid.socket. Dec 13 14:28:16.043972 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:28:16.043980 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:28:16.043988 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:28:16.044000 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:28:16.044010 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:28:16.044019 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:28:16.044026 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:28:16.044034 systemd[1]: Reached target sockets.target. Dec 13 14:28:16.044044 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:28:16.044054 systemd[1]: Finished network-cleanup.service. Dec 13 14:28:16.044063 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:28:16.044074 systemd[1]: Starting systemd-journald.service... Dec 13 14:28:16.044091 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:28:16.044106 systemd[1]: Starting systemd-resolved.service... Dec 13 14:28:16.044120 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:28:16.044135 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:28:16.044154 systemd-journald[183]: Journal started Dec 13 14:28:16.044225 systemd-journald[183]: Runtime Journal (/run/log/journal/527c4fe3c8e142248fcb4890fa84b7b2) is 8.0M, max 159.0M, 151.0M free. Dec 13 14:28:16.016539 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 14:28:16.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.069300 kernel: audit: type=1130 audit(1734100096.053:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.069335 systemd[1]: Started systemd-journald.service. Dec 13 14:28:16.072090 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:28:16.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.092007 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:28:16.094419 kernel: audit: type=1130 audit(1734100096.071:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.104383 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:28:16.106942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:28:16.109994 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:28:16.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.140856 kernel: audit: type=1130 audit(1734100096.091:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.140893 kernel: audit: type=1130 audit(1734100096.095:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.133259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:28:16.150068 kernel: Bridge firewalling registered Dec 13 14:28:16.146016 systemd-resolved[185]: Positive Trust Anchors: Dec 13 14:28:16.146025 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:28:16.181219 kernel: audit: type=1130 audit(1734100096.143:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.181247 kernel: audit: type=1130 audit(1734100096.165:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.146060 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:28:16.164367 systemd-resolved[185]: Defaulting to hostname 'linux'. Dec 13 14:28:16.165163 systemd[1]: Started systemd-resolved.service. Dec 13 14:28:16.206639 kernel: SCSI subsystem initialized Dec 13 14:28:16.166010 systemd[1]: Reached target nss-lookup.target. Dec 13 14:28:16.179070 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 14:28:16.204001 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:28:16.231462 kernel: audit: type=1130 audit(1734100096.214:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.216159 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:28:16.240480 dracut-cmdline[201]: dracut-dracut-053 Dec 13 14:28:16.243021 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:28:16.270920 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:28:16.270959 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:28:16.271875 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:28:16.280581 systemd-modules-load[184]: Inserted module 'dm_multipath' Dec 13 14:28:16.281415 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:28:16.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.289069 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:16.306013 kernel: audit: type=1130 audit(1734100096.287:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:16.314114 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:16.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.331450 kernel: audit: type=1130 audit(1734100096.316:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.335446 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:28:16.354448 kernel: iscsi: registered transport (tcp) Dec 13 14:28:16.381228 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:28:16.381271 kernel: QLogic iSCSI HBA Driver Dec 13 14:28:16.409707 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:28:16.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.416039 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:28:16.466448 kernel: raid6: avx512x4 gen() 18626 MB/s Dec 13 14:28:16.486444 kernel: raid6: avx512x4 xor() 8451 MB/s Dec 13 14:28:16.506442 kernel: raid6: avx512x2 gen() 18584 MB/s Dec 13 14:28:16.526444 kernel: raid6: avx512x2 xor() 29975 MB/s Dec 13 14:28:16.546440 kernel: raid6: avx512x1 gen() 18524 MB/s Dec 13 14:28:16.566440 kernel: raid6: avx512x1 xor() 26894 MB/s Dec 13 14:28:16.587446 kernel: raid6: avx2x4 gen() 18500 MB/s Dec 13 14:28:16.607440 kernel: raid6: avx2x4 xor() 7996 MB/s Dec 13 14:28:16.627441 kernel: raid6: avx2x2 gen() 18489 MB/s Dec 13 14:28:16.647444 kernel: raid6: avx2x2 xor() 22185 MB/s Dec 13 14:28:16.667441 kernel: raid6: avx2x1 gen() 14201 MB/s Dec 13 14:28:16.687440 kernel: raid6: avx2x1 xor() 19476 MB/s Dec 13 14:28:16.708445 kernel: raid6: sse2x4 gen() 11746 MB/s Dec 13 14:28:16.727446 kernel: raid6: sse2x4 xor() 7338 MB/s Dec 13 14:28:16.747441 kernel: raid6: sse2x2 gen() 12934 MB/s Dec 13 14:28:16.768444 kernel: raid6: sse2x2 xor() 7473 MB/s Dec 13 14:28:16.788440 kernel: raid6: sse2x1 gen() 11671 MB/s Dec 13 14:28:16.811359 kernel: raid6: sse2x1 xor() 5923 MB/s Dec 13 14:28:16.811389 kernel: raid6: using algorithm avx512x4 gen() 18626 MB/s Dec 13 14:28:16.811400 kernel: raid6: .... xor() 8451 MB/s, rmw enabled Dec 13 14:28:16.815939 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:28:16.834452 kernel: xor: automatically using best checksumming function avx Dec 13 14:28:16.930456 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:28:16.938298 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:28:16.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:16.942000 audit: BPF prog-id=7 op=LOAD Dec 13 14:28:16.942000 audit: BPF prog-id=8 op=LOAD Dec 13 14:28:16.943569 systemd[1]: Starting systemd-udevd.service... Dec 13 14:28:16.958352 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 14:28:16.963022 systemd[1]: Started systemd-udevd.service. Dec 13 14:28:16.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:16.972854 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:28:16.986815 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Dec 13 14:28:17.015169 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:28:17.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:17.018450 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:28:17.053744 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:28:17.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:17.107450 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:28:17.112522 kernel: hv_vmbus: Vmbus version:5.2 Dec 13 14:28:17.134738 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:28:17.134774 kernel: AES CTR mode by8 optimization enabled Dec 13 14:28:17.145452 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 14:28:17.161445 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Dec 13 14:28:17.172102 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 14:28:17.181444 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 14:28:17.182468 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:28:17.187827 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 14:28:17.191555 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Dec 13 14:28:17.191594 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 14:28:17.205944 kernel: scsi host0: storvsc_host_t Dec 13 14:28:17.206123 kernel: scsi host1: storvsc_host_t Dec 13 14:28:17.206151 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 14:28:17.221452 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 14:28:17.247735 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 14:28:17.264677 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:28:17.264691 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 14:28:17.272882 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 14:28:17.273007 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 14:28:17.273108 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 14:28:17.273204 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 14:28:17.273300 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 14:28:17.273400 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:28:17.273411 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 14:28:17.297458 kernel: hv_netvsc 7c1e5237-df0c-7c1e-5237-df0c7c1e5237 eth0: VF slot 1 added Dec 13 14:28:17.307451 kernel: hv_vmbus: registering driver hv_pci Dec 13 14:28:17.315452 kernel: hv_pci d78c2272-170d-4edf-9df8-e51afc4cd9b0: PCI VMBus probing: Using version 0x10004 Dec 13 14:28:17.389543 kernel: hv_pci d78c2272-170d-4edf-9df8-e51afc4cd9b0: PCI host bridge to bus 170d:00 Dec 13 14:28:17.389729 kernel: pci_bus 170d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Dec 13 14:28:17.389889 kernel: pci_bus 170d:00: No busn resource found for root bus, will use 
[bus 00-ff] Dec 13 14:28:17.390031 kernel: pci 170d:00:02.0: [15b3:1016] type 00 class 0x020000 Dec 13 14:28:17.390197 kernel: pci 170d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:28:17.390352 kernel: pci 170d:00:02.0: enabling Extended Tags Dec 13 14:28:17.390525 kernel: pci 170d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 170d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 14:28:17.390679 kernel: pci_bus 170d:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 14:28:17.390819 kernel: pci 170d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Dec 13 14:28:17.482458 kernel: mlx5_core 170d:00:02.0: firmware version: 14.30.5000 Dec 13 14:28:17.744593 kernel: mlx5_core 170d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 14:28:17.744778 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (453) Dec 13 14:28:17.744797 kernel: mlx5_core 170d:00:02.0: Supported tc offload range - chains: 1, prios: 1 Dec 13 14:28:17.744947 kernel: mlx5_core 170d:00:02.0: mlx5e_tc_post_act_init:40:(pid 190): firmware level support is missing Dec 13 14:28:17.745080 kernel: hv_netvsc 7c1e5237-df0c-7c1e-5237-df0c7c1e5237 eth0: VF registering: eth1 Dec 13 14:28:17.745184 kernel: mlx5_core 170d:00:02.0 eth1: joined to eth0 Dec 13 14:28:17.645328 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:28:17.676894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:28:17.756446 kernel: mlx5_core 170d:00:02.0 enP5901s1: renamed from eth1 Dec 13 14:28:17.831585 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:28:17.846485 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:28:17.853180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:28:17.856608 systemd[1]: Starting disk-uuid.service... Dec 13 14:28:18.877283 disk-uuid[562]: The operation has completed successfully. Dec 13 14:28:18.880088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:28:18.948599 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:28:18.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:18.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:18.948701 systemd[1]: Finished disk-uuid.service. Dec 13 14:28:18.964734 systemd[1]: Starting verity-setup.service... Dec 13 14:28:18.998456 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:28:19.259188 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:28:19.262802 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:28:19.266992 systemd[1]: Finished verity-setup.service. Dec 13 14:28:19.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.339143 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:28:19.343002 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Dec 13 14:28:19.341154 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:28:19.343148 systemd[1]: Starting ignition-setup.service... Dec 13 14:28:19.351639 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:28:19.376272 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:19.376312 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:19.376330 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:19.420778 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:28:19.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.425000 audit: BPF prog-id=9 op=LOAD Dec 13 14:28:19.426536 systemd[1]: Starting systemd-networkd.service... Dec 13 14:28:19.449178 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:28:19.455684 systemd-networkd[832]: lo: Link UP Dec 13 14:28:19.455694 systemd-networkd[832]: lo: Gained carrier Dec 13 14:28:19.459739 systemd-networkd[832]: Enumeration completed Dec 13 14:28:19.460188 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:19.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.463470 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:28:19.466202 systemd[1]: Reached target network.target. Dec 13 14:28:19.473550 systemd[1]: Starting iscsiuio.service... Dec 13 14:28:19.479253 systemd[1]: Started iscsiuio.service. Dec 13 14:28:19.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.481975 systemd[1]: Starting iscsid.service... Dec 13 14:28:19.485852 iscsid[841]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:28:19.485852 iscsid[841]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:28:19.485852 iscsid[841]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:28:19.485852 iscsid[841]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:28:19.485852 iscsid[841]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:28:19.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.518358 iscsid[841]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:28:19.491233 systemd[1]: Started iscsid.service. Dec 13 14:28:19.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 14:28:19.508222 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:28:19.521640 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:28:19.524554 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:28:19.541939 kernel: mlx5_core 170d:00:02.0 enP5901s1: Link up Dec 13 14:28:19.528664 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:28:19.533029 systemd[1]: Reached target remote-fs.target. Dec 13 14:28:19.541909 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:28:19.552102 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:28:19.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.561555 systemd[1]: Finished ignition-setup.service. Dec 13 14:28:19.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:19.566040 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:28:19.580221 kernel: hv_netvsc 7c1e5237-df0c-7c1e-5237-df0c7c1e5237 eth0: Data path switched to VF: enP5901s1 Dec 13 14:28:19.580485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:28:19.580675 systemd-networkd[832]: enP5901s1: Link UP Dec 13 14:28:19.580807 systemd-networkd[832]: eth0: Link UP Dec 13 14:28:19.580998 systemd-networkd[832]: eth0: Gained carrier Dec 13 14:28:19.587545 systemd-networkd[832]: enP5901s1: Gained carrier Dec 13 14:28:19.646533 systemd-networkd[832]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:28:21.552677 systemd-networkd[832]: eth0: Gained IPv6LL Dec 13 14:28:22.697699 ignition[856]: Ignition 2.14.0 Dec 13 14:28:22.697715 ignition[856]: Stage: fetch-offline Dec 13 14:28:22.697803 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:22.697856 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:22.784095 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:22.784279 ignition[856]: parsed url from cmdline: "" Dec 13 14:28:22.784282 ignition[856]: no config URL provided Dec 13 14:28:22.784288 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:28:22.784298 ignition[856]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:28:22.784304 ignition[856]: failed to fetch config: resource requires networking Dec 13 14:28:22.787551 ignition[856]: Ignition finished successfully Dec 13 14:28:22.798298 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:28:22.809192 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 14:28:22.809251 kernel: audit: type=1130 audit(1734100102.803:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.804428 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:28:22.814085 ignition[862]: Ignition 2.14.0 Dec 13 14:28:22.814091 ignition[862]: Stage: fetch Dec 13 14:28:22.814199 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:22.814226 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:22.817495 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:22.836932 ignition[862]: parsed url from cmdline: "" Dec 13 14:28:22.837005 ignition[862]: no config URL provided Dec 13 14:28:22.837017 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:28:22.837032 ignition[862]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:28:22.837071 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 14:28:22.932151 ignition[862]: GET result: OK Dec 13 14:28:22.932298 ignition[862]: config has been read from IMDS userdata Dec 13 14:28:22.932339 ignition[862]: parsing config with SHA512: 5dbbf7ce3994e7538035626206991fc116f377e7597579459a0e002175a80ed0ecd62dde39e95bfd2bba60b4614aadd31f50c79646176a469f7f7b745772014a Dec 13 14:28:22.936329 unknown[862]: fetched base config from "system" Dec 13 14:28:22.937028 ignition[862]: fetch: fetch complete Dec 13 14:28:22.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.936336 unknown[862]: fetched base config from "system" Dec 13 14:28:22.958469 kernel: audit: type=1130 audit(1734100102.941:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.937034 ignition[862]: fetch: fetch passed Dec 13 14:28:22.936342 unknown[862]: fetched user config from "azure" Dec 13 14:28:22.937102 ignition[862]: Ignition finished successfully Dec 13 14:28:22.938512 systemd[1]: Finished ignition-fetch.service. Dec 13 14:28:22.942463 systemd[1]: Starting ignition-kargs.service... Dec 13 14:28:22.975944 ignition[868]: Ignition 2.14.0 Dec 13 14:28:22.975954 ignition[868]: Stage: kargs Dec 13 14:28:22.976098 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:22.976131 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:22.980073 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:22.983974 ignition[868]: kargs: kargs passed Dec 13 14:28:22.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.985812 systemd[1]: Finished ignition-kargs.service. Dec 13 14:28:23.003946 kernel: audit: type=1130 audit(1734100102.987:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:22.984015 ignition[868]: Ignition finished successfully Dec 13 14:28:22.988791 systemd[1]: Starting ignition-disks.service... 
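As an illustrative sketch (not output from this boot log): the fetch stage above reads the machine config from the Azure Instance Metadata Service URL recorded in the log, http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text. The same endpoint can be queried by hand from inside the VM; the "Metadata: true" request header and the base64 decoding step reflect general Azure IMDS behaviour and are assumptions here, not details shown in the log itself.

    # Sketch: fetch the same IMDS userData document that Ignition reports
    # reading above. Assumes it runs on the Azure VM itself, since IMDS is
    # only reachable from inside the instance.
    import base64
    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})

    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data_b64 = resp.read()

    # IMDS returns userData base64-encoded; the decoded bytes correspond to
    # the config the log says "has been read from IMDS userdata".
    print(base64.b64decode(user_data_b64).decode("utf-8", errors="replace"))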
Dec 13 14:28:23.009219 ignition[874]: Ignition 2.14.0 Dec 13 14:28:23.009228 ignition[874]: Stage: disks Dec 13 14:28:23.009360 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:23.009391 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:23.012642 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:23.016525 ignition[874]: disks: disks passed Dec 13 14:28:23.016571 ignition[874]: Ignition finished successfully Dec 13 14:28:23.023293 systemd[1]: Finished ignition-disks.service. Dec 13 14:28:23.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.028354 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:28:23.045387 kernel: audit: type=1130 audit(1734100103.027:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.045392 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:28:23.049588 systemd[1]: Reached target local-fs.target. Dec 13 14:28:23.053532 systemd[1]: Reached target sysinit.target. Dec 13 14:28:23.057754 systemd[1]: Reached target basic.target. Dec 13 14:28:23.062458 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:28:23.117110 systemd-fsck[882]: ROOT: clean, 621/7326000 files, 481077/7359488 blocks Dec 13 14:28:23.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.126957 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:28:23.146412 kernel: audit: type=1130 audit(1734100103.129:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.130326 systemd[1]: Mounting sysroot.mount... Dec 13 14:28:23.159447 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:28:23.159711 systemd[1]: Mounted sysroot.mount. Dec 13 14:28:23.161747 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:28:23.192955 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:28:23.194726 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 14:28:23.194894 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:28:23.194934 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:28:23.201056 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:28:23.253790 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:28:23.260046 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 14:28:23.268039 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (893) Dec 13 14:28:23.278669 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:23.278704 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:23.278716 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:23.282453 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:28:23.290139 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:23.311930 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:28:23.331871 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:28:23.339658 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:28:23.817622 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:28:23.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.833900 systemd[1]: Starting ignition-mount.service... Dec 13 14:28:23.843511 kernel: audit: type=1130 audit(1734100103.819:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.838857 systemd[1]: Starting sysroot-boot.service... Dec 13 14:28:23.849705 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:23.849829 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:23.864613 systemd[1]: Finished sysroot-boot.service. Dec 13 14:28:23.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.879481 kernel: audit: type=1130 audit(1734100103.866:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.888550 ignition[961]: INFO : Ignition 2.14.0 Dec 13 14:28:23.888550 ignition[961]: INFO : Stage: mount Dec 13 14:28:23.893251 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:23.893251 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:23.905409 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:23.911206 ignition[961]: INFO : mount: mount passed Dec 13 14:28:23.913116 ignition[961]: INFO : Ignition finished successfully Dec 13 14:28:23.915531 systemd[1]: Finished ignition-mount.service. Dec 13 14:28:23.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:23.932447 kernel: audit: type=1130 audit(1734100103.919:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:24.769092 coreos-metadata[892]: Dec 13 14:28:24.769 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 14:28:24.808645 coreos-metadata[892]: Dec 13 14:28:24.808 INFO Fetch successful Dec 13 14:28:24.844176 coreos-metadata[892]: Dec 13 14:28:24.844 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 14:28:24.855407 coreos-metadata[892]: Dec 13 14:28:24.855 INFO Fetch successful Dec 13 14:28:24.883967 coreos-metadata[892]: Dec 13 14:28:24.883 INFO wrote hostname ci-3510.3.6-a-b3ffbcfb3b to /sysroot/etc/hostname Dec 13 14:28:24.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:24.886134 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 14:28:24.909322 kernel: audit: type=1130 audit(1734100104.886:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:24.887589 systemd[1]: Starting ignition-files.service... Dec 13 14:28:24.914172 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:28:24.925450 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (971) Dec 13 14:28:24.925479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:24.933881 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:28:24.933904 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:28:24.942355 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:24.955785 ignition[990]: INFO : Ignition 2.14.0 Dec 13 14:28:24.955785 ignition[990]: INFO : Stage: files Dec 13 14:28:24.960669 ignition[990]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:24.960669 ignition[990]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:24.974301 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:25.004953 ignition[990]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:28:25.008496 ignition[990]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:28:25.008496 ignition[990]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:28:25.045264 ignition[990]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:28:25.048932 ignition[990]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:28:25.060402 unknown[990]: wrote ssh authorized keys file for user: core Dec 13 14:28:25.063028 ignition[990]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:28:25.080022 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:25.085067 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:28:25.601191 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:28:25.728582 ignition[990]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:28:25.734815 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:25.734815 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:28:25.734815 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:25.748482 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:28:25.748482 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:25.757552 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:28:25.762163 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:25.766574 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:28:25.771187 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:25.775813 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:28:25.780268 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:25.787127 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:25.793594 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:28:25.798351 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:25.809356 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem879425503" Dec 13 14:28:25.819926 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (995) Dec 13 14:28:25.819957 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem879425503": device or resource busy Dec 13 14:28:25.819957 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem879425503", trying btrfs: device or resource busy Dec 13 14:28:25.819957 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem879425503" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem879425503" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): op(d): [started] unmounting "/mnt/oem879425503" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem879425503" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:28:25.836952 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:28:25.832651 systemd[1]: mnt-oem879425503.mount: Deactivated successfully. Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1684289088" Dec 13 14:28:25.869052 ignition[990]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1684289088": device or resource busy Dec 13 14:28:25.869052 ignition[990]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1684289088", trying btrfs: device or resource busy Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1684289088" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1684289088" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem1684289088" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem1684289088" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:25.869052 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:28:25.853459 systemd[1]: mnt-oem1684289088.mount: Deactivated successfully. 
Dec 13 14:28:26.391716 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK Dec 13 14:28:26.801008 ignition[990]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:28:26.801008 ignition[990]: INFO : files: op(13): [started] processing unit "waagent.service" Dec 13 14:28:26.801008 ignition[990]: INFO : files: op(13): [finished] processing unit "waagent.service" Dec 13 14:28:26.801008 ignition[990]: INFO : files: op(14): [started] processing unit "nvidia.service" Dec 13 14:28:26.801008 ignition[990]: INFO : files: op(14): [finished] processing unit "nvidia.service" Dec 13 14:28:26.801008 ignition[990]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:26.823304 ignition[990]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:28:26.858445 ignition[990]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:26.858445 ignition[990]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:28:26.858445 ignition[990]: INFO : files: files passed Dec 13 14:28:26.858445 ignition[990]: INFO : Ignition finished successfully Dec 13 14:28:26.872259 systemd[1]: Finished ignition-files.service. Dec 13 14:28:26.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.877586 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:28:26.895588 kernel: audit: type=1130 audit(1734100106.874:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.889260 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:28:26.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:26.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.890084 systemd[1]: Starting ignition-quench.service... Dec 13 14:28:26.911438 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:28:26.893079 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:28:26.893174 systemd[1]: Finished ignition-quench.service. Dec 13 14:28:26.898730 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:28:26.905071 systemd[1]: Reached target ignition-complete.target. Dec 13 14:28:26.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.908101 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:28:26.925165 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:28:26.925257 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:28:26.929290 systemd[1]: Reached target initrd-fs.target. Dec 13 14:28:26.931286 systemd[1]: Reached target initrd.target. Dec 13 14:28:26.933317 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:28:26.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.934050 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:28:26.946802 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:28:26.949874 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:28:26.964420 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:28:26.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:26.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.054287 iscsid[841]: iscsid shutting down. Dec 13 14:28:26.964665 systemd[1]: Stopped target remote-cryptsetup.target. 
Dec 13 14:28:27.058876 ignition[1028]: INFO : Ignition 2.14.0 Dec 13 14:28:27.058876 ignition[1028]: INFO : Stage: umount Dec 13 14:28:27.058876 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:27.058876 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Dec 13 14:28:27.058876 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 14:28:27.058876 ignition[1028]: INFO : umount: umount passed Dec 13 14:28:27.058876 ignition[1028]: INFO : Ignition finished successfully Dec 13 14:28:26.965156 systemd[1]: Stopped target timers.target. Dec 13 14:28:26.965569 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:28:26.965687 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:28:26.966102 systemd[1]: Stopped target initrd.target. Dec 13 14:28:26.966420 systemd[1]: Stopped target basic.target. Dec 13 14:28:27.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.966892 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:28:26.967302 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:28:26.967729 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:28:26.968273 systemd[1]: Stopped target remote-fs.target. Dec 13 14:28:26.968745 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:28:26.969182 systemd[1]: Stopped target sysinit.target. Dec 13 14:28:26.969595 systemd[1]: Stopped target local-fs.target. Dec 13 14:28:26.970038 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:28:26.970472 systemd[1]: Stopped target swap.target. Dec 13 14:28:26.970843 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:28:26.970957 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:28:26.971420 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:28:26.971739 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:28:26.971849 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:28:27.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.972260 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:28:26.972378 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:28:26.972656 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:28:26.972762 systemd[1]: Stopped ignition-files.service. Dec 13 14:28:26.973094 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:28:26.973204 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 14:28:26.974626 systemd[1]: Stopping ignition-mount.service... Dec 13 14:28:26.977614 systemd[1]: Stopping iscsid.service... Dec 13 14:28:26.977800 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:28:26.977930 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:28:26.979183 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:28:26.979582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:28:26.979733 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 14:28:26.980139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:28:27.154000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:28:26.980252 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:28:27.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.982331 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:28:26.982458 systemd[1]: Stopped iscsid.service. Dec 13 14:28:27.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:26.983632 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:28:26.991953 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:28:27.001158 systemd[1]: Stopping iscsiuio.service... Dec 13 14:28:27.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.002024 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:28:27.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.007702 systemd[1]: Stopped iscsiuio.service. Dec 13 14:28:27.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.016039 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:28:27.016118 systemd[1]: Stopped ignition-mount.service. Dec 13 14:28:27.016320 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:28:27.016358 systemd[1]: Stopped ignition-disks.service. Dec 13 14:28:27.016687 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:28:27.219687 kernel: hv_netvsc 7c1e5237-df0c-7c1e-5237-df0c7c1e5237 eth0: Data path switched from VF: enP5901s1 Dec 13 14:28:27.016718 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:28:27.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.017097 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:28:27.017128 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:28:27.017510 systemd[1]: Stopped target network.target. Dec 13 14:28:27.017932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:28:27.017963 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:28:27.018376 systemd[1]: Stopped target paths.target. 
Dec 13 14:28:27.018776 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:28:27.021397 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:28:27.021798 systemd[1]: Stopped target slices.target. Dec 13 14:28:27.022211 systemd[1]: Stopped target sockets.target. Dec 13 14:28:27.024156 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:28:27.024191 systemd[1]: Closed iscsid.socket. Dec 13 14:28:27.024521 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:28:27.024549 systemd[1]: Closed iscsiuio.socket. Dec 13 14:28:27.024865 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:28:27.024901 systemd[1]: Stopped ignition-setup.service. Dec 13 14:28:27.025556 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:28:27.027214 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:28:27.042783 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:28:27.057046 systemd-networkd[832]: eth0: DHCPv6 lease lost Dec 13 14:28:27.232000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:28:27.060481 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:28:27.060595 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:28:27.102631 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:28:27.110462 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:28:27.121764 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:28:27.121820 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:28:27.154724 systemd[1]: Stopping network-cleanup.service... Dec 13 14:28:27.156662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:28:27.156735 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:28:27.159123 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:28:27.159178 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:28:27.161249 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:28:27.161299 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:28:27.163938 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:28:27.167042 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:28:27.171720 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:28:27.171865 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:28:27.175931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:28:27.175974 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:28:27.180517 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:28:27.180554 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:28:27.184856 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:28:27.184902 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:28:27.188824 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:28:27.188870 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:28:27.193301 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:28:27.193348 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:28:27.198109 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:28:27.209457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:28:27.209513 systemd[1]: Stopped systemd-vconsole-setup.service. 
Dec 13 14:28:27.224233 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:28:27.232766 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:28:27.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.342491 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:28:27.342586 systemd[1]: Stopped network-cleanup.service. Dec 13 14:28:27.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.567326 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:28:27.567503 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:28:27.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.577485 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:28:27.582174 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:28:27.582251 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:28:27.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:27.589399 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:28:27.600661 systemd[1]: Switching root. Dec 13 14:28:27.623749 systemd-journald[183]: Journal stopped Dec 13 14:28:56.342489 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 14:28:56.342517 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:28:56.342530 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:28:56.342540 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:28:56.342554 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:28:56.342570 kernel: SELinux: policy capability open_perms=1 Dec 13 14:28:56.342590 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:28:56.342605 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:28:56.342621 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:28:56.342636 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:28:56.342657 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:28:56.342672 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:28:56.342688 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:28:56.342708 kernel: audit: type=1403 audit(1734100110.012:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:28:56.342730 systemd[1]: Successfully loaded SELinux policy in 353.126ms. Dec 13 14:28:56.342749 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.392ms. 
Dec 13 14:28:56.342767 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:28:56.342790 systemd[1]: Detected virtualization microsoft. Dec 13 14:28:56.342811 systemd[1]: Detected architecture x86-64. Dec 13 14:28:56.342832 systemd[1]: Detected first boot. Dec 13 14:28:56.342852 systemd[1]: Hostname set to . Dec 13 14:28:56.342871 systemd[1]: Initializing machine ID from random generator. Dec 13 14:28:56.342890 kernel: audit: type=1400 audit(1734100110.661:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:56.342907 kernel: audit: type=1400 audit(1734100110.703:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:56.342930 kernel: audit: type=1400 audit(1734100110.703:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:56.342951 kernel: audit: type=1334 audit(1734100110.716:85): prog-id=10 op=LOAD Dec 13 14:28:56.342967 kernel: audit: type=1334 audit(1734100110.716:86): prog-id=10 op=UNLOAD Dec 13 14:28:56.342983 kernel: audit: type=1334 audit(1734100110.716:87): prog-id=11 op=LOAD Dec 13 14:28:56.343001 kernel: audit: type=1334 audit(1734100110.716:88): prog-id=11 op=UNLOAD Dec 13 14:28:56.343016 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:28:56.343033 kernel: audit: type=1400 audit(1734100112.050:89): avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:28:56.343050 kernel: audit: type=1300 audit(1734100112.050:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:56.343071 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:28:56.343090 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:56.343109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:56.343127 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:28:56.343144 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:28:56.343160 kernel: audit: type=1334 audit(1734100135.754:91): prog-id=12 op=LOAD Dec 13 14:28:56.343178 kernel: audit: type=1334 audit(1734100135.754:92): prog-id=3 op=UNLOAD Dec 13 14:28:56.343198 kernel: audit: type=1334 audit(1734100135.758:93): prog-id=13 op=LOAD Dec 13 14:28:56.343226 kernel: audit: type=1334 audit(1734100135.763:94): prog-id=14 op=LOAD Dec 13 14:28:56.343242 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:28:56.343261 kernel: audit: type=1334 audit(1734100135.763:95): prog-id=4 op=UNLOAD Dec 13 14:28:56.343278 kernel: audit: type=1334 audit(1734100135.763:96): prog-id=5 op=UNLOAD Dec 13 14:28:56.343295 kernel: audit: type=1131 audit(1734100135.764:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.343315 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:28:56.343334 kernel: audit: type=1334 audit(1734100135.808:98): prog-id=12 op=UNLOAD Dec 13 14:28:56.343359 kernel: audit: type=1130 audit(1734100135.816:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.343380 kernel: audit: type=1131 audit(1734100135.816:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.343397 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:28:56.343413 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:28:56.343427 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:28:56.343460 systemd[1]: Created slice system-getty.slice. Dec 13 14:28:56.343474 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:28:56.343494 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:28:56.345719 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:28:56.345754 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:28:56.345768 systemd[1]: Created slice user.slice. Dec 13 14:28:56.345780 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:28:56.345792 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:28:56.345806 systemd[1]: Set up automount boot.automount. Dec 13 14:28:56.345816 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:28:56.345829 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:28:56.345845 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:28:56.345856 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:28:56.345872 systemd[1]: Reached target integritysetup.target. Dec 13 14:28:56.345885 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:28:56.345894 systemd[1]: Reached target remote-fs.target. Dec 13 14:28:56.345906 systemd[1]: Reached target slices.target. Dec 13 14:28:56.345917 systemd[1]: Reached target swap.target. Dec 13 14:28:56.345929 systemd[1]: Reached target torcx.target. Dec 13 14:28:56.345941 systemd[1]: Reached target veritysetup.target. Dec 13 14:28:56.345953 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:28:56.345965 systemd[1]: Listening on systemd-initctl.socket. 
Dec 13 14:28:56.345975 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:28:56.345988 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:28:56.346004 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:28:56.346014 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:28:56.346027 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:28:56.346037 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:28:56.346049 systemd[1]: Mounting media.mount... Dec 13 14:28:56.346060 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:56.346072 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:28:56.346084 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:28:56.346094 systemd[1]: Mounting tmp.mount... Dec 13 14:28:56.346108 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:28:56.346123 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:56.346132 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:28:56.346146 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:28:56.346156 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:56.346168 systemd[1]: Starting modprobe@drm.service... Dec 13 14:28:56.346178 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:56.346189 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:28:56.346202 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:56.346214 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:28:56.346227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:28:56.346239 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:28:56.346249 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:28:56.346260 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:28:56.346272 systemd[1]: Stopped systemd-journald.service. Dec 13 14:28:56.346284 systemd[1]: Starting systemd-journald.service... Dec 13 14:28:56.346294 kernel: loop: module loaded Dec 13 14:28:56.346308 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:28:56.346321 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:28:56.346331 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:28:56.346344 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:28:56.346355 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:28:56.346368 systemd[1]: Stopped verity-setup.service. Dec 13 14:28:56.346379 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:56.346390 kernel: fuse: init (API version 7.34) Dec 13 14:28:56.346402 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:28:56.346414 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:28:56.346427 systemd[1]: Mounted media.mount. Dec 13 14:28:56.346473 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:28:56.346485 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:28:56.346495 systemd[1]: Mounted tmp.mount. Dec 13 14:28:56.346512 systemd-journald[1169]: Journal started Dec 13 14:28:56.346565 systemd-journald[1169]: Runtime Journal (/run/log/journal/5da0013a6ebe4e32b321864252782430) is 8.0M, max 159.0M, 151.0M free. 
Dec 13 14:28:30.012000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:28:30.661000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:30.703000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:30.703000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:28:30.716000 audit: BPF prog-id=10 op=LOAD Dec 13 14:28:30.716000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:28:30.716000 audit: BPF prog-id=11 op=LOAD Dec 13 14:28:30.716000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:28:32.050000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:28:32.050000 audit[1061]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:32.050000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:28:32.058000 audit[1061]: AVC avc: denied { associate } for pid=1061 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:28:32.058000 audit[1061]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1044 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:32.058000 audit: CWD cwd="/" Dec 13 14:28:32.058000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:32.058000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:32.058000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:28:55.754000 audit: BPF prog-id=12 op=LOAD Dec 13 14:28:55.754000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:28:55.758000 audit: BPF prog-id=13 op=LOAD Dec 13 14:28:55.763000 audit: BPF prog-id=14 
op=LOAD Dec 13 14:28:55.763000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:28:55.763000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:28:55.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:55.808000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:28:55.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:55.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.217000 audit: BPF prog-id=15 op=LOAD Dec 13 14:28:56.217000 audit: BPF prog-id=16 op=LOAD Dec 13 14:28:56.217000 audit: BPF prog-id=17 op=LOAD Dec 13 14:28:56.217000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:28:56.217000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:28:56.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.339000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:28:56.339000 audit[1169]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc4c1ced70 a2=4000 a3=7ffc4c1cee0c items=0 ppid=1 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:56.339000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:28:32.018412 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:28:55.752735 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 14:28:32.019009 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:28:55.765015 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:28:32.019044 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:28:32.019097 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:28:32.019117 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:28:32.019178 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:28:32.019200 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:28:32.019523 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:28:32.019603 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:28:56.350957 systemd[1]: Started systemd-journald.service. Dec 13 14:28:32.019631 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:28:32.035605 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:28:32.035645 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:28:32.035671 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:28:32.035685 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:28:32.035702 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:28:32.035714 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:28:54.590774 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug 
msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:28:54.591020 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:28:54.591145 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:28:54.591310 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:28:54.591354 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:28:54.591406 /usr/lib/systemd/system-generators/torcx-generator[1061]: time="2024-12-13T14:28:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:28:56.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.357862 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:28:56.360779 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:28:56.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.363644 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:28:56.363793 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:28:56.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.367315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:56.367475 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 14:28:56.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.370268 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:28:56.370415 systemd[1]: Finished modprobe@drm.service. Dec 13 14:28:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.372957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:28:56.373101 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:56.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.375632 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:28:56.375789 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:28:56.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.378133 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:56.378300 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:56.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.380665 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:28:56.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.383779 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 14:28:56.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.386377 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:28:56.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.389025 systemd[1]: Reached target network-pre.target. Dec 13 14:28:56.392546 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:28:56.396192 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:28:56.400582 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:28:56.442964 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:28:56.446734 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:28:56.448917 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:56.450197 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:28:56.452353 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:56.453601 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:56.456778 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:28:56.461025 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:28:56.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.463573 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:28:56.465898 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:28:56.469214 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:28:56.482275 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:28:56.487059 systemd-journald[1169]: Time spent on flushing to /var/log/journal/5da0013a6ebe4e32b321864252782430 is 15.307ms for 1150 entries. Dec 13 14:28:56.487059 systemd-journald[1169]: System Journal (/var/log/journal/5da0013a6ebe4e32b321864252782430) is 8.0M, max 2.6G, 2.6G free. Dec 13 14:28:56.625287 systemd-journald[1169]: Received client request to flush runtime journal. Dec 13 14:28:56.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:56.513304 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:28:56.517318 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:28:56.558729 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:56.626370 systemd[1]: Finished systemd-journal-flush.service. 
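For scale, the journal flush reported above (15.307 ms for 1150 entries) works out to roughly 13 µs per entry:

# Quick arithmetic on the systemd-journald flush report above.
flush_ms, entries = 15.307, 1150
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~13.3 us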
Dec 13 14:28:56.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:57.329275 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:28:57.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:58.189174 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:28:58.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:58.192000 audit: BPF prog-id=18 op=LOAD Dec 13 14:28:58.192000 audit: BPF prog-id=19 op=LOAD Dec 13 14:28:58.192000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:28:58.192000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:28:58.193480 systemd[1]: Starting systemd-udevd.service... Dec 13 14:28:58.212561 systemd-udevd[1187]: Using default interface naming scheme 'v252'. Dec 13 14:28:58.919114 systemd[1]: Started systemd-udevd.service. Dec 13 14:28:58.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:58.922000 audit: BPF prog-id=20 op=LOAD Dec 13 14:28:58.924692 systemd[1]: Starting systemd-networkd.service... Dec 13 14:28:58.962287 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:28:59.019000 audit[1191]: AVC avc: denied { confidentiality } for pid=1191 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:28:59.033471 kernel: hv_vmbus: registering driver hv_balloon Dec 13 14:28:59.049345 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 14:28:59.049460 kernel: hv_vmbus: registering driver hv_utils Dec 13 14:28:59.063451 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 14:28:59.071680 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 14:28:59.071770 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 14:28:59.076108 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 14:28:59.019000 audit[1191]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564a024d12a0 a1=f884 a2=7f288a7a6bc5 a3=5 items=12 ppid=1187 pid=1191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:28:59.019000 audit: CWD cwd="/" Dec 13 14:28:59.019000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=1 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=2 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:28:59.019000 audit: PATH item=3 name=(null) inode=15101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=4 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=5 name=(null) inode=15102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=6 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=7 name=(null) inode=15103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=8 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=9 name=(null) inode=15104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=10 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PATH item=11 name=(null) inode=15105 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:59.019000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:28:59.506168 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:28:59.517160 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 14:28:59.530289 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 14:28:59.530370 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 14:28:59.536041 kernel: Console: switching to colour dummy device 80x25 Dec 13 14:28:59.540061 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 14:28:59.537000 audit: BPF prog-id=21 op=LOAD Dec 13 14:28:59.538000 audit: BPF prog-id=22 op=LOAD Dec 13 14:28:59.538000 audit: BPF prog-id=23 op=LOAD Dec 13 14:28:59.540024 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:28:59.606225 systemd[1]: Started systemd-userdbd.service. Dec 13 14:28:59.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.736144 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1205) Dec 13 14:28:59.787935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:28:59.799327 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Dec 13 14:28:59.914521 systemd[1]: Finished systemd-udev-settle.service. 
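The mode= values in the audit PATH items above are octal st_mode words (directory 040750, regular files 0100640 / 0100440 under tracefs). A short sketch that decodes them with the standard library:

# Sketch: decode the octal mode= values from the audit PATH records above.
import stat

for raw in ("040750", "0100640", "0100440"):
    mode = int(raw, 8)
    print(raw, "->", stat.filemode(mode))
# 040750  -> drwxr-x---
# 0100640 -> -rw-r-----
# 0100440 -> -r--r-----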
Dec 13 14:28:59.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.921955 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:28:59.926739 systemd-networkd[1193]: lo: Link UP Dec 13 14:28:59.926749 systemd-networkd[1193]: lo: Gained carrier Dec 13 14:28:59.927477 systemd-networkd[1193]: Enumeration completed Dec 13 14:28:59.927595 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:59.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.931483 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:29:00.001199 systemd-networkd[1193]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:29:00.056144 kernel: mlx5_core 170d:00:02.0 enP5901s1: Link up Dec 13 14:29:00.078176 kernel: hv_netvsc 7c1e5237-df0c-7c1e-5237-df0c7c1e5237 eth0: Data path switched to VF: enP5901s1 Dec 13 14:29:00.078700 systemd-networkd[1193]: enP5901s1: Link UP Dec 13 14:29:00.078854 systemd-networkd[1193]: eth0: Link UP Dec 13 14:29:00.078866 systemd-networkd[1193]: eth0: Gained carrier Dec 13 14:29:00.084416 systemd-networkd[1193]: enP5901s1: Gained carrier Dec 13 14:29:00.113263 systemd-networkd[1193]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:29:00.248855 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:29:00.280337 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:29:00.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.283160 systemd[1]: Reached target cryptsetup.target. Dec 13 14:29:00.286769 systemd[1]: Starting lvm2-activation.service... Dec 13 14:29:00.294221 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:29:00.320257 systemd[1]: Finished lvm2-activation.service. Dec 13 14:29:00.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.323161 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:29:00.325736 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:29:00.325769 systemd[1]: Reached target local-fs.target. Dec 13 14:29:00.328065 systemd[1]: Reached target machines.target. Dec 13 14:29:00.331391 systemd[1]: Starting ldconfig.service... Dec 13 14:29:00.334268 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.334371 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.335568 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:29:00.339040 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
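The DHCPv4 lease line above has a fixed shape (interface, CIDR address, gateway, server; 168.63.129.16 is Azure's platform wire-server address, which also hands out DHCP leases). A sketch that parses exactly that shape (the regex is illustrative):

# Sketch: parse a systemd-networkd DHCPv4 lease line of the exact shape
# logged above.
import re

LEASE = re.compile(
    r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+/\d+), "
    r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)"
)

line = "eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16"
m = LEASE.search(line)
if m:
    print(m.groupdict())
    # {'ifname': 'eth0', 'addr': '10.200.8.12/24', 'gw': '10.200.8.1', 'server': '168.63.129.16'}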
Dec 13 14:29:00.343181 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:29:00.346708 systemd[1]: Starting systemd-sysext.service... Dec 13 14:29:00.391526 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl) Dec 13 14:29:00.392976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:29:00.421918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:29:00.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.459536 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:29:00.492508 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:29:00.493221 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:29:00.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.496667 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:29:00.496863 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:29:00.524136 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:29:00.580288 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:29:00.597147 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:29:00.601334 (sd-sysext)[1280]: Using extensions 'kubernetes'. Dec 13 14:29:00.602549 (sd-sysext)[1280]: Merged extensions into '/usr'. Dec 13 14:29:00.618070 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.619722 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:29:00.620234 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.623795 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:00.625818 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:00.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.628886 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:00.629045 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
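The loop-device capacity changes above are reported in 512-byte sectors (the block layer's usual unit), which would put the image behind each loop device at roughly 103 MiB:

# Sketch: size implied by "detected capacity change from 0 to 211296",
# assuming 512-byte sectors.
sectors = 211296
print(f"{sectors * 512 / 2**20:.1f} MiB")  # ~103.2 MiB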
Dec 13 14:29:00.629175 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.629283 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.630206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:00.630328 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:00.631092 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:00.631245 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:00.631666 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.634541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:00.634689 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:00.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.638607 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:29:00.641020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:00.642708 systemd[1]: Finished systemd-sysext.service. Dec 13 14:29:00.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.645862 systemd[1]: Starting ensure-sysext.service... Dec 13 14:29:00.647065 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:29:00.656241 systemd[1]: Reloading. Dec 13 14:29:00.729173 /usr/lib/systemd/system-generators/torcx-generator[1307]: time="2024-12-13T14:29:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:29:00.729624 /usr/lib/systemd/system-generators/torcx-generator[1307]: time="2024-12-13T14:29:00Z" level=info msg="torcx already run" Dec 13 14:29:00.822413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:29:00.822433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:29:00.839552 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:29:00.868052 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
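The reload above flags two deprecated resource-control directives in locksmithd.service (CPUShares= and MemoryLimit=) and names their replacements. A sketch that scans unit files for the same pair (paths and approach are illustrative):

# Sketch: find the deprecated directives that systemd warns about above
# (CPUShares= -> CPUWeight=, MemoryLimit= -> MemoryMax=).
from pathlib import Path

REPLACEMENTS = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

for unit in Path("/usr/lib/systemd/system").glob("*.service"):
    for lineno, line in enumerate(unit.read_text().splitlines(), start=1):
        for old, new in REPLACEMENTS.items():
            if line.lstrip().startswith(old):
                print(f"{unit}:{lineno}: uses {old} -- consider {new}")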
Dec 13 14:29:00.904000 audit: BPF prog-id=24 op=LOAD Dec 13 14:29:00.904000 audit: BPF prog-id=25 op=LOAD Dec 13 14:29:00.904000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:29:00.904000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=26 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=27 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=28 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=29 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=30 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=31 op=LOAD Dec 13 14:29:00.906000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:29:00.906000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:29:00.908000 audit: BPF prog-id=32 op=LOAD Dec 13 14:29:00.908000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:29:00.923542 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.923826 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.925425 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:00.927802 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:00.931291 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:00.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.931526 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.931656 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.931899 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.932846 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:00.933129 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:00.938928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:00.939101 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:00.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:00.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.939648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:00.939766 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:00.940271 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.940653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.943991 systemd[1]: Starting modprobe@drm.service... Dec 13 14:29:00.947168 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:00.947857 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.948015 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.948382 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:00.948663 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.951730 systemd[1]: Finished ensure-sysext.service. Dec 13 14:29:00.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.952368 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:00.952487 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:00.954212 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.957283 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:29:00.957440 systemd[1]: Finished modprobe@drm.service. Dec 13 14:29:00.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.969761 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:29:01.109465 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
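The systemd-tmpfiles warnings above ("Duplicate line for path ...") come from the same path appearing in more than one tmpfiles.d entry. A rough sketch of that check, assuming the usual layout where the path is the second whitespace-separated field; the real tool also applies /etc and /run override precedence, which this ignores:

# Sketch: surface duplicate tmpfiles.d paths like the ones warned about above.
from collections import defaultdict
from pathlib import Path

seen = defaultdict(list)
for conf in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
    for lineno, line in enumerate(conf.read_text().splitlines(), start=1):
        fields = line.split()
        if len(fields) >= 2 and not line.lstrip().startswith("#"):
            seen[fields[1]].append(f"{conf.name}:{lineno}")

for path, places in seen.items():
    if len(places) > 1:
        print(path, "->", ", ".join(places))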
Dec 13 14:29:01.206963 systemd-fsck[1275]: fsck.fat 4.2 (2021-01-31) Dec 13 14:29:01.206963 systemd-fsck[1275]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:29:01.209295 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:29:01.217478 kernel: kauditd_printk_skb: 105 callbacks suppressed Dec 13 14:29:01.217560 kernel: audit: type=1130 audit(1734100141.212:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.214557 systemd[1]: Mounting boot.mount... Dec 13 14:29:01.236873 systemd[1]: Mounted boot.mount. Dec 13 14:29:01.251604 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:29:01.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.266138 kernel: audit: type=1130 audit(1734100141.252:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.669553 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:29:01.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.673705 systemd[1]: Starting audit-rules.service... Dec 13 14:29:01.688728 kernel: audit: type=1130 audit(1734100141.671:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.690022 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:29:01.693532 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:29:01.698331 systemd[1]: Starting systemd-resolved.service... Dec 13 14:29:01.702393 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:29:01.696000 audit: BPF prog-id=33 op=LOAD Dec 13 14:29:01.705666 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:29:01.716191 kernel: audit: type=1334 audit(1734100141.696:192): prog-id=33 op=LOAD Dec 13 14:29:01.716266 kernel: audit: type=1334 audit(1734100141.700:193): prog-id=34 op=LOAD Dec 13 14:29:01.700000 audit: BPF prog-id=34 op=LOAD Dec 13 14:29:01.726000 audit[1386]: SYSTEM_BOOT pid=1386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.728962 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:29:01.741731 kernel: audit: type=1127 audit(1734100141.726:194): pid=1386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:01.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.754139 kernel: audit: type=1130 audit(1734100141.740:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.774674 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:29:01.777366 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:01.792882 kernel: audit: type=1130 audit(1734100141.776:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.777523 systemd-networkd[1193]: eth0: Gained IPv6LL Dec 13 14:29:01.799312 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:29:01.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.818235 kernel: audit: type=1130 audit(1734100141.801:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.826152 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:29:01.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.844171 kernel: audit: type=1130 audit(1734100141.827:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.890979 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:29:01.894306 systemd[1]: Reached target time-set.target. Dec 13 14:29:01.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:01.951898 systemd-resolved[1384]: Positive Trust Anchors: Dec 13 14:29:01.951917 systemd-resolved[1384]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:29:01.951954 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:29:01.987000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:29:01.987000 audit[1401]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcffc51710 a2=420 a3=0 items=0 ppid=1380 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:01.987000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:29:01.989352 augenrules[1401]: No rules Dec 13 14:29:01.989835 systemd[1]: Finished audit-rules.service. Dec 13 14:29:02.071656 systemd-resolved[1384]: Using system hostname 'ci-3510.3.6-a-b3ffbcfb3b'. Dec 13 14:29:02.073652 systemd[1]: Started systemd-resolved.service. Dec 13 14:29:02.076436 systemd[1]: Reached target network.target. Dec 13 14:29:02.078841 systemd[1]: Reached target network-online.target. Dec 13 14:29:02.081306 systemd[1]: Reached target nss-lookup.target. Dec 13 14:29:02.125656 systemd-timesyncd[1385]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Dec 13 14:29:02.125738 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2024-12-13 14:29:02.126205 UTC. Dec 13 14:29:07.400738 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:29:07.408078 systemd[1]: Finished ldconfig.service. Dec 13 14:29:07.412255 systemd[1]: Starting systemd-update-done.service... Dec 13 14:29:07.420509 systemd[1]: Finished systemd-update-done.service. Dec 13 14:29:07.423448 systemd[1]: Reached target sysinit.target. Dec 13 14:29:07.425936 systemd[1]: Started motdgen.path. Dec 13 14:29:07.428079 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:29:07.431210 systemd[1]: Started logrotate.timer. Dec 13 14:29:07.433171 systemd[1]: Started mdadm.timer. Dec 13 14:29:07.434904 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:29:07.437146 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:29:07.437185 systemd[1]: Reached target paths.target. Dec 13 14:29:07.439190 systemd[1]: Reached target timers.target. Dec 13 14:29:07.441867 systemd[1]: Listening on dbus.socket. Dec 13 14:29:07.445994 systemd[1]: Starting docker.socket... Dec 13 14:29:07.450769 systemd[1]: Listening on sshd.socket. Dec 13 14:29:07.453256 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:07.453704 systemd[1]: Listening on docker.socket. Dec 13 14:29:07.456030 systemd[1]: Reached target sockets.target. Dec 13 14:29:07.458397 systemd[1]: Reached target basic.target. 
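The kernel audit records above carry their own timestamp in the form audit(&lt;epoch-seconds&gt;.&lt;ms&gt;:&lt;serial&gt;); converting the one shown, 1734100141.212:189, lands on the surrounding 14:29:01 syslog timestamps (the log is in UTC):

# Sketch: decode the "audit(1734100141.212:189)" stamp from the records above.
from datetime import datetime, timezone

stamp = "1734100141.212:189"
epoch, serial = stamp.rsplit(":", 1)
when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"), "serial", serial)
# 2024-12-13T14:29:01.212+00:00 serial 189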
Dec 13 14:29:07.460654 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:07.460689 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:07.461635 systemd[1]: Starting containerd.service... Dec 13 14:29:07.464775 systemd[1]: Starting dbus.service... Dec 13 14:29:07.467522 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:29:07.470808 systemd[1]: Starting extend-filesystems.service... Dec 13 14:29:07.473242 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:29:07.474684 systemd[1]: Starting kubelet.service... Dec 13 14:29:07.477895 systemd[1]: Starting motdgen.service... Dec 13 14:29:07.480981 systemd[1]: Started nvidia.service. Dec 13 14:29:07.484568 systemd[1]: Starting prepare-helm.service... Dec 13 14:29:07.487984 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:29:07.491552 systemd[1]: Starting sshd-keygen.service... Dec 13 14:29:07.499616 systemd[1]: Starting systemd-logind.service... Dec 13 14:29:07.501908 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:07.501999 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:29:07.502540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:29:07.503363 systemd[1]: Starting update-engine.service... Dec 13 14:29:07.506735 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:29:07.515935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:29:07.517214 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:29:07.547134 jq[1411]: false Dec 13 14:29:07.547397 jq[1428]: true Dec 13 14:29:07.547511 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:29:07.547774 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:29:07.562239 extend-filesystems[1412]: Found loop1 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda1 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda2 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda3 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found usr Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda4 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda6 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda7 Dec 13 14:29:07.564358 extend-filesystems[1412]: Found sda9 Dec 13 14:29:07.564358 extend-filesystems[1412]: Checking size of /dev/sda9 Dec 13 14:29:07.571950 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:29:07.575290 systemd[1]: Finished motdgen.service. Dec 13 14:29:07.597778 jq[1439]: true Dec 13 14:29:07.644776 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:29:07.649278 systemd-logind[1425]: New seat seat0. Dec 13 14:29:07.663722 extend-filesystems[1412]: Old size kept for /dev/sda9 Dec 13 14:29:07.674496 extend-filesystems[1412]: Found sr0 Dec 13 14:29:07.666443 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 14:29:07.680334 tar[1432]: linux-amd64/helm Dec 13 14:29:07.666636 systemd[1]: Finished extend-filesystems.service. Dec 13 14:29:07.691131 env[1435]: time="2024-12-13T14:29:07.691052979Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:29:07.777341 dbus-daemon[1410]: [system] SELinux support is enabled Dec 13 14:29:07.781568 systemd[1]: Started dbus.service. Dec 13 14:29:07.786269 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:29:07.786310 systemd[1]: Reached target system-config.target. Dec 13 14:29:07.789019 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:29:07.789040 systemd[1]: Reached target user-config.target. Dec 13 14:29:07.793838 systemd[1]: Started systemd-logind.service. Dec 13 14:29:07.800306 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:29:07.801475 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:29:07.821662 env[1435]: time="2024-12-13T14:29:07.821598574Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:29:07.821878 env[1435]: time="2024-12-13T14:29:07.821858482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.823449 env[1435]: time="2024-12-13T14:29:07.823409232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:07.824924 env[1435]: time="2024-12-13T14:29:07.824898980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.825315 env[1435]: time="2024-12-13T14:29:07.825288493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:07.827005 env[1435]: time="2024-12-13T14:29:07.826978847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.827157 env[1435]: time="2024-12-13T14:29:07.827100651Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:29:07.827231 env[1435]: time="2024-12-13T14:29:07.827216755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.827390 env[1435]: time="2024-12-13T14:29:07.827373960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.827706 env[1435]: time="2024-12-13T14:29:07.827684570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:07.828027 env[1435]: time="2024-12-13T14:29:07.827984079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:07.828141 env[1435]: time="2024-12-13T14:29:07.828103583Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:29:07.828270 env[1435]: time="2024-12-13T14:29:07.828251688Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:29:07.830091 env[1435]: time="2024-12-13T14:29:07.830052046Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.846870386Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.846907987Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.846940689Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.846983690Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847068593Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847103294Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847141495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847163096Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847182996Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847218497Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847254699Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847270999Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847399403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:29:07.848338 env[1435]: time="2024-12-13T14:29:07.847502407Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.847951621Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.847997723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848017823Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848076825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848093126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848110326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848199129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848218730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848235430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848261731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848278732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.848829 env[1435]: time="2024-12-13T14:29:07.848296932Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849161060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849222162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849241762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849269363Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849289664Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849304364Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849344066Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:29:07.849687 env[1435]: time="2024-12-13T14:29:07.849383467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:29:07.850361 env[1435]: time="2024-12-13T14:29:07.850266795Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.850511503Z" level=info msg="Connect containerd service" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.850568505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851419832Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851554637Z" level=info msg="Start subscribing containerd event" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851604838Z" level=info msg="Start recovering state" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851668440Z" level=info msg="Start event monitor" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851692141Z" level=info msg="Start snapshots syncer" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851702942Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.851712042Z" level=info msg="Start streaming server" Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.852098954Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.852162656Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:29:07.897477 env[1435]: time="2024-12-13T14:29:07.852472666Z" level=info msg="containerd successfully booted in 0.177738s" Dec 13 14:29:07.852289 systemd[1]: Started containerd.service. Dec 13 14:29:07.877442 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:29:08.429899 update_engine[1427]: I1213 14:29:08.428422 1427 main.cc:92] Flatcar Update Engine starting Dec 13 14:29:08.477204 systemd[1]: Started update-engine.service. Dec 13 14:29:08.477960 update_engine[1427]: I1213 14:29:08.477862 1427 update_check_scheduler.cc:74] Next update check in 5m8s Dec 13 14:29:08.482407 systemd[1]: Started locksmithd.service. Dec 13 14:29:08.555103 tar[1432]: linux-amd64/LICENSE Dec 13 14:29:08.555367 tar[1432]: linux-amd64/README.md Dec 13 14:29:08.567922 systemd[1]: Finished prepare-helm.service. Dec 13 14:29:08.817530 systemd[1]: Started kubelet.service. Dec 13 14:29:09.051984 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:29:09.080099 systemd[1]: Finished sshd-keygen.service. Dec 13 14:29:09.084425 systemd[1]: Starting issuegen.service... Dec 13 14:29:09.087864 systemd[1]: Started waagent.service. Dec 13 14:29:09.091933 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:29:09.092139 systemd[1]: Finished issuegen.service. Dec 13 14:29:09.096021 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:29:09.117581 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:29:09.122182 systemd[1]: Started getty@tty1.service. Dec 13 14:29:09.126081 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:29:09.130009 systemd[1]: Reached target getty.target. Dec 13 14:29:09.133110 systemd[1]: Reached target multi-user.target. Dec 13 14:29:09.139301 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:29:09.148423 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:29:09.148600 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:29:09.151493 systemd[1]: Startup finished in 768ms (firmware) + 27.875s (loader) + 933ms (kernel) + 13.724s (initrd) + 39.303s (userspace) = 1min 22.605s. Dec 13 14:29:09.574196 kubelet[1513]: E1213 14:29:09.574134 1513 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:09.576139 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:09.576304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:09.576581 systemd[1]: kubelet.service: Consumed 1.086s CPU time. Dec 13 14:29:09.611784 login[1534]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Dec 13 14:29:09.613794 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:29:09.640565 systemd[1]: Created slice user-500.slice. Dec 13 14:29:09.641947 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:29:09.644690 systemd-logind[1425]: New session 1 of user core. Dec 13 14:29:09.682939 systemd[1]: Finished user-runtime-dir@500.service. 
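The CRI plugin settings dumped a few entries above (Snapshotter:overlayfs, runc via io.containerd.runc.v2 with SystemdCgroup:true, SandboxImage registry.k8s.io/pause:3.6, CNI dirs /opt/cni/bin and /etc/cni/net.d) normally come from containerd's TOML configuration. The sketch below shows a minimal config fragment that would produce the same values; the actual file on this host is not part of the log, and the path used is containerd's default (Flatcar may ship its base config elsewhere), so treat it as illustrative only.

# Illustrative sketch of a containerd CRI config matching the values logged above.
sudo tee /etc/containerd/config.toml <<'EOF'
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
EOF
sudo systemctl restart containerd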
Dec 13 14:29:09.684702 systemd[1]: Starting user@500.service... Dec 13 14:29:09.688347 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:09.779494 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:29:09.853816 systemd[1540]: Queued start job for default target default.target. Dec 13 14:29:09.854389 systemd[1540]: Reached target paths.target. Dec 13 14:29:09.854417 systemd[1540]: Reached target sockets.target. Dec 13 14:29:09.854433 systemd[1540]: Reached target timers.target. Dec 13 14:29:09.854447 systemd[1540]: Reached target basic.target. Dec 13 14:29:09.854572 systemd[1]: Started user@500.service. Dec 13 14:29:09.855854 systemd[1]: Started session-1.scope. Dec 13 14:29:09.856419 systemd[1540]: Reached target default.target. Dec 13 14:29:09.856606 systemd[1540]: Startup finished in 161ms. Dec 13 14:29:10.612496 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 14:29:10.617296 systemd-logind[1425]: New session 2 of user core. Dec 13 14:29:10.618751 systemd[1]: Started session-2.scope. Dec 13 14:29:14.577025 waagent[1529]: 2024-12-13T14:29:14.576905Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Dec 13 14:29:14.592863 waagent[1529]: 2024-12-13T14:29:14.592781Z INFO Daemon Daemon OS: flatcar 3510.3.6 Dec 13 14:29:14.593796 waagent[1529]: 2024-12-13T14:29:14.593738Z INFO Daemon Daemon Python: 3.9.16 Dec 13 14:29:14.595095 waagent[1529]: 2024-12-13T14:29:14.595036Z INFO Daemon Daemon Run daemon Dec 13 14:29:14.596617 waagent[1529]: 2024-12-13T14:29:14.596565Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.6' Dec 13 14:29:14.608917 waagent[1529]: 2024-12-13T14:29:14.608799Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
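The earlier CRI warning "no network config found in /etc/cni/net.d" is expected on a node whose pod network add-on has not been installed yet; on a real cluster the add-on (or kubeadm) drops a .conflist there and the warning goes away. Purely to illustrate the file format the CRI plugin is looking for, a hand-written bridge example is shown below; the file name, network name and subnet are arbitrary, and the bridge/host-local/portmap binaries would need to exist under /opt/cni/bin.

# Illustrative format only -- on a real cluster the network add-on installs this file.
sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF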
Dec 13 14:29:14.618744 waagent[1529]: 2024-12-13T14:29:14.618625Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:29:14.623651 waagent[1529]: 2024-12-13T14:29:14.623585Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:29:14.634017 waagent[1529]: 2024-12-13T14:29:14.624665Z INFO Daemon Daemon Using waagent for provisioning Dec 13 14:29:14.634017 waagent[1529]: 2024-12-13T14:29:14.626200Z INFO Daemon Daemon Activate resource disk Dec 13 14:29:14.634017 waagent[1529]: 2024-12-13T14:29:14.626967Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 14:29:14.634754 waagent[1529]: 2024-12-13T14:29:14.634695Z INFO Daemon Daemon Found device: None Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.635951Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.636795Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.638510Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.639459Z INFO Daemon Daemon Running default provisioning handler Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.649143Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.652546Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.653434Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 14:29:14.664152 waagent[1529]: 2024-12-13T14:29:14.654295Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 14:29:14.751557 waagent[1529]: 2024-12-13T14:29:14.751402Z INFO Daemon Daemon Successfully mounted dvd Dec 13 14:29:14.812190 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 14:29:14.831167 waagent[1529]: 2024-12-13T14:29:14.830966Z INFO Daemon Daemon Detect protocol endpoint Dec 13 14:29:14.846729 waagent[1529]: 2024-12-13T14:29:14.832407Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 14:29:14.846729 waagent[1529]: 2024-12-13T14:29:14.833363Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 14:29:14.846729 waagent[1529]: 2024-12-13T14:29:14.834155Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 14:29:14.846729 waagent[1529]: 2024-12-13T14:29:14.835186Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 14:29:14.846729 waagent[1529]: 2024-12-13T14:29:14.835874Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 14:29:15.023777 waagent[1529]: 2024-12-13T14:29:15.023700Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 14:29:15.032612 waagent[1529]: 2024-12-13T14:29:15.025740Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 14:29:15.032612 waagent[1529]: 2024-12-13T14:29:15.026764Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 14:29:15.827851 waagent[1529]: 2024-12-13T14:29:15.827698Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 14:29:15.841203 waagent[1529]: 2024-12-13T14:29:15.841126Z INFO Daemon Daemon Forcing an update of the goal state.. Dec 13 14:29:15.844411 waagent[1529]: 2024-12-13T14:29:15.844340Z INFO Daemon Daemon Fetching goal state [incarnation 1] Dec 13 14:29:15.927469 waagent[1529]: 2024-12-13T14:29:15.927344Z INFO Daemon Daemon Found private key matching thumbprint F5B48523D9751B751F44C59ED806E29576498436 Dec 13 14:29:15.932853 waagent[1529]: 2024-12-13T14:29:15.932770Z INFO Daemon Daemon Certificate with thumbprint C3017AD21E6BCFBBF894C12A9E545DEA42BB3113 has no matching private key. Dec 13 14:29:15.940126 waagent[1529]: 2024-12-13T14:29:15.934096Z INFO Daemon Daemon Fetch goal state completed Dec 13 14:29:15.958302 waagent[1529]: 2024-12-13T14:29:15.958228Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 661af43b-4b6a-404d-a13f-456c283576bb New eTag: 16375029494700849354] Dec 13 14:29:15.966388 waagent[1529]: 2024-12-13T14:29:15.959946Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:29:15.972468 waagent[1529]: 2024-12-13T14:29:15.972416Z INFO Daemon Daemon Starting provisioning Dec 13 14:29:15.979188 waagent[1529]: 2024-12-13T14:29:15.973576Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 14:29:15.979188 waagent[1529]: 2024-12-13T14:29:15.974451Z INFO Daemon Daemon Set hostname [ci-3510.3.6-a-b3ffbcfb3b] Dec 13 14:29:15.990941 waagent[1529]: 2024-12-13T14:29:15.990841Z INFO Daemon Daemon Publish hostname [ci-3510.3.6-a-b3ffbcfb3b] Dec 13 14:29:16.000280 waagent[1529]: 2024-12-13T14:29:15.993969Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 14:29:16.000280 waagent[1529]: 2024-12-13T14:29:15.994906Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 14:29:16.008794 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Dec 13 14:29:16.009037 systemd[1]: Stopped systemd-networkd-wait-online.service. Dec 13 14:29:16.009135 systemd[1]: Stopping systemd-networkd-wait-online.service... Dec 13 14:29:16.009522 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:29:16.013168 systemd-networkd[1193]: eth0: DHCPv6 lease lost Dec 13 14:29:16.015472 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:29:16.015677 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:29:16.018418 systemd[1]: Starting systemd-networkd.service... 
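The daemon's protocol detection above checks for a route to 168.63.129.16 (the Azure WireServer) and then settles on wire protocol 2012-11-30. The same reachability check can be reproduced by hand as sketched below; the ?comp=versions query is assumed here to be the endpoint that lists supported wire protocol versions, matching the 2012-11-30/2015-04-05 negotiation in the log.

# Reproduce the agent's WireServer reachability check by hand.
ip route get 168.63.129.16                      # should resolve via eth0's DHCP-provided route
curl -s 'http://168.63.129.16/?comp=versions'   # lists supported wire protocol versions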
Dec 13 14:29:16.050251 systemd-networkd[1587]: enP5901s1: Link UP Dec 13 14:29:16.050261 systemd-networkd[1587]: enP5901s1: Gained carrier Dec 13 14:29:16.051553 systemd-networkd[1587]: eth0: Link UP Dec 13 14:29:16.051563 systemd-networkd[1587]: eth0: Gained carrier Dec 13 14:29:16.051986 systemd-networkd[1587]: lo: Link UP Dec 13 14:29:16.051996 systemd-networkd[1587]: lo: Gained carrier Dec 13 14:29:16.052447 systemd-networkd[1587]: eth0: Gained IPv6LL Dec 13 14:29:16.052752 systemd-networkd[1587]: Enumeration completed Dec 13 14:29:16.052852 systemd[1]: Started systemd-networkd.service. Dec 13 14:29:16.055248 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:29:16.057143 waagent[1529]: 2024-12-13T14:29:16.056781Z INFO Daemon Daemon Create user account if not exists Dec 13 14:29:16.064214 waagent[1529]: 2024-12-13T14:29:16.060800Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 14:29:16.061921 systemd-networkd[1587]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:29:16.064866 waagent[1529]: 2024-12-13T14:29:16.064788Z INFO Daemon Daemon Configure sudoer Dec 13 14:29:16.070600 waagent[1529]: 2024-12-13T14:29:16.066290Z INFO Daemon Daemon Configure sshd Dec 13 14:29:16.070600 waagent[1529]: 2024-12-13T14:29:16.067761Z INFO Daemon Daemon Deploy ssh public key. Dec 13 14:29:16.122247 systemd-networkd[1587]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 13 14:29:16.126612 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:29:17.188523 waagent[1529]: 2024-12-13T14:29:17.188429Z INFO Daemon Daemon Provisioning complete Dec 13 14:29:17.206580 waagent[1529]: 2024-12-13T14:29:17.206486Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 14:29:17.214296 waagent[1529]: 2024-12-13T14:29:17.208161Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 14:29:17.214296 waagent[1529]: 2024-12-13T14:29:17.210152Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Dec 13 14:29:17.483026 waagent[1596]: 2024-12-13T14:29:17.482845Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Dec 13 14:29:17.483767 waagent[1596]: 2024-12-13T14:29:17.483701Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:17.483917 waagent[1596]: 2024-12-13T14:29:17.483863Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:17.495728 waagent[1596]: 2024-12-13T14:29:17.495628Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Dec 13 14:29:17.495933 waagent[1596]: 2024-12-13T14:29:17.495871Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Dec 13 14:29:17.563474 waagent[1596]: 2024-12-13T14:29:17.563333Z INFO ExtHandler ExtHandler Found private key matching thumbprint F5B48523D9751B751F44C59ED806E29576498436 Dec 13 14:29:17.563716 waagent[1596]: 2024-12-13T14:29:17.563655Z INFO ExtHandler ExtHandler Certificate with thumbprint C3017AD21E6BCFBBF894C12A9E545DEA42BB3113 has no matching private key. 
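eth0 is configured above from /usr/lib/systemd/network/zz-default.network and picks up 10.200.8.12/24 via DHCP from 168.63.129.16. The shipped unit is not reproduced in the log; a minimal .network file with roughly the same effect is sketched below (a local override would live under /etc/systemd/network/, and the file name is arbitrary).

# Sketch of a DHCP .network unit equivalent to the default behaviour seen above.
sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
sudo systemctl restart systemd-networkd
networkctl status eth0    # confirm the DHCPv4 address and gateway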
Dec 13 14:29:17.563956 waagent[1596]: 2024-12-13T14:29:17.563905Z INFO ExtHandler ExtHandler Fetch goal state completed Dec 13 14:29:17.577692 waagent[1596]: 2024-12-13T14:29:17.577617Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a60ee767-ef84-4fef-a12e-8e686b64f739 New eTag: 16375029494700849354] Dec 13 14:29:17.578306 waagent[1596]: 2024-12-13T14:29:17.578246Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Dec 13 14:29:17.664922 waagent[1596]: 2024-12-13T14:29:17.664750Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:29:17.675849 waagent[1596]: 2024-12-13T14:29:17.675750Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1596 Dec 13 14:29:17.679316 waagent[1596]: 2024-12-13T14:29:17.679242Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:29:17.680546 waagent[1596]: 2024-12-13T14:29:17.680485Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:29:17.835091 waagent[1596]: 2024-12-13T14:29:17.835009Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:29:17.835727 waagent[1596]: 2024-12-13T14:29:17.835648Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:29:17.844318 waagent[1596]: 2024-12-13T14:29:17.844256Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:29:17.844833 waagent[1596]: 2024-12-13T14:29:17.844772Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:29:17.845926 waagent[1596]: 2024-12-13T14:29:17.845860Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Dec 13 14:29:17.847212 waagent[1596]: 2024-12-13T14:29:17.847153Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:29:17.847863 waagent[1596]: 2024-12-13T14:29:17.847794Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 14:29:17.848258 waagent[1596]: 2024-12-13T14:29:17.848191Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:17.848445 waagent[1596]: 2024-12-13T14:29:17.848394Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:17.848663 waagent[1596]: 2024-12-13T14:29:17.848613Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:17.849279 waagent[1596]: 2024-12-13T14:29:17.849217Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:29:17.849729 waagent[1596]: 2024-12-13T14:29:17.849675Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:29:17.849849 waagent[1596]: 2024-12-13T14:29:17.849777Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:17.850064 waagent[1596]: 2024-12-13T14:29:17.850018Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:29:17.850161 waagent[1596]: 2024-12-13T14:29:17.850090Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
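The error above, "[Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'", is expected on Flatcar: /lib resolves into the read-only /usr image, so the agent cannot write a unit there. Locally managed units belong under /etc/systemd/system, which is writable. The commands below only illustrate that distinction; they do not change agent behaviour.

# Show that /usr (which backs /lib) is mounted read-only, unlike /etc/systemd/system.
findmnt -no TARGET,OPTIONS /usr
ls -ld /etc/systemd/system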
Dec 13 14:29:17.850876 waagent[1596]: 2024-12-13T14:29:17.850821Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:29:17.851356 waagent[1596]: 2024-12-13T14:29:17.851304Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:29:17.851885 waagent[1596]: 2024-12-13T14:29:17.851832Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:29:17.851885 waagent[1596]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:29:17.851885 waagent[1596]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:29:17.851885 waagent[1596]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:29:17.851885 waagent[1596]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:17.851885 waagent[1596]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:17.851885 waagent[1596]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:17.852946 waagent[1596]: 2024-12-13T14:29:17.852865Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:29:17.853368 waagent[1596]: 2024-12-13T14:29:17.853318Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:29:17.857608 waagent[1596]: 2024-12-13T14:29:17.857534Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:29:17.868301 waagent[1596]: 2024-12-13T14:29:17.868233Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Dec 13 14:29:17.869305 waagent[1596]: 2024-12-13T14:29:17.869255Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:29:17.870433 waagent[1596]: 2024-12-13T14:29:17.870382Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Dec 13 14:29:17.894852 waagent[1596]: 2024-12-13T14:29:17.894759Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
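The routing table above is read straight from /proc/net/route, so the addresses are little-endian hex. Decoding them shows the default gateway 10.200.8.1, the local 10.200.8.0/24 subnet, the Azure WireServer 168.63.129.16 and the instance metadata address 169.254.169.254:

# Decode a little-endian hex address from /proc/net/route.
hex2ip() { printf '%d.%d.%d.%d\n' "0x${1:6:2}" "0x${1:4:2}" "0x${1:2:2}" "0x${1:0:2}"; }
hex2ip 0108C80A   # 10.200.8.1      (default gateway)
hex2ip 0008C80A   # 10.200.8.0      (local /24)
hex2ip 10813FA8   # 168.63.129.16   (Azure WireServer)
hex2ip FEA9FEA9   # 169.254.169.254 (instance metadata service)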
Dec 13 14:29:17.905712 waagent[1596]: 2024-12-13T14:29:17.905634Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1587' Dec 13 14:29:17.998025 waagent[1596]: 2024-12-13T14:29:17.997926Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:29:17.998025 waagent[1596]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:29:17.998025 waagent[1596]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:29:17.998025 waagent[1596]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:37:df:0c brd ff:ff:ff:ff:ff:ff Dec 13 14:29:17.998025 waagent[1596]: 3: enP5901s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:37:df:0c brd ff:ff:ff:ff:ff:ff\ altname enP5901p0s2 Dec 13 14:29:17.998025 waagent[1596]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:29:17.998025 waagent[1596]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:29:17.998025 waagent[1596]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:29:17.998025 waagent[1596]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:29:17.998025 waagent[1596]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:29:17.998025 waagent[1596]: 2: eth0 inet6 fe80::7e1e:52ff:fe37:df0c/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:29:18.254771 waagent[1596]: 2024-12-13T14:29:18.254620Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.12.0.2 -- exiting Dec 13 14:29:19.215581 waagent[1529]: 2024-12-13T14:29:19.215406Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Dec 13 14:29:19.221487 waagent[1529]: 2024-12-13T14:29:19.221416Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.12.0.2 to be the latest agent Dec 13 14:29:19.801620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:29:19.801897 systemd[1]: Stopped kubelet.service. Dec 13 14:29:19.801954 systemd[1]: kubelet.service: Consumed 1.086s CPU time. Dec 13 14:29:19.803835 systemd[1]: Starting kubelet.service... Dec 13 14:29:19.920593 systemd[1]: Started kubelet.service. 
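kubelet.service enters a restart loop here ("Scheduled restart job, restart counter is at 1") and keeps retrying roughly every ten seconds for the rest of the log. The unit file itself is not shown; the spacing is merely consistent with the common Restart=on-failure / RestartSec=10s pairing, so the values below are a guess to be verified on the host.

# Inspect the actual restart policy; the commented values are not confirmed by this log.
systemctl cat kubelet.service | grep Restart
#   Restart=on-failure
#   RestartSec=10s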
Dec 13 14:29:20.338975 waagent[1634]: 2024-12-13T14:29:20.338858Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.2) Dec 13 14:29:20.339758 waagent[1634]: 2024-12-13T14:29:20.339686Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.6 Dec 13 14:29:20.339911 waagent[1634]: 2024-12-13T14:29:20.339858Z INFO ExtHandler ExtHandler Python: 3.9.16 Dec 13 14:29:20.340057 waagent[1634]: 2024-12-13T14:29:20.340012Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 13 14:29:20.350106 waagent[1634]: 2024-12-13T14:29:20.349971Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.6; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 14:29:20.350594 waagent[1634]: 2024-12-13T14:29:20.350525Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:20.350763 waagent[1634]: 2024-12-13T14:29:20.350715Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:20.363042 waagent[1634]: 2024-12-13T14:29:20.362945Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 14:29:20.375986 waagent[1634]: 2024-12-13T14:29:20.375910Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 14:29:20.377052 waagent[1634]: 2024-12-13T14:29:20.376986Z INFO ExtHandler Dec 13 14:29:20.377230 waagent[1634]: 2024-12-13T14:29:20.377174Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1bddc28f-e9f1-4fe9-ac2e-3440b667d9a6 eTag: 16375029494700849354 source: Fabric] Dec 13 14:29:20.377952 waagent[1634]: 2024-12-13T14:29:20.377895Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 13 14:29:20.398153 waagent[1634]: 2024-12-13T14:29:20.397703Z INFO ExtHandler Dec 13 14:29:20.398153 waagent[1634]: 2024-12-13T14:29:20.398026Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 14:29:20.407161 waagent[1634]: 2024-12-13T14:29:20.407068Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 14:29:20.407814 waagent[1634]: 2024-12-13T14:29:20.407742Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Dec 13 14:29:20.432674 waagent[1634]: 2024-12-13T14:29:20.432585Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Dec 13 14:29:20.447675 kubelet[1641]: E1213 14:29:20.447621 1641 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:20.451063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:20.451237 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
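Every kubelet attempt dies on the missing /var/lib/kubelet/config.yaml. On a kubeadm-provisioned node that file is written during kubeadm init/join, so these failures are expected until the node is bootstrapped. Purely to show the format the error refers to, a minimal hand-written KubeletConfiguration is sketched below (cgroupDriver: systemd matches the SystemdCgroup=true runc setup logged earlier; the staticPodPath value is the usual kubeadm default, not something taken from this log).

# Illustrative only -- normally generated by kubeadm rather than written by hand.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
EOF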
Dec 13 14:29:20.507981 waagent[1634]: 2024-12-13T14:29:20.507837Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F5B48523D9751B751F44C59ED806E29576498436', 'hasPrivateKey': True} Dec 13 14:29:20.509036 waagent[1634]: 2024-12-13T14:29:20.508966Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C3017AD21E6BCFBBF894C12A9E545DEA42BB3113', 'hasPrivateKey': False} Dec 13 14:29:20.510027 waagent[1634]: 2024-12-13T14:29:20.509965Z INFO ExtHandler Fetch goal state completed Dec 13 14:29:20.532427 waagent[1634]: 2024-12-13T14:29:20.532300Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Dec 13 14:29:20.544829 waagent[1634]: 2024-12-13T14:29:20.544712Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.2 running as process 1634 Dec 13 14:29:20.547953 waagent[1634]: 2024-12-13T14:29:20.547868Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 14:29:20.549022 waagent[1634]: 2024-12-13T14:29:20.548956Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '3510.3.6', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 13 14:29:20.549349 waagent[1634]: 2024-12-13T14:29:20.549289Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 13 14:29:20.551380 waagent[1634]: 2024-12-13T14:29:20.551322Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 14:29:20.556739 waagent[1634]: 2024-12-13T14:29:20.556675Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 14:29:20.557167 waagent[1634]: 2024-12-13T14:29:20.557088Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 14:29:20.565931 waagent[1634]: 2024-12-13T14:29:20.565866Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 14:29:20.566516 waagent[1634]: 2024-12-13T14:29:20.566452Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Dec 13 14:29:20.573454 waagent[1634]: 2024-12-13T14:29:20.573338Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 14:29:20.574557 waagent[1634]: 2024-12-13T14:29:20.574482Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 13 14:29:20.576136 waagent[1634]: 2024-12-13T14:29:20.576055Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 14:29:20.576565 waagent[1634]: 2024-12-13T14:29:20.576507Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:20.576731 waagent[1634]: 2024-12-13T14:29:20.576681Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:20.577353 waagent[1634]: 2024-12-13T14:29:20.577295Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 14:29:20.577786 waagent[1634]: 2024-12-13T14:29:20.577731Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Dec 13 14:29:20.578712 waagent[1634]: 2024-12-13T14:29:20.578651Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 14:29:20.579035 waagent[1634]: 2024-12-13T14:29:20.578982Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 14:29:20.579245 waagent[1634]: 2024-12-13T14:29:20.579194Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 14:29:20.579404 waagent[1634]: 2024-12-13T14:29:20.579350Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 14:29:20.579834 waagent[1634]: 2024-12-13T14:29:20.579775Z INFO EnvHandler ExtHandler Configure routes Dec 13 14:29:20.580715 waagent[1634]: 2024-12-13T14:29:20.580664Z INFO EnvHandler ExtHandler Gateway:None Dec 13 14:29:20.580715 waagent[1634]: 2024-12-13T14:29:20.580577Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 14:29:20.580977 waagent[1634]: 2024-12-13T14:29:20.580927Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 14:29:20.581208 waagent[1634]: 2024-12-13T14:29:20.581158Z INFO EnvHandler ExtHandler Routes:None Dec 13 14:29:20.581583 waagent[1634]: 2024-12-13T14:29:20.581526Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 14:29:20.581583 waagent[1634]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 14:29:20.581583 waagent[1634]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 14:29:20.581583 waagent[1634]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 14:29:20.581583 waagent[1634]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:20.581583 waagent[1634]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:20.581583 waagent[1634]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 14:29:20.586306 waagent[1634]: 2024-12-13T14:29:20.586241Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 14:29:20.603284 waagent[1634]: 2024-12-13T14:29:20.603102Z INFO ExtHandler ExtHandler Downloading agent manifest Dec 13 14:29:20.611850 waagent[1634]: 2024-12-13T14:29:20.611757Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 14:29:20.611850 waagent[1634]: Executing ['ip', '-a', '-o', 'link']: Dec 13 14:29:20.611850 waagent[1634]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 14:29:20.611850 waagent[1634]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:37:df:0c brd ff:ff:ff:ff:ff:ff Dec 13 14:29:20.611850 waagent[1634]: 3: enP5901s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:37:df:0c brd ff:ff:ff:ff:ff:ff\ altname enP5901p0s2 Dec 13 14:29:20.611850 waagent[1634]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 14:29:20.611850 waagent[1634]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 14:29:20.611850 waagent[1634]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 14:29:20.611850 waagent[1634]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 14:29:20.611850 waagent[1634]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Dec 13 14:29:20.611850 waagent[1634]: 2: eth0 inet6 
fe80::7e1e:52ff:fe37:df0c/64 scope link \ valid_lft forever preferred_lft forever Dec 13 14:29:20.648436 waagent[1634]: 2024-12-13T14:29:20.648280Z INFO ExtHandler ExtHandler Dec 13 14:29:20.654261 waagent[1634]: 2024-12-13T14:29:20.654002Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a14dcbfe-ee7c-4002-be21-ff1e80112e03 correlation 4a05eb4f-8951-4062-bf62-18cc1c1bf823 created: 2024-12-13T14:27:36.244957Z] Dec 13 14:29:20.661730 waagent[1634]: 2024-12-13T14:29:20.661653Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 14:29:20.664418 waagent[1634]: 2024-12-13T14:29:20.664347Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 16 ms] Dec 13 14:29:20.692203 waagent[1634]: 2024-12-13T14:29:20.692092Z INFO ExtHandler ExtHandler Looking for existing remote access users. Dec 13 14:29:20.720710 waagent[1634]: 2024-12-13T14:29:20.720444Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.2 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 57AC4F4D-648F-4091-821A-B8D1EBD77423;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Dec 13 14:29:20.738093 waagent[1634]: 2024-12-13T14:29:20.737968Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 13 14:29:20.738093 waagent[1634]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.738093 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.738093 waagent[1634]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.738093 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.738093 waagent[1634]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.738093 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.738093 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:29:20.738093 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:29:20.738093 waagent[1634]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:29:20.745877 waagent[1634]: 2024-12-13T14:29:20.745763Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 14:29:20.745877 waagent[1634]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.745877 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.745877 waagent[1634]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.745877 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.745877 waagent[1634]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 14:29:20.745877 waagent[1634]: pkts bytes target prot opt in out source destination Dec 13 14:29:20.745877 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 14:29:20.745877 waagent[1634]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 14:29:20.745877 waagent[1634]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 14:29:20.746501 waagent[1634]: 2024-12-13T14:29:20.746445Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 14:29:30.551579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:29:30.551905 systemd[1]: Stopped kubelet.service. Dec 13 14:29:30.553891 systemd[1]: Starting kubelet.service... 
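The "firewall rules for the Azure Fabric" listed above allow DNS (port 53) and root-owned traffic to 168.63.129.16 and drop other new connections to it. Roughly equivalent iptables commands are shown below for reference only: the agent manages these rules itself and may place them in a table other than the default filter table, so there is normally no reason to add them by hand.

# Reference only: matches corresponding to the three agent-managed rules listed above.
iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP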
Dec 13 14:29:30.878256 systemd[1]: Started kubelet.service. Dec 13 14:29:31.181627 kubelet[1696]: E1213 14:29:31.181500 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:31.183602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:31.183761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:41.301630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:29:41.301931 systemd[1]: Stopped kubelet.service. Dec 13 14:29:41.303993 systemd[1]: Starting kubelet.service... Dec 13 14:29:41.630311 systemd[1]: Started kubelet.service. Dec 13 14:29:41.913598 kubelet[1706]: E1213 14:29:41.913489 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:41.915431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:41.915590 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:43.770814 systemd[1]: Created slice system-sshd.slice. Dec 13 14:29:43.772705 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.16.10:59944.service. Dec 13 14:29:44.683318 sshd[1713]: Accepted publickey for core from 10.200.16.10 port 59944 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:44.684978 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:44.690324 systemd[1]: Started session-3.scope. Dec 13 14:29:44.690907 systemd-logind[1425]: New session 3 of user core. Dec 13 14:29:45.298347 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.16.10:59954.service. Dec 13 14:29:46.006406 sshd[1718]: Accepted publickey for core from 10.200.16.10 port 59954 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:46.008042 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:46.013779 systemd[1]: Started session-4.scope. Dec 13 14:29:46.014556 systemd-logind[1425]: New session 4 of user core. Dec 13 14:29:46.507595 sshd[1718]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:46.510861 systemd[1]: sshd@1-10.200.8.12:22-10.200.16.10:59954.service: Deactivated successfully. Dec 13 14:29:46.511909 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:29:46.512661 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:29:46.513578 systemd-logind[1425]: Removed session 4. Dec 13 14:29:46.626768 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.16.10:59956.service. Dec 13 14:29:47.337091 sshd[1724]: Accepted publickey for core from 10.200.16.10 port 59956 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:47.338746 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:47.343594 systemd[1]: Started session-5.scope. Dec 13 14:29:47.344214 systemd-logind[1425]: New session 5 of user core. Dec 13 14:29:47.583783 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 13 14:29:47.838827 sshd[1724]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:47.842092 systemd[1]: sshd@2-10.200.8.12:22-10.200.16.10:59956.service: Deactivated successfully. Dec 13 14:29:47.843155 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:29:47.843939 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:29:47.844865 systemd-logind[1425]: Removed session 5. Dec 13 14:29:47.960486 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.16.10:59958.service. Dec 13 14:29:48.670385 sshd[1730]: Accepted publickey for core from 10.200.16.10 port 59958 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:48.672022 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:48.677232 systemd[1]: Started session-6.scope. Dec 13 14:29:48.677677 systemd-logind[1425]: New session 6 of user core. Dec 13 14:29:49.171580 sshd[1730]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:49.174819 systemd[1]: sshd@3-10.200.8.12:22-10.200.16.10:59958.service: Deactivated successfully. Dec 13 14:29:49.175829 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:29:49.176590 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:29:49.177497 systemd-logind[1425]: Removed session 6. Dec 13 14:29:49.290230 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.16.10:56544.service. Dec 13 14:29:50.001518 sshd[1736]: Accepted publickey for core from 10.200.16.10 port 56544 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:29:50.003188 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:50.008872 systemd[1]: Started session-7.scope. Dec 13 14:29:50.009335 systemd-logind[1425]: New session 7 of user core. Dec 13 14:29:50.713818 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:29:50.714223 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:29:50.738208 systemd[1]: Starting docker.service... 
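Each accepted login above records the SHA256 fingerprint of the client's public key (SHA256:VL8L...). To confirm which local key that corresponds to, hash the public key on the client side; the path below assumes an RSA key in the default location.

# Print a local public key's SHA256 fingerprint for comparison with the sshd log entries.
ssh-keygen -lf ~/.ssh/id_rsa.pub -E sha256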
Dec 13 14:29:50.774427 env[1749]: time="2024-12-13T14:29:50.774378724Z" level=info msg="Starting up" Dec 13 14:29:50.775540 env[1749]: time="2024-12-13T14:29:50.775512526Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:50.775540 env[1749]: time="2024-12-13T14:29:50.775531526Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:50.775697 env[1749]: time="2024-12-13T14:29:50.775552626Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:29:50.775697 env[1749]: time="2024-12-13T14:29:50.775565926Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:50.777344 env[1749]: time="2024-12-13T14:29:50.777313830Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:29:50.777344 env[1749]: time="2024-12-13T14:29:50.777331630Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:29:50.777485 env[1749]: time="2024-12-13T14:29:50.777347330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:29:50.777485 env[1749]: time="2024-12-13T14:29:50.777359230Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:29:50.787884 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3616561338-merged.mount: Deactivated successfully. Dec 13 14:29:50.888200 env[1749]: time="2024-12-13T14:29:50.888162252Z" level=info msg="Loading containers: start." Dec 13 14:29:51.022141 kernel: Initializing XFRM netlink socket Dec 13 14:29:51.045584 env[1749]: time="2024-12-13T14:29:51.045540261Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:29:51.138673 systemd-networkd[1587]: docker0: Link UP Dec 13 14:29:51.159311 env[1749]: time="2024-12-13T14:29:51.159273175Z" level=info msg="Loading containers: done." Dec 13 14:29:51.176906 env[1749]: time="2024-12-13T14:29:51.176859208Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:29:51.177093 env[1749]: time="2024-12-13T14:29:51.177053808Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:29:51.177213 env[1749]: time="2024-12-13T14:29:51.177189509Z" level=info msg="Daemon has completed initialization" Dec 13 14:29:51.202554 systemd[1]: Started docker.service. Dec 13 14:29:51.212489 env[1749]: time="2024-12-13T14:29:51.212434475Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:29:52.051637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:29:52.051940 systemd[1]: Stopped kubelet.service. Dec 13 14:29:52.054074 systemd[1]: Starting kubelet.service... Dec 13 14:29:52.144788 systemd[1]: Started kubelet.service. 
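The Docker daemon above notes that the default bridge gets 172.17.0.0/16 and that --bip can set a preferred address. The same option is usually configured through /etc/docker/daemon.json; the replacement range below is chosen arbitrarily for illustration.

# Example daemon.json overriding the docker0 bridge address (range chosen arbitrarily).
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "192.168.200.1/24"
}
EOF
sudo systemctl restart docker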
Dec 13 14:29:52.188968 kubelet[1868]: E1213 14:29:52.188922 1868 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:29:52.190765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:29:52.190925 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:29:54.193878 update_engine[1427]: I1213 14:29:54.193799 1427 update_attempter.cc:509] Updating boot flags... Dec 13 14:29:55.955844 env[1435]: time="2024-12-13T14:29:55.955780403Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:29:56.638179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118458693.mount: Deactivated successfully. Dec 13 14:29:59.116397 env[1435]: time="2024-12-13T14:29:59.116333931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:59.124005 env[1435]: time="2024-12-13T14:29:59.123946240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:59.127088 env[1435]: time="2024-12-13T14:29:59.127028243Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:59.130621 env[1435]: time="2024-12-13T14:29:59.130572947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:59.131492 env[1435]: time="2024-12-13T14:29:59.131448248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:29:59.141779 env[1435]: time="2024-12-13T14:29:59.141733660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:30:01.997523 env[1435]: time="2024-12-13T14:30:01.997462756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:02.003303 env[1435]: time="2024-12-13T14:30:02.003248762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:02.008190 env[1435]: time="2024-12-13T14:30:02.008139066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:02.011937 env[1435]: time="2024-12-13T14:30:02.011888070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 
13 14:30:02.012548 env[1435]: time="2024-12-13T14:30:02.012508570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:30:02.023173 env[1435]: time="2024-12-13T14:30:02.023111180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:30:02.301470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 14:30:02.301706 systemd[1]: Stopped kubelet.service. Dec 13 14:30:02.303430 systemd[1]: Starting kubelet.service... Dec 13 14:30:02.386101 systemd[1]: Started kubelet.service. Dec 13 14:30:02.904775 kubelet[1964]: E1213 14:30:02.904712 1964 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:02.906585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:02.906701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:04.061987 env[1435]: time="2024-12-13T14:30:04.061920398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.066176 env[1435]: time="2024-12-13T14:30:04.066102002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.070237 env[1435]: time="2024-12-13T14:30:04.070189105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.073609 env[1435]: time="2024-12-13T14:30:04.073564508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:04.073935 env[1435]: time="2024-12-13T14:30:04.073900308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:30:04.085492 env[1435]: time="2024-12-13T14:30:04.085450817Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:30:05.277693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252150407.mount: Deactivated successfully. 
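The PullImage entries above are containerd's CRI plugin fetching the v1.29.12 control-plane images. The same pulls can be reproduced by hand with ctr (bundled with containerd) against the k8s.io namespace, or with crictl if it happens to be installed.

# Pull and list images in the namespace the CRI plugin uses.
ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.29.12
ctr --namespace k8s.io images ls -q
# or, if crictl is available:
# crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.29.12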
Dec 13 14:30:05.860521 env[1435]: time="2024-12-13T14:30:05.860464714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:05.865619 env[1435]: time="2024-12-13T14:30:05.865579418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:05.869284 env[1435]: time="2024-12-13T14:30:05.869243621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:05.871969 env[1435]: time="2024-12-13T14:30:05.871931323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:05.872391 env[1435]: time="2024-12-13T14:30:05.872359123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:30:05.882467 env[1435]: time="2024-12-13T14:30:05.882436831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:30:06.480463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759485682.mount: Deactivated successfully. Dec 13 14:30:07.665725 env[1435]: time="2024-12-13T14:30:07.665667039Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:07.673257 env[1435]: time="2024-12-13T14:30:07.673202712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:07.677829 env[1435]: time="2024-12-13T14:30:07.677780617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:07.682454 env[1435]: time="2024-12-13T14:30:07.682415423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:07.683059 env[1435]: time="2024-12-13T14:30:07.683026037Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:30:07.693589 env[1435]: time="2024-12-13T14:30:07.693540578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:30:08.203150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024495115.mount: Deactivated successfully. 
Dec 13 14:30:08.224219 env[1435]: time="2024-12-13T14:30:08.224151400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.233449 env[1435]: time="2024-12-13T14:30:08.233400407Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.236940 env[1435]: time="2024-12-13T14:30:08.236900385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.240465 env[1435]: time="2024-12-13T14:30:08.240424663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:08.240876 env[1435]: time="2024-12-13T14:30:08.240841072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:30:08.250696 env[1435]: time="2024-12-13T14:30:08.250664691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:30:08.784247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount89922790.mount: Deactivated successfully. Dec 13 14:30:12.148854 env[1435]: time="2024-12-13T14:30:12.148794000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.155405 env[1435]: time="2024-12-13T14:30:12.155355231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.159642 env[1435]: time="2024-12-13T14:30:12.159597715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.165045 env[1435]: time="2024-12-13T14:30:12.165008523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:12.165696 env[1435]: time="2024-12-13T14:30:12.165658036Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:30:13.046395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 14:30:13.046757 systemd[1]: Stopped kubelet.service. Dec 13 14:30:13.048917 systemd[1]: Starting kubelet.service... Dec 13 14:30:13.191470 systemd[1]: Started kubelet.service. 
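The ImageCreate/ImageUpdate/PullImage events above come from containerd's CRI plugin pulling the control-plane images (kube-proxy, coredns, pause, etcd) into the k8s.io namespace. A hedged sketch of the same pull done directly with the containerd 1.x Go client, assuming the module github.com/containerd/containerd and the default socket path /run/containerd/containerd.sock:

```go
// pull.go - a minimal sketch of pulling an image with the containerd Go
// client; the kubelet's CRI calls drive the same flow that produced the
// ImageCreate/ImageUpdate events above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```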
Dec 13 14:30:13.233770 kubelet[2017]: E1213 14:30:13.233734 2017 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:30:13.235317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:30:13.235473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:30:15.561599 systemd[1]: Stopped kubelet.service. Dec 13 14:30:15.564485 systemd[1]: Starting kubelet.service... Dec 13 14:30:15.591076 systemd[1]: Reloading. Dec 13 14:30:15.675562 /usr/lib/systemd/system-generators/torcx-generator[2089]: time="2024-12-13T14:30:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:30:15.698178 /usr/lib/systemd/system-generators/torcx-generator[2089]: time="2024-12-13T14:30:15Z" level=info msg="torcx already run" Dec 13 14:30:15.789924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:15.789943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:15.814485 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:15.918691 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:30:15.918800 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:30:15.919040 systemd[1]: Stopped kubelet.service. Dec 13 14:30:15.920876 systemd[1]: Starting kubelet.service... Dec 13 14:30:16.152196 systemd[1]: Started kubelet.service. Dec 13 14:30:16.200362 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:16.200702 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:30:16.200746 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
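The deprecation warnings above say that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move into the kubelet's config file. The sketch below only illustrates that idea with a hand-rolled struct and JSON; the real kubelet loads a versioned KubeletConfiguration (normally YAML). The endpoint value is an assumption, while the pause image and volume plugin directory are the ones seen in this log:

```go
// flags_to_config.go - illustrative only; shows deprecated flag values
// expressed as a file-backed, typed configuration instead of CLI flags.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Hypothetical subset of a kubelet-style configuration.
type Config struct {
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	PodInfraContainerImage   string `json:"podInfraContainerImage"`
	VolumePluginDir          string `json:"volumePluginDir"`
}

func main() {
	raw := []byte(`{
	  "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
	  "podInfraContainerImage": "registry.k8s.io/pause:3.9",
	  "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
	}`)

	var cfg Config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```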
Dec 13 14:30:16.200896 kubelet[2157]: I1213 14:30:16.200867 2157 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:30:16.711321 kubelet[2157]: I1213 14:30:16.711285 2157 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:30:16.711321 kubelet[2157]: I1213 14:30:16.711313 2157 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:30:16.711622 kubelet[2157]: I1213 14:30:16.711609 2157 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:30:16.940624 kubelet[2157]: E1213 14:30:16.940555 2157 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:16.942754 kubelet[2157]: I1213 14:30:16.942709 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:16.993737 kubelet[2157]: I1213 14:30:16.993645 2157 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:30:16.994824 kubelet[2157]: I1213 14:30:16.994335 2157 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:30:16.994824 kubelet[2157]: I1213 14:30:16.994550 2157 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:30:16.995410 kubelet[2157]: I1213 14:30:16.995383 2157 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:30:16.995500 kubelet[2157]: I1213 14:30:16.995414 2157 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:30:16.995549 kubelet[2157]: I1213 14:30:16.995520 2157 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:16.995642 kubelet[2157]: I1213 14:30:16.995628 2157 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:30:16.995704 kubelet[2157]: 
I1213 14:30:16.995649 2157 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:30:16.995704 kubelet[2157]: I1213 14:30:16.995685 2157 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:30:16.995704 kubelet[2157]: I1213 14:30:16.995705 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:30:16.997968 kubelet[2157]: W1213 14:30:16.997767 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:16.997968 kubelet[2157]: E1213 14:30:16.997833 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:16.997968 kubelet[2157]: W1213 14:30:16.997912 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-b3ffbcfb3b&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:16.997968 kubelet[2157]: E1213 14:30:16.997955 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-b3ffbcfb3b&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:16.998216 kubelet[2157]: I1213 14:30:16.998030 2157 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:30:17.039978 kubelet[2157]: I1213 14:30:17.039936 2157 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:30:17.083010 kubelet[2157]: W1213 14:30:17.082950 2157 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
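The Node Config dump above declares four hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5% and imagefs.available < 15% of capacity. A small sketch with hand-rolled types (not the kubelet's own) that spells out what those thresholds express:

```go
// eviction.go - a sketch using hand-rolled types to restate the
// HardEvictionThresholds from the Node Config dump above.
package main

import "fmt"

type Threshold struct {
	Signal     string
	Operator   string  // the dump uses "LessThan" for all four signals
	Quantity   string  // absolute value, e.g. "100Mi"; empty when a percentage is used
	Percentage float64 // fraction of capacity, e.g. 0.1 == 10%
}

func main() {
	thresholds := []Threshold{
		{Signal: "memory.available", Operator: "LessThan", Quantity: "100Mi"},
		{Signal: "nodefs.available", Operator: "LessThan", Percentage: 0.10},
		{Signal: "nodefs.inodesFree", Operator: "LessThan", Percentage: 0.05},
		{Signal: "imagefs.available", Operator: "LessThan", Percentage: 0.15},
	}
	for _, t := range thresholds {
		if t.Quantity != "" {
			fmt.Printf("evict pods when %s drops below %s\n", t.Signal, t.Quantity)
		} else {
			fmt.Printf("evict pods when %s drops below %.0f%% of capacity\n", t.Signal, t.Percentage*100)
		}
	}
}
```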
Dec 13 14:30:17.083848 kubelet[2157]: I1213 14:30:17.083790 2157 server.go:1256] "Started kubelet" Dec 13 14:30:17.084526 kubelet[2157]: I1213 14:30:17.084146 2157 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:30:17.085768 kubelet[2157]: I1213 14:30:17.085309 2157 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:30:17.091811 kubelet[2157]: I1213 14:30:17.091292 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:30:17.091811 kubelet[2157]: I1213 14:30:17.091491 2157 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:30:17.093065 kubelet[2157]: E1213 14:30:17.093047 2157 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-b3ffbcfb3b.1810c2f5b43482f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-b3ffbcfb3b,UID:ci-3510.3.6-a-b3ffbcfb3b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-b3ffbcfb3b,},FirstTimestamp:2024-12-13 14:30:17.083757297 +0000 UTC m=+0.925770796,LastTimestamp:2024-12-13 14:30:17.083757297 +0000 UTC m=+0.925770796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-b3ffbcfb3b,}" Dec 13 14:30:17.094126 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:30:17.094277 kubelet[2157]: I1213 14:30:17.094255 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:30:17.100249 kubelet[2157]: I1213 14:30:17.100223 2157 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:30:17.102445 kubelet[2157]: I1213 14:30:17.102421 2157 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:30:17.102520 kubelet[2157]: I1213 14:30:17.102487 2157 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:30:17.114734 kubelet[2157]: E1213 14:30:17.114716 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-b3ffbcfb3b?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="200ms" Dec 13 14:30:17.114949 kubelet[2157]: W1213 14:30:17.114910 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.115059 kubelet[2157]: E1213 14:30:17.115047 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.130828 kubelet[2157]: I1213 14:30:17.130670 2157 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:30:17.131110 kubelet[2157]: I1213 14:30:17.131090 2157 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:30:17.131810 kubelet[2157]: E1213 14:30:17.131797 2157 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:30:17.132820 kubelet[2157]: I1213 14:30:17.132803 2157 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:30:17.228383 kubelet[2157]: I1213 14:30:17.228346 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:30:17.229999 kubelet[2157]: I1213 14:30:17.229974 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:30:17.230244 kubelet[2157]: I1213 14:30:17.230014 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:30:17.230244 kubelet[2157]: I1213 14:30:17.230201 2157 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:30:17.230358 kubelet[2157]: E1213 14:30:17.230268 2157 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:30:17.231097 kubelet[2157]: W1213 14:30:17.230930 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.231097 kubelet[2157]: E1213 14:30:17.230963 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.270474 kubelet[2157]: I1213 14:30:17.270374 2157 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.272584 kubelet[2157]: I1213 14:30:17.272562 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:30:17.272736 kubelet[2157]: I1213 14:30:17.272717 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:30:17.272821 kubelet[2157]: I1213 14:30:17.272752 2157 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:17.273027 kubelet[2157]: E1213 14:30:17.272681 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.292805 kubelet[2157]: I1213 14:30:17.292421 2157 policy_none.go:49] "None policy: Start" Dec 13 14:30:17.293976 kubelet[2157]: I1213 14:30:17.293952 2157 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:30:17.294126 kubelet[2157]: I1213 14:30:17.293986 2157 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:30:17.307994 systemd[1]: Created slice kubepods.slice. Dec 13 14:30:17.312505 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:30:17.315800 kubelet[2157]: E1213 14:30:17.315775 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-b3ffbcfb3b?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="400ms" Dec 13 14:30:17.320177 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:30:17.321367 kubelet[2157]: I1213 14:30:17.321347 2157 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:30:17.321563 kubelet[2157]: I1213 14:30:17.321541 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:30:17.323606 kubelet[2157]: E1213 14:30:17.323544 2157 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-b3ffbcfb3b\" not found" Dec 13 14:30:17.330715 kubelet[2157]: I1213 14:30:17.330695 2157 topology_manager.go:215] "Topology Admit Handler" podUID="9593d47cec552f0c1bfe17950fd01ebf" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.331996 kubelet[2157]: I1213 14:30:17.331976 2157 topology_manager.go:215] "Topology Admit Handler" podUID="8b5c33e5e6ea0b64e302f2377e492844" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.333462 kubelet[2157]: I1213 14:30:17.333440 2157 topology_manager.go:215] "Topology Admit Handler" podUID="b4af8acf5426c3a2ab3b297630d37be7" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.339073 systemd[1]: Created slice kubepods-burstable-pod9593d47cec552f0c1bfe17950fd01ebf.slice. Dec 13 14:30:17.348336 systemd[1]: Created slice kubepods-burstable-pod8b5c33e5e6ea0b64e302f2377e492844.slice. Dec 13 14:30:17.352691 systemd[1]: Created slice kubepods-burstable-podb4af8acf5426c3a2ab3b297630d37be7.slice. 
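The three Topology Admit Handler entries correspond to the static pod manifests the kubelet reads from the path added earlier, /etc/kubernetes/manifests; each manifest gets a kubepods-burstable-pod<UID>.slice cgroup and, later, a mirror pod. A sketch that merely enumerates such a manifest directory (the real kubelet also watches it for changes):

```go
// staticpods.go - a sketch of listing the static pod manifests under the
// path the kubelet registered earlier ("Adding static pod path").
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if e.IsDir() || !strings.HasSuffix(e.Name(), ".yaml") {
			continue
		}
		// Each manifest becomes a mirror pod such as
		// kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b in the log above.
		fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
	}
}
```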
Dec 13 14:30:17.403677 kubelet[2157]: I1213 14:30:17.403646 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.403805 kubelet[2157]: I1213 14:30:17.403697 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4af8acf5426c3a2ab3b297630d37be7-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"b4af8acf5426c3a2ab3b297630d37be7\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.403805 kubelet[2157]: I1213 14:30:17.403722 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.403805 kubelet[2157]: I1213 14:30:17.403749 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.403805 kubelet[2157]: I1213 14:30:17.403778 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.403805 kubelet[2157]: I1213 14:30:17.403806 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.404027 kubelet[2157]: I1213 14:30:17.403832 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.404027 kubelet[2157]: I1213 14:30:17.403863 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.404027 kubelet[2157]: I1213 14:30:17.403904 2157 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.475593 kubelet[2157]: I1213 14:30:17.475565 2157 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.476261 kubelet[2157]: E1213 14:30:17.476233 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.648821 env[1435]: time="2024-12-13T14:30:17.648493012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b,Uid:9593d47cec552f0c1bfe17950fd01ebf,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:17.651710 env[1435]: time="2024-12-13T14:30:17.651670167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b,Uid:8b5c33e5e6ea0b64e302f2377e492844,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:17.655512 env[1435]: time="2024-12-13T14:30:17.655371232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b,Uid:b4af8acf5426c3a2ab3b297630d37be7,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:17.716706 kubelet[2157]: E1213 14:30:17.716660 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-b3ffbcfb3b?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="800ms" Dec 13 14:30:17.868867 kubelet[2157]: W1213 14:30:17.868804 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-b3ffbcfb3b&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.868867 kubelet[2157]: E1213 14:30:17.868872 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-b3ffbcfb3b&limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.878431 kubelet[2157]: I1213 14:30:17.878403 2157 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.878761 kubelet[2157]: E1213 14:30:17.878740 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:17.968891 kubelet[2157]: W1213 14:30:17.968754 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:17.968891 kubelet[2157]: E1213 14:30:17.968824 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.12:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:18.031038 kubelet[2157]: W1213 14:30:18.030972 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:18.031038 kubelet[2157]: E1213 14:30:18.031043 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:18.051641 kubelet[2157]: E1213 14:30:18.051608 2157 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-b3ffbcfb3b.1810c2f5b43482f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-b3ffbcfb3b,UID:ci-3510.3.6-a-b3ffbcfb3b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-b3ffbcfb3b,},FirstTimestamp:2024-12-13 14:30:17.083757297 +0000 UTC m=+0.925770796,LastTimestamp:2024-12-13 14:30:17.083757297 +0000 UTC m=+0.925770796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-b3ffbcfb3b,}" Dec 13 14:30:18.517996 kubelet[2157]: E1213 14:30:18.517958 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-b3ffbcfb3b?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="1.6s" Dec 13 14:30:18.681394 kubelet[2157]: I1213 14:30:18.681360 2157 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:18.681805 kubelet[2157]: E1213 14:30:18.681773 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.12:6443/api/v1/nodes\": dial tcp 10.200.8.12:6443: connect: connection refused" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:18.726709 kubelet[2157]: W1213 14:30:18.726660 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:18.726709 kubelet[2157]: E1213 14:30:18.726713 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:19.012866 kubelet[2157]: E1213 14:30:19.012818 2157 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.12:6443: connect: connection refused Dec 13 14:30:19.478699 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1460605089.mount: Deactivated successfully. Dec 13 14:30:19.511979 env[1435]: time="2024-12-13T14:30:19.511932271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.515521 env[1435]: time="2024-12-13T14:30:19.515482030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.527388 env[1435]: time="2024-12-13T14:30:19.527353425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.532738 env[1435]: time="2024-12-13T14:30:19.532701213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.537031 env[1435]: time="2024-12-13T14:30:19.536996284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.540018 env[1435]: time="2024-12-13T14:30:19.539983133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.543642 env[1435]: time="2024-12-13T14:30:19.543605393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.547489 env[1435]: time="2024-12-13T14:30:19.547456056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.553255 env[1435]: time="2024-12-13T14:30:19.553220851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.557775 env[1435]: time="2024-12-13T14:30:19.557741526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.574840 env[1435]: time="2024-12-13T14:30:19.574802307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.591620 env[1435]: time="2024-12-13T14:30:19.591585383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:19.646956 env[1435]: time="2024-12-13T14:30:19.646295184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:19.646956 env[1435]: time="2024-12-13T14:30:19.646325785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:19.646956 env[1435]: time="2024-12-13T14:30:19.646335385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:19.646956 env[1435]: time="2024-12-13T14:30:19.646498288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4de8a97c899c943491013bc6457d3613c8755cae5baa2e27e11cda810f57115b pid=2195 runtime=io.containerd.runc.v2 Dec 13 14:30:19.670498 systemd[1]: Started cri-containerd-4de8a97c899c943491013bc6457d3613c8755cae5baa2e27e11cda810f57115b.scope. Dec 13 14:30:19.684260 env[1435]: time="2024-12-13T14:30:19.684191908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:19.684392 env[1435]: time="2024-12-13T14:30:19.684285010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:19.684392 env[1435]: time="2024-12-13T14:30:19.684318810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:19.684514 env[1435]: time="2024-12-13T14:30:19.684481013Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/214e874ce8249c5c3c96e1c1c67a122038ce7697baaedee07f4f4f0ea9f734cc pid=2223 runtime=io.containerd.runc.v2 Dec 13 14:30:19.689017 env[1435]: time="2024-12-13T14:30:19.688954887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:19.689131 env[1435]: time="2024-12-13T14:30:19.689027588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:19.689131 env[1435]: time="2024-12-13T14:30:19.689057288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:19.689353 env[1435]: time="2024-12-13T14:30:19.689310393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/498ed76d7b898bdd41b169b3e1e7952e4e8ca587259c00dd5e622a8f727706a7 pid=2236 runtime=io.containerd.runc.v2 Dec 13 14:30:19.705406 systemd[1]: Started cri-containerd-214e874ce8249c5c3c96e1c1c67a122038ce7697baaedee07f4f4f0ea9f734cc.scope. Dec 13 14:30:19.723760 systemd[1]: Started cri-containerd-498ed76d7b898bdd41b169b3e1e7952e4e8ca587259c00dd5e622a8f727706a7.scope. 
Dec 13 14:30:19.752833 env[1435]: time="2024-12-13T14:30:19.752721537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b,Uid:8b5c33e5e6ea0b64e302f2377e492844,Namespace:kube-system,Attempt:0,} returns sandbox id \"4de8a97c899c943491013bc6457d3613c8755cae5baa2e27e11cda810f57115b\"" Dec 13 14:30:19.759403 env[1435]: time="2024-12-13T14:30:19.759363446Z" level=info msg="CreateContainer within sandbox \"4de8a97c899c943491013bc6457d3613c8755cae5baa2e27e11cda810f57115b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:30:19.788029 env[1435]: time="2024-12-13T14:30:19.787977417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b,Uid:9593d47cec552f0c1bfe17950fd01ebf,Namespace:kube-system,Attempt:0,} returns sandbox id \"214e874ce8249c5c3c96e1c1c67a122038ce7697baaedee07f4f4f0ea9f734cc\"" Dec 13 14:30:19.791173 env[1435]: time="2024-12-13T14:30:19.791095669Z" level=info msg="CreateContainer within sandbox \"214e874ce8249c5c3c96e1c1c67a122038ce7697baaedee07f4f4f0ea9f734cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:30:19.812508 env[1435]: time="2024-12-13T14:30:19.812462621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b,Uid:b4af8acf5426c3a2ab3b297630d37be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"498ed76d7b898bdd41b169b3e1e7952e4e8ca587259c00dd5e622a8f727706a7\"" Dec 13 14:30:19.815379 env[1435]: time="2024-12-13T14:30:19.815346068Z" level=info msg="CreateContainer within sandbox \"498ed76d7b898bdd41b169b3e1e7952e4e8ca587259c00dd5e622a8f727706a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:30:19.826447 env[1435]: time="2024-12-13T14:30:19.826414150Z" level=info msg="CreateContainer within sandbox \"4de8a97c899c943491013bc6457d3613c8755cae5baa2e27e11cda810f57115b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4e1882305aceaecae71f75ad19fd16be1c247b45baf5c09e2af552367a27873\"" Dec 13 14:30:19.827057 env[1435]: time="2024-12-13T14:30:19.827037261Z" level=info msg="StartContainer for \"d4e1882305aceaecae71f75ad19fd16be1c247b45baf5c09e2af552367a27873\"" Dec 13 14:30:19.845133 systemd[1]: Started cri-containerd-d4e1882305aceaecae71f75ad19fd16be1c247b45baf5c09e2af552367a27873.scope. 
Dec 13 14:30:19.852632 env[1435]: time="2024-12-13T14:30:19.852591382Z" level=info msg="CreateContainer within sandbox \"214e874ce8249c5c3c96e1c1c67a122038ce7697baaedee07f4f4f0ea9f734cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edf047ef469181e8d7d7b989d62329803183f0d9ca10246195364a4e48c7bd3d\"" Dec 13 14:30:19.853437 env[1435]: time="2024-12-13T14:30:19.853404895Z" level=info msg="StartContainer for \"edf047ef469181e8d7d7b989d62329803183f0d9ca10246195364a4e48c7bd3d\"" Dec 13 14:30:19.872067 env[1435]: time="2024-12-13T14:30:19.872020502Z" level=info msg="CreateContainer within sandbox \"498ed76d7b898bdd41b169b3e1e7952e4e8ca587259c00dd5e622a8f727706a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6f5927b9bec240527a4250165e0e14325ac7ebf9fc4dc97e9fdf7460bfefb94\"" Dec 13 14:30:19.872739 env[1435]: time="2024-12-13T14:30:19.872709013Z" level=info msg="StartContainer for \"e6f5927b9bec240527a4250165e0e14325ac7ebf9fc4dc97e9fdf7460bfefb94\"" Dec 13 14:30:19.896267 systemd[1]: Started cri-containerd-edf047ef469181e8d7d7b989d62329803183f0d9ca10246195364a4e48c7bd3d.scope. Dec 13 14:30:19.919065 env[1435]: time="2024-12-13T14:30:19.918993475Z" level=info msg="StartContainer for \"d4e1882305aceaecae71f75ad19fd16be1c247b45baf5c09e2af552367a27873\" returns successfully" Dec 13 14:30:19.935377 systemd[1]: Started cri-containerd-e6f5927b9bec240527a4250165e0e14325ac7ebf9fc4dc97e9fdf7460bfefb94.scope. Dec 13 14:30:20.015271 env[1435]: time="2024-12-13T14:30:20.015163853Z" level=info msg="StartContainer for \"e6f5927b9bec240527a4250165e0e14325ac7ebf9fc4dc97e9fdf7460bfefb94\" returns successfully" Dec 13 14:30:20.033579 env[1435]: time="2024-12-13T14:30:20.033534947Z" level=info msg="StartContainer for \"edf047ef469181e8d7d7b989d62329803183f0d9ca10246195364a4e48c7bd3d\" returns successfully" Dec 13 14:30:20.118555 kubelet[2157]: E1213 14:30:20.118508 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-b3ffbcfb3b?timeout=10s\": dial tcp 10.200.8.12:6443: connect: connection refused" interval="3.2s" Dec 13 14:30:20.284044 kubelet[2157]: I1213 14:30:20.283943 2157 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:22.542299 kubelet[2157]: I1213 14:30:22.542258 2157 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:23.001995 kubelet[2157]: I1213 14:30:23.001947 2157 apiserver.go:52] "Watching apiserver" Dec 13 14:30:23.102913 kubelet[2157]: I1213 14:30:23.102861 2157 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:30:23.158413 kubelet[2157]: E1213 14:30:23.158371 2157 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.109917 systemd[1]: Reloading. 
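The kubelet lines in this journal carry klog-style headers such as `I1213 14:30:26.697234 2519 server.go:1256] "Started kubelet"`. A small parsing sketch, useful only for slicing logs like this one, that splits such a header into severity, date, time, PID, source location and message:

```go
// klogparse.go - a sketch for splitting the klog-style headers used by
// the kubelet entries in this journal.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ \]]+)\] (.*)$`)

func main() {
	line := `I1213 14:30:26.697234 2519 server.go:1256] "Started kubelet"`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Println("severity:", m[1]) // I, W, E or F
	fmt.Println("date:    ", m[2]) // MMDD
	fmt.Println("time:    ", m[3])
	fmt.Println("pid:     ", m[4])
	fmt.Println("source:  ", m[5]) // file:line
	fmt.Println("message: ", m[6])
}
```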
Dec 13 14:30:26.235735 /usr/lib/systemd/system-generators/torcx-generator[2453]: time="2024-12-13T14:30:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:30:26.235774 /usr/lib/systemd/system-generators/torcx-generator[2453]: time="2024-12-13T14:30:26Z" level=info msg="torcx already run" Dec 13 14:30:26.338924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:30:26.338947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:30:26.357783 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:30:26.475659 kubelet[2157]: I1213 14:30:26.475516 2157 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:26.476650 systemd[1]: Stopping kubelet.service... Dec 13 14:30:26.490813 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:30:26.491059 systemd[1]: Stopped kubelet.service. Dec 13 14:30:26.491192 systemd[1]: kubelet.service: Consumed 1.019s CPU time. Dec 13 14:30:26.493634 systemd[1]: Starting kubelet.service... Dec 13 14:30:26.593592 systemd[1]: Started kubelet.service. Dec 13 14:30:26.659831 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:26.659831 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:30:26.659831 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:30:26.660390 kubelet[2519]: I1213 14:30:26.659916 2519 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:30:26.671863 kubelet[2519]: I1213 14:30:26.671826 2519 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:30:26.671863 kubelet[2519]: I1213 14:30:26.671862 2519 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:30:26.672228 kubelet[2519]: I1213 14:30:26.672148 2519 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:30:26.675043 kubelet[2519]: I1213 14:30:26.674193 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:30:26.677582 kubelet[2519]: I1213 14:30:26.677536 2519 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:30:26.689765 kubelet[2519]: I1213 14:30:26.689729 2519 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:30:26.689989 kubelet[2519]: I1213 14:30:26.689978 2519 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:30:26.690358 kubelet[2519]: I1213 14:30:26.690240 2519 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:30:26.690358 kubelet[2519]: I1213 14:30:26.690274 2519 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:30:26.690358 kubelet[2519]: I1213 14:30:26.690289 2519 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:30:26.690358 kubelet[2519]: I1213 14:30:26.690339 2519 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:26.691698 kubelet[2519]: I1213 14:30:26.690443 2519 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:30:26.691698 kubelet[2519]: I1213 14:30:26.690460 2519 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:30:26.691698 kubelet[2519]: I1213 14:30:26.690493 2519 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:30:26.691698 kubelet[2519]: I1213 14:30:26.690512 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:30:26.696225 kubelet[2519]: I1213 14:30:26.696187 2519 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:30:26.696629 kubelet[2519]: I1213 14:30:26.696601 2519 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:30:26.697284 kubelet[2519]: I1213 14:30:26.697234 2519 server.go:1256] "Started kubelet" Dec 13 14:30:26.704238 kubelet[2519]: I1213 14:30:26.700370 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:30:26.708495 kubelet[2519]: I1213 14:30:26.708469 2519 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:30:26.709700 kubelet[2519]: I1213 14:30:26.709596 2519 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:30:26.711067 kubelet[2519]: I1213 14:30:26.711041 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 Dec 13 14:30:26.711378 kubelet[2519]: I1213 14:30:26.711367 2519 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:30:26.715109 kubelet[2519]: I1213 14:30:26.715082 2519 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:30:26.719820 kubelet[2519]: I1213 14:30:26.719781 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:30:26.723255 kubelet[2519]: I1213 14:30:26.723230 2519 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:30:26.723407 kubelet[2519]: I1213 14:30:26.723393 2519 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:30:26.725812 kubelet[2519]: I1213 14:30:26.725733 2519 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:30:26.725926 kubelet[2519]: I1213 14:30:26.725917 2519 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:30:26.732520 kubelet[2519]: I1213 14:30:26.732488 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:30:26.734916 kubelet[2519]: I1213 14:30:26.734888 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:30:26.735053 kubelet[2519]: I1213 14:30:26.734927 2519 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:30:26.735053 kubelet[2519]: I1213 14:30:26.734947 2519 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:30:26.735053 kubelet[2519]: E1213 14:30:26.735013 2519 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:30:26.793995 kubelet[2519]: I1213 14:30:26.793785 2519 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:30:26.793995 kubelet[2519]: I1213 14:30:26.793808 2519 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:30:26.793995 kubelet[2519]: I1213 14:30:26.793829 2519 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:30:26.794303 kubelet[2519]: I1213 14:30:26.794025 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:30:26.794303 kubelet[2519]: I1213 14:30:26.794053 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:30:26.794303 kubelet[2519]: I1213 14:30:26.794062 2519 policy_none.go:49] "None policy: Start" Dec 13 14:30:26.794824 kubelet[2519]: I1213 14:30:26.794800 2519 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:30:26.794824 kubelet[2519]: I1213 14:30:26.794828 2519 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:30:26.795057 kubelet[2519]: I1213 14:30:26.795037 2519 state_mem.go:75] "Updated machine memory state" Dec 13 14:30:26.800490 kubelet[2519]: I1213 14:30:26.800462 2519 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:30:26.800778 kubelet[2519]: I1213 14:30:26.800750 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:30:26.818912 kubelet[2519]: I1213 14:30:26.818881 2519 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.833546 kubelet[2519]: I1213 14:30:26.833505 2519 kubelet_node_status.go:112] "Node was previously 
registered" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.833746 kubelet[2519]: I1213 14:30:26.833614 2519 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.835543 kubelet[2519]: I1213 14:30:26.835515 2519 topology_manager.go:215] "Topology Admit Handler" podUID="9593d47cec552f0c1bfe17950fd01ebf" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.835662 kubelet[2519]: I1213 14:30:26.835608 2519 topology_manager.go:215] "Topology Admit Handler" podUID="8b5c33e5e6ea0b64e302f2377e492844" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.835662 kubelet[2519]: I1213 14:30:26.835660 2519 topology_manager.go:215] "Topology Admit Handler" podUID="b4af8acf5426c3a2ab3b297630d37be7" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:26.847474 kubelet[2519]: W1213 14:30:26.847437 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:26.852868 kubelet[2519]: W1213 14:30:26.852832 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:26.854266 kubelet[2519]: W1213 14:30:26.853962 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:27.025038 kubelet[2519]: I1213 14:30:27.024908 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025038 kubelet[2519]: I1213 14:30:27.024963 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025038 kubelet[2519]: I1213 14:30:27.024989 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4af8acf5426c3a2ab3b297630d37be7-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"b4af8acf5426c3a2ab3b297630d37be7\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025038 kubelet[2519]: I1213 14:30:27.025015 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025038 kubelet[2519]: I1213 14:30:27.025042 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025386 kubelet[2519]: I1213 14:30:27.025091 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025386 kubelet[2519]: I1213 14:30:27.025139 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025386 kubelet[2519]: I1213 14:30:27.025167 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b5c33e5e6ea0b64e302f2377e492844-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"8b5c33e5e6ea0b64e302f2377e492844\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.025386 kubelet[2519]: I1213 14:30:27.025194 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9593d47cec552f0c1bfe17950fd01ebf-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" (UID: \"9593d47cec552f0c1bfe17950fd01ebf\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.695707 kubelet[2519]: I1213 14:30:27.695660 2519 apiserver.go:52] "Watching apiserver" Dec 13 14:30:27.724969 kubelet[2519]: I1213 14:30:27.724254 2519 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:30:27.798652 kubelet[2519]: W1213 14:30:27.798620 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:30:27.798978 kubelet[2519]: E1213 14:30:27.798957 2519 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" Dec 13 14:30:27.851890 kubelet[2519]: I1213 14:30:27.851837 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-b3ffbcfb3b" podStartSLOduration=1.851763911 podStartE2EDuration="1.851763911s" podCreationTimestamp="2024-12-13 14:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:27.840358059 +0000 UTC m=+1.241096005" watchObservedRunningTime="2024-12-13 14:30:27.851763911 +0000 UTC m=+1.252501857" Dec 13 14:30:27.874641 kubelet[2519]: I1213 14:30:27.874578 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-b3ffbcfb3b" podStartSLOduration=1.874500514 podStartE2EDuration="1.874500514s" podCreationTimestamp="2024-12-13 14:30:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:27.855049155 +0000 UTC m=+1.255787001" watchObservedRunningTime="2024-12-13 14:30:27.874500514 +0000 UTC m=+1.275238360" Dec 13 14:30:27.899317 kubelet[2519]: I1213 14:30:27.899281 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-b3ffbcfb3b" podStartSLOduration=1.899207144 podStartE2EDuration="1.899207144s" podCreationTimestamp="2024-12-13 14:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:27.875917733 +0000 UTC m=+1.276655579" watchObservedRunningTime="2024-12-13 14:30:27.899207144 +0000 UTC m=+1.299945090" Dec 13 14:30:28.270041 sudo[1739]: pam_unix(sudo:session): session closed for user root Dec 13 14:30:28.399038 sshd[1736]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:28.403708 systemd[1]: sshd@4-10.200.8.12:22-10.200.16.10:56544.service: Deactivated successfully. Dec 13 14:30:28.404741 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:30:28.404925 systemd[1]: session-7.scope: Consumed 3.453s CPU time. Dec 13 14:30:28.405485 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:30:28.406375 systemd-logind[1425]: Removed session 7. Dec 13 14:30:39.096005 kubelet[2519]: I1213 14:30:39.095965 2519 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:30:39.096522 env[1435]: time="2024-12-13T14:30:39.096476648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:30:39.096848 kubelet[2519]: I1213 14:30:39.096691 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:30:40.195337 kubelet[2519]: I1213 14:30:40.195289 2519 topology_manager.go:215] "Topology Admit Handler" podUID="188424f1-2d51-4e3c-87be-462e1076638d" podNamespace="kube-flannel" podName="kube-flannel-ds-7drnp" Dec 13 14:30:40.196493 kubelet[2519]: I1213 14:30:40.196468 2519 topology_manager.go:215] "Topology Admit Handler" podUID="79703c01-c07c-4c86-a7bc-23b2c4e01452" podNamespace="kube-system" podName="kube-proxy-jjc49" Dec 13 14:30:40.210057 systemd[1]: Created slice kubepods-burstable-pod188424f1_2d51_4e3c_87be_462e1076638d.slice. Dec 13 14:30:40.213658 systemd[1]: Created slice kubepods-besteffort-pod79703c01_c07c_4c86_a7bc_23b2c4e01452.slice. 
Dec 13 14:30:40.214563 kubelet[2519]: I1213 14:30:40.214459 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/188424f1-2d51-4e3c-87be-462e1076638d-run\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214563 kubelet[2519]: I1213 14:30:40.214529 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/188424f1-2d51-4e3c-87be-462e1076638d-xtables-lock\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214719 kubelet[2519]: I1213 14:30:40.214584 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x27l\" (UniqueName: \"kubernetes.io/projected/188424f1-2d51-4e3c-87be-462e1076638d-kube-api-access-5x27l\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214719 kubelet[2519]: I1213 14:30:40.214613 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79703c01-c07c-4c86-a7bc-23b2c4e01452-kube-proxy\") pod \"kube-proxy-jjc49\" (UID: \"79703c01-c07c-4c86-a7bc-23b2c4e01452\") " pod="kube-system/kube-proxy-jjc49" Dec 13 14:30:40.214719 kubelet[2519]: I1213 14:30:40.214643 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945lm\" (UniqueName: \"kubernetes.io/projected/79703c01-c07c-4c86-a7bc-23b2c4e01452-kube-api-access-945lm\") pod \"kube-proxy-jjc49\" (UID: \"79703c01-c07c-4c86-a7bc-23b2c4e01452\") " pod="kube-system/kube-proxy-jjc49" Dec 13 14:30:40.214719 kubelet[2519]: I1213 14:30:40.214670 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/188424f1-2d51-4e3c-87be-462e1076638d-cni-plugin\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214719 kubelet[2519]: I1213 14:30:40.214711 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79703c01-c07c-4c86-a7bc-23b2c4e01452-lib-modules\") pod \"kube-proxy-jjc49\" (UID: \"79703c01-c07c-4c86-a7bc-23b2c4e01452\") " pod="kube-system/kube-proxy-jjc49" Dec 13 14:30:40.214918 kubelet[2519]: I1213 14:30:40.214745 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/188424f1-2d51-4e3c-87be-462e1076638d-cni\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214918 kubelet[2519]: I1213 14:30:40.214775 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/188424f1-2d51-4e3c-87be-462e1076638d-flannel-cfg\") pod \"kube-flannel-ds-7drnp\" (UID: \"188424f1-2d51-4e3c-87be-462e1076638d\") " pod="kube-flannel/kube-flannel-ds-7drnp" Dec 13 14:30:40.214918 kubelet[2519]: I1213 14:30:40.214805 2519 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79703c01-c07c-4c86-a7bc-23b2c4e01452-xtables-lock\") pod \"kube-proxy-jjc49\" (UID: \"79703c01-c07c-4c86-a7bc-23b2c4e01452\") " pod="kube-system/kube-proxy-jjc49" Dec 13 14:30:40.524919 env[1435]: time="2024-12-13T14:30:40.524082597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjc49,Uid:79703c01-c07c-4c86-a7bc-23b2c4e01452,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:40.525440 env[1435]: time="2024-12-13T14:30:40.525088206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7drnp,Uid:188424f1-2d51-4e3c-87be-462e1076638d,Namespace:kube-flannel,Attempt:0,}" Dec 13 14:30:40.581594 env[1435]: time="2024-12-13T14:30:40.581358552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:40.581594 env[1435]: time="2024-12-13T14:30:40.581402052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:40.581594 env[1435]: time="2024-12-13T14:30:40.581417152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:40.581847 env[1435]: time="2024-12-13T14:30:40.581630554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f760ecc3aed31aea84df2071e0d69def833bf68a7feb96b765891d2635e575c pid=2593 runtime=io.containerd.runc.v2 Dec 13 14:30:40.588477 env[1435]: time="2024-12-13T14:30:40.588401420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:40.588707 env[1435]: time="2024-12-13T14:30:40.588676923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:40.588851 env[1435]: time="2024-12-13T14:30:40.588827024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:40.589335 env[1435]: time="2024-12-13T14:30:40.589280929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972 pid=2597 runtime=io.containerd.runc.v2 Dec 13 14:30:40.600507 systemd[1]: Started cri-containerd-6f760ecc3aed31aea84df2071e0d69def833bf68a7feb96b765891d2635e575c.scope. Dec 13 14:30:40.615401 systemd[1]: Started cri-containerd-9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972.scope. 
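The flannel-cfg ConfigMap mounted above is where the flannel daemon reads its net-conf.json; its contents are not shown in this log. Judging from the 192.168.0.0/17 route that appears in the delegated netconf later on and from the flannel.1 (VXLAN) interface that comes up at 14:30:49, it would be something close to this assumed sketch:

{
  "Network": "192.168.0.0/17",
  "Backend": { "Type": "vxlan" }
}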
Dec 13 14:30:40.650586 env[1435]: time="2024-12-13T14:30:40.650538022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjc49,Uid:79703c01-c07c-4c86-a7bc-23b2c4e01452,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f760ecc3aed31aea84df2071e0d69def833bf68a7feb96b765891d2635e575c\"" Dec 13 14:30:40.658323 env[1435]: time="2024-12-13T14:30:40.658269097Z" level=info msg="CreateContainer within sandbox \"6f760ecc3aed31aea84df2071e0d69def833bf68a7feb96b765891d2635e575c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:30:40.683190 env[1435]: time="2024-12-13T14:30:40.682808235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7drnp,Uid:188424f1-2d51-4e3c-87be-462e1076638d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\"" Dec 13 14:30:40.685594 env[1435]: time="2024-12-13T14:30:40.684638053Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 14:30:40.702264 env[1435]: time="2024-12-13T14:30:40.702234124Z" level=info msg="CreateContainer within sandbox \"6f760ecc3aed31aea84df2071e0d69def833bf68a7feb96b765891d2635e575c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"77bef0341ab4e0a8997b784b685f62a08a052cb785d1e31c3f2d2ce8ef103aaf\"" Dec 13 14:30:40.703856 env[1435]: time="2024-12-13T14:30:40.702922030Z" level=info msg="StartContainer for \"77bef0341ab4e0a8997b784b685f62a08a052cb785d1e31c3f2d2ce8ef103aaf\"" Dec 13 14:30:40.719822 systemd[1]: Started cri-containerd-77bef0341ab4e0a8997b784b685f62a08a052cb785d1e31c3f2d2ce8ef103aaf.scope. Dec 13 14:30:40.768279 env[1435]: time="2024-12-13T14:30:40.768238563Z" level=info msg="StartContainer for \"77bef0341ab4e0a8997b784b685f62a08a052cb785d1e31c3f2d2ce8ef103aaf\" returns successfully" Dec 13 14:30:40.822524 kubelet[2519]: I1213 14:30:40.822491 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jjc49" podStartSLOduration=0.822431489 podStartE2EDuration="822.431489ms" podCreationTimestamp="2024-12-13 14:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:40.822415989 +0000 UTC m=+14.223153835" watchObservedRunningTime="2024-12-13 14:30:40.822431489 +0000 UTC m=+14.223169335" Dec 13 14:30:42.552022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2250097865.mount: Deactivated successfully. 
Dec 13 14:30:42.639602 env[1435]: time="2024-12-13T14:30:42.639558401Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:42.647167 env[1435]: time="2024-12-13T14:30:42.647108071Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:42.651614 env[1435]: time="2024-12-13T14:30:42.651540612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:42.658241 env[1435]: time="2024-12-13T14:30:42.658204773Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:42.658758 env[1435]: time="2024-12-13T14:30:42.658728578Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 14:30:42.661691 env[1435]: time="2024-12-13T14:30:42.661661305Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 14:30:42.695970 env[1435]: time="2024-12-13T14:30:42.695923022Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593\"" Dec 13 14:30:42.697669 env[1435]: time="2024-12-13T14:30:42.697640738Z" level=info msg="StartContainer for \"adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593\"" Dec 13 14:30:42.719695 systemd[1]: Started cri-containerd-adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593.scope. Dec 13 14:30:42.748536 systemd[1]: cri-containerd-adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593.scope: Deactivated successfully. 
Dec 13 14:30:42.749986 env[1435]: time="2024-12-13T14:30:42.749898222Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod188424f1_2d51_4e3c_87be_462e1076638d.slice/cri-containerd-adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593.scope/memory.events\": no such file or directory" Dec 13 14:30:42.754995 env[1435]: time="2024-12-13T14:30:42.754955669Z" level=info msg="StartContainer for \"adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593\" returns successfully" Dec 13 14:30:42.847372 env[1435]: time="2024-12-13T14:30:42.847242723Z" level=info msg="shim disconnected" id=adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593 Dec 13 14:30:42.847372 env[1435]: time="2024-12-13T14:30:42.847292223Z" level=warning msg="cleaning up after shim disconnected" id=adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593 namespace=k8s.io Dec 13 14:30:42.847372 env[1435]: time="2024-12-13T14:30:42.847304223Z" level=info msg="cleaning up dead shim" Dec 13 14:30:42.856351 env[1435]: time="2024-12-13T14:30:42.856305507Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2866 runtime=io.containerd.runc.v2\n" Dec 13 14:30:43.458364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adcefdeace31d0adb5346cd81fb77f9544aef1371106f590e97b9d908e48b593-rootfs.mount: Deactivated successfully. Dec 13 14:30:43.818302 env[1435]: time="2024-12-13T14:30:43.818250638Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 14:30:45.798537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235997307.mount: Deactivated successfully. 
Dec 13 14:30:46.781914 env[1435]: time="2024-12-13T14:30:46.781863480Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:46.789356 env[1435]: time="2024-12-13T14:30:46.789313543Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:46.794215 env[1435]: time="2024-12-13T14:30:46.794177984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:46.798411 env[1435]: time="2024-12-13T14:30:46.798380919Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:46.799021 env[1435]: time="2024-12-13T14:30:46.798985724Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 14:30:46.801938 env[1435]: time="2024-12-13T14:30:46.801897649Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:30:46.836189 env[1435]: time="2024-12-13T14:30:46.836156539Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330\"" Dec 13 14:30:46.837906 env[1435]: time="2024-12-13T14:30:46.837201248Z" level=info msg="StartContainer for \"7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330\"" Dec 13 14:30:46.860014 systemd[1]: Started cri-containerd-7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330.scope. Dec 13 14:30:46.887206 systemd[1]: cri-containerd-7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330.scope: Deactivated successfully. Dec 13 14:30:46.891453 env[1435]: time="2024-12-13T14:30:46.891289505Z" level=info msg="StartContainer for \"7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330\" returns successfully" Dec 13 14:30:46.917626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330-rootfs.mount: Deactivated successfully. 
Dec 13 14:30:46.940313 kubelet[2519]: I1213 14:30:46.940187 2519 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:30:46.990320 kubelet[2519]: I1213 14:30:46.974245 2519 topology_manager.go:215] "Topology Admit Handler" podUID="c42dbec2-cb3e-46c5-be0d-b001e4f306a6" podNamespace="kube-system" podName="coredns-76f75df574-fkt6c" Dec 13 14:30:46.990320 kubelet[2519]: W1213 14:30:46.982675 2519 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-b3ffbcfb3b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-b3ffbcfb3b' and this object Dec 13 14:30:46.990320 kubelet[2519]: E1213 14:30:46.982722 2519 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.6-a-b3ffbcfb3b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.6-a-b3ffbcfb3b' and this object Dec 13 14:30:46.990320 kubelet[2519]: I1213 14:30:46.983657 2519 topology_manager.go:215] "Topology Admit Handler" podUID="42770b59-3d1d-4fe1-adc1-c60bfb139d13" podNamespace="kube-system" podName="coredns-76f75df574-s74cv" Dec 13 14:30:46.981246 systemd[1]: Created slice kubepods-burstable-podc42dbec2_cb3e_46c5_be0d_b001e4f306a6.slice. Dec 13 14:30:46.993261 systemd[1]: Created slice kubepods-burstable-pod42770b59_3d1d_4fe1_adc1_c60bfb139d13.slice. Dec 13 14:30:47.068556 kubelet[2519]: I1213 14:30:47.068499 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4wm\" (UniqueName: \"kubernetes.io/projected/42770b59-3d1d-4fe1-adc1-c60bfb139d13-kube-api-access-bz4wm\") pod \"coredns-76f75df574-s74cv\" (UID: \"42770b59-3d1d-4fe1-adc1-c60bfb139d13\") " pod="kube-system/coredns-76f75df574-s74cv" Dec 13 14:30:47.068870 kubelet[2519]: I1213 14:30:47.068844 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtw9\" (UniqueName: \"kubernetes.io/projected/c42dbec2-cb3e-46c5-be0d-b001e4f306a6-kube-api-access-5gtw9\") pod \"coredns-76f75df574-fkt6c\" (UID: \"c42dbec2-cb3e-46c5-be0d-b001e4f306a6\") " pod="kube-system/coredns-76f75df574-fkt6c" Dec 13 14:30:47.068991 kubelet[2519]: I1213 14:30:47.068895 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42770b59-3d1d-4fe1-adc1-c60bfb139d13-config-volume\") pod \"coredns-76f75df574-s74cv\" (UID: \"42770b59-3d1d-4fe1-adc1-c60bfb139d13\") " pod="kube-system/coredns-76f75df574-s74cv" Dec 13 14:30:47.068991 kubelet[2519]: I1213 14:30:47.068932 2519 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42dbec2-cb3e-46c5-be0d-b001e4f306a6-config-volume\") pod \"coredns-76f75df574-fkt6c\" (UID: \"c42dbec2-cb3e-46c5-be0d-b001e4f306a6\") " pod="kube-system/coredns-76f75df574-fkt6c" Dec 13 14:30:47.444643 env[1435]: time="2024-12-13T14:30:47.444486500Z" level=info msg="shim disconnected" id=7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330 Dec 13 14:30:47.444643 env[1435]: time="2024-12-13T14:30:47.444554701Z" level=warning msg="cleaning up after shim disconnected" 
id=7e8508968a2c08c7836902ef8df3ee7aa67f12cea9afad7dd1b6c48b55d8c330 namespace=k8s.io Dec 13 14:30:47.444643 env[1435]: time="2024-12-13T14:30:47.444569501Z" level=info msg="cleaning up dead shim" Dec 13 14:30:47.454202 env[1435]: time="2024-12-13T14:30:47.454149580Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" Dec 13 14:30:47.841148 env[1435]: time="2024-12-13T14:30:47.838355157Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 14:30:47.873214 env[1435]: time="2024-12-13T14:30:47.873078244Z" level=info msg="CreateContainer within sandbox \"9fe4f06c0530164683e5003b9a392cf85686b4ea0b7d065cfc3762bc2c79a972\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2ce89cf90154cb6b8c7e1c0785da0e1d009d34868d85cf111fcd71984b00211d\"" Dec 13 14:30:47.874634 env[1435]: time="2024-12-13T14:30:47.874198054Z" level=info msg="StartContainer for \"2ce89cf90154cb6b8c7e1c0785da0e1d009d34868d85cf111fcd71984b00211d\"" Dec 13 14:30:47.903367 systemd[1]: Started cri-containerd-2ce89cf90154cb6b8c7e1c0785da0e1d009d34868d85cf111fcd71984b00211d.scope. Dec 13 14:30:47.933306 env[1435]: time="2024-12-13T14:30:47.933248442Z" level=info msg="StartContainer for \"2ce89cf90154cb6b8c7e1c0785da0e1d009d34868d85cf111fcd71984b00211d\" returns successfully" Dec 13 14:30:48.170782 kubelet[2519]: E1213 14:30:48.170643 2519 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:30:48.170782 kubelet[2519]: E1213 14:30:48.170763 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42770b59-3d1d-4fe1-adc1-c60bfb139d13-config-volume podName:42770b59-3d1d-4fe1-adc1-c60bfb139d13 nodeName:}" failed. No retries permitted until 2024-12-13 14:30:48.670734676 +0000 UTC m=+22.071472522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42770b59-3d1d-4fe1-adc1-c60bfb139d13-config-volume") pod "coredns-76f75df574-s74cv" (UID: "42770b59-3d1d-4fe1-adc1-c60bfb139d13") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:30:48.171431 kubelet[2519]: E1213 14:30:48.170643 2519 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:30:48.171431 kubelet[2519]: E1213 14:30:48.171105 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c42dbec2-cb3e-46c5-be0d-b001e4f306a6-config-volume podName:c42dbec2-cb3e-46c5-be0d-b001e4f306a6 nodeName:}" failed. No retries permitted until 2024-12-13 14:30:48.671079579 +0000 UTC m=+22.071817425 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c42dbec2-cb3e-46c5-be0d-b001e4f306a6-config-volume") pod "coredns-76f75df574-fkt6c" (UID: "c42dbec2-cb3e-46c5-be0d-b001e4f306a6") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:30:48.792217 env[1435]: time="2024-12-13T14:30:48.792160604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fkt6c,Uid:c42dbec2-cb3e-46c5-be0d-b001e4f306a6,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:48.797047 env[1435]: time="2024-12-13T14:30:48.796803641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s74cv,Uid:42770b59-3d1d-4fe1-adc1-c60bfb139d13,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:48.826021 systemd[1]: run-containerd-runc-k8s.io-2ce89cf90154cb6b8c7e1c0785da0e1d009d34868d85cf111fcd71984b00211d-runc.dg3eKM.mount: Deactivated successfully. Dec 13 14:30:48.874991 env[1435]: time="2024-12-13T14:30:48.874921873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fkt6c,Uid:c42dbec2-cb3e-46c5-be0d-b001e4f306a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:30:48.876453 kubelet[2519]: E1213 14:30:48.876414 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:30:48.876590 kubelet[2519]: E1213 14:30:48.876477 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fkt6c" Dec 13 14:30:48.876590 kubelet[2519]: E1213 14:30:48.876501 2519 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fkt6c" Dec 13 14:30:48.876590 kubelet[2519]: E1213 14:30:48.876578 2519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fkt6c_kube-system(c42dbec2-cb3e-46c5-be0d-b001e4f306a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fkt6c_kube-system(c42dbec2-cb3e-46c5-be0d-b001e4f306a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-fkt6c" podUID="c42dbec2-cb3e-46c5-be0d-b001e4f306a6" Dec 13 14:30:48.886434 env[1435]: time="2024-12-13T14:30:48.886342966Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-s74cv,Uid:42770b59-3d1d-4fe1-adc1-c60bfb139d13,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:30:48.886678 kubelet[2519]: E1213 14:30:48.886649 2519 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:30:48.886775 kubelet[2519]: E1213 14:30:48.886706 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-s74cv" Dec 13 14:30:48.886775 kubelet[2519]: E1213 14:30:48.886732 2519 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-s74cv" Dec 13 14:30:48.886865 kubelet[2519]: E1213 14:30:48.886805 2519 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s74cv_kube-system(42770b59-3d1d-4fe1-adc1-c60bfb139d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s74cv_kube-system(42770b59-3d1d-4fe1-adc1-c60bfb139d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-s74cv" podUID="42770b59-3d1d-4fe1-adc1-c60bfb139d13" Dec 13 14:30:49.103534 systemd-networkd[1587]: flannel.1: Link UP Dec 13 14:30:49.103543 systemd-networkd[1587]: flannel.1: Gained carrier Dec 13 14:30:49.825735 systemd[1]: run-netns-cni\x2d582d6533\x2d307e\x2d76a1\x2d8e44\x2dc0c1b52d40ad.mount: Deactivated successfully. Dec 13 14:30:49.825850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98a7b85d0d127f4a0eb58ca5deb8bc318639c9707069691d2056a91ef6c6b490-shm.mount: Deactivated successfully. Dec 13 14:30:49.825930 systemd[1]: run-netns-cni\x2df109e29d\x2d4b1b\x2d1e75\x2de72a\x2dd3324b164562.mount: Deactivated successfully. Dec 13 14:30:49.826007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8355587a220e2b2a58da49785127f70b907982b9bb703576874b6367e546da52-shm.mount: Deactivated successfully. 
Dec 13 14:30:51.087290 systemd-networkd[1587]: flannel.1: Gained IPv6LL Dec 13 14:31:01.736497 env[1435]: time="2024-12-13T14:31:01.736435480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fkt6c,Uid:c42dbec2-cb3e-46c5-be0d-b001e4f306a6,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:01.785622 systemd-networkd[1587]: cni0: Link UP Dec 13 14:31:01.785631 systemd-networkd[1587]: cni0: Gained carrier Dec 13 14:31:01.787419 systemd-networkd[1587]: cni0: Lost carrier Dec 13 14:31:01.843292 systemd-networkd[1587]: veth9fe23f42: Link UP Dec 13 14:31:01.852280 kernel: cni0: port 1(veth9fe23f42) entered blocking state Dec 13 14:31:01.852372 kernel: cni0: port 1(veth9fe23f42) entered disabled state Dec 13 14:31:01.852399 kernel: device veth9fe23f42 entered promiscuous mode Dec 13 14:31:01.856406 kernel: cni0: port 1(veth9fe23f42) entered blocking state Dec 13 14:31:01.862696 kernel: cni0: port 1(veth9fe23f42) entered forwarding state Dec 13 14:31:01.865889 kernel: cni0: port 1(veth9fe23f42) entered disabled state Dec 13 14:31:01.874150 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth9fe23f42: link becomes ready Dec 13 14:31:01.874225 kernel: cni0: port 1(veth9fe23f42) entered blocking state Dec 13 14:31:01.880897 kernel: cni0: port 1(veth9fe23f42) entered forwarding state Dec 13 14:31:01.881003 systemd-networkd[1587]: veth9fe23f42: Gained carrier Dec 13 14:31:01.881685 systemd-networkd[1587]: cni0: Gained carrier Dec 13 14:31:01.883930 env[1435]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000b08e8), "name":"cbr0", "type":"bridge"} Dec 13 14:31:01.883930 env[1435]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:31:01.901966 env[1435]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:31:01.901895009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:01.901966 env[1435]: time="2024-12-13T14:31:01.901933009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:01.901966 env[1435]: time="2024-12-13T14:31:01.901946909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:01.902358 env[1435]: time="2024-12-13T14:31:01.902314212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5445edf6cc41e06318c0f50f4128a1b60d41aa51276408d9ac439f2136df273 pid=3183 runtime=io.containerd.runc.v2 Dec 13 14:31:01.926370 systemd[1]: Started cri-containerd-d5445edf6cc41e06318c0f50f4128a1b60d41aa51276408d9ac439f2136df273.scope. 
Dec 13 14:31:01.964509 env[1435]: time="2024-12-13T14:31:01.964456598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fkt6c,Uid:c42dbec2-cb3e-46c5-be0d-b001e4f306a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5445edf6cc41e06318c0f50f4128a1b60d41aa51276408d9ac439f2136df273\"" Dec 13 14:31:01.967762 env[1435]: time="2024-12-13T14:31:01.967725518Z" level=info msg="CreateContainer within sandbox \"d5445edf6cc41e06318c0f50f4128a1b60d41aa51276408d9ac439f2136df273\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:31:01.994533 env[1435]: time="2024-12-13T14:31:01.994450184Z" level=info msg="CreateContainer within sandbox \"d5445edf6cc41e06318c0f50f4128a1b60d41aa51276408d9ac439f2136df273\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e37c91b7a7574045f05162152b3e41639c778cdd7a326bbc00327815b56658d9\"" Dec 13 14:31:01.996670 env[1435]: time="2024-12-13T14:31:01.996644398Z" level=info msg="StartContainer for \"e37c91b7a7574045f05162152b3e41639c778cdd7a326bbc00327815b56658d9\"" Dec 13 14:31:02.012821 systemd[1]: Started cri-containerd-e37c91b7a7574045f05162152b3e41639c778cdd7a326bbc00327815b56658d9.scope. Dec 13 14:31:02.043553 env[1435]: time="2024-12-13T14:31:02.043500984Z" level=info msg="StartContainer for \"e37c91b7a7574045f05162152b3e41639c778cdd7a326bbc00327815b56658d9\" returns successfully" Dec 13 14:31:02.881584 kubelet[2519]: I1213 14:31:02.881535 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7drnp" podStartSLOduration=16.766241618 podStartE2EDuration="22.881485298s" podCreationTimestamp="2024-12-13 14:30:40 +0000 UTC" firstStartedPulling="2024-12-13 14:30:40.684060247 +0000 UTC m=+14.084798093" lastFinishedPulling="2024-12-13 14:30:46.799303927 +0000 UTC m=+20.200041773" observedRunningTime="2024-12-13 14:30:48.870935341 +0000 UTC m=+22.271673187" watchObservedRunningTime="2024-12-13 14:31:02.881485298 +0000 UTC m=+36.282223144" Dec 13 14:31:02.900389 kubelet[2519]: I1213 14:31:02.900358 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fkt6c" podStartSLOduration=22.900288412 podStartE2EDuration="22.900288412s" podCreationTimestamp="2024-12-13 14:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:02.8819239 +0000 UTC m=+36.282661846" watchObservedRunningTime="2024-12-13 14:31:02.900288412 +0000 UTC m=+36.301026358" Dec 13 14:31:03.695333 systemd-networkd[1587]: cni0: Gained IPv6LL Dec 13 14:31:03.736459 env[1435]: time="2024-12-13T14:31:03.736392132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s74cv,Uid:42770b59-3d1d-4fe1-adc1-c60bfb139d13,Namespace:kube-system,Attempt:0,}" Dec 13 14:31:03.789973 systemd-networkd[1587]: veth70b01f8c: Link UP Dec 13 14:31:03.799303 kernel: cni0: port 2(veth70b01f8c) entered blocking state Dec 13 14:31:03.799429 kernel: cni0: port 2(veth70b01f8c) entered disabled state Dec 13 14:31:03.803571 kernel: device veth70b01f8c entered promiscuous mode Dec 13 14:31:03.817235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:31:03.817368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth70b01f8c: link becomes ready Dec 13 14:31:03.817397 kernel: cni0: port 2(veth70b01f8c) entered blocking state Dec 13 14:31:03.817418 kernel: cni0: port 2(veth70b01f8c) entered forwarding state Dec 13 14:31:03.820559 
systemd-networkd[1587]: veth70b01f8c: Gained carrier Dec 13 14:31:03.822587 env[1435]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 13 14:31:03.822587 env[1435]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:31:03.836740 env[1435]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:31:03.836643833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:31:03.836740 env[1435]: time="2024-12-13T14:31:03.836684333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:31:03.836740 env[1435]: time="2024-12-13T14:31:03.836699433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:31:03.837192 env[1435]: time="2024-12-13T14:31:03.837137836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125 pid=3292 runtime=io.containerd.runc.v2 Dec 13 14:31:03.866007 systemd[1]: Started cri-containerd-63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125.scope. Dec 13 14:31:03.887247 systemd-networkd[1587]: veth9fe23f42: Gained IPv6LL Dec 13 14:31:03.910719 env[1435]: time="2024-12-13T14:31:03.910670776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s74cv,Uid:42770b59-3d1d-4fe1-adc1-c60bfb139d13,Namespace:kube-system,Attempt:0,} returns sandbox id \"63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125\"" Dec 13 14:31:03.913929 env[1435]: time="2024-12-13T14:31:03.913665694Z" level=info msg="CreateContainer within sandbox \"63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:31:03.940899 env[1435]: time="2024-12-13T14:31:03.940844957Z" level=info msg="CreateContainer within sandbox \"63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a32910abc879cff72f71f494c26a2b66e15076af0190b9d5af7b1822b6c0c743\"" Dec 13 14:31:03.942918 env[1435]: time="2024-12-13T14:31:03.941694862Z" level=info msg="StartContainer for \"a32910abc879cff72f71f494c26a2b66e15076af0190b9d5af7b1822b6c0c743\"" Dec 13 14:31:03.958976 systemd[1]: Started cri-containerd-a32910abc879cff72f71f494c26a2b66e15076af0190b9d5af7b1822b6c0c743.scope. 
Dec 13 14:31:03.993313 env[1435]: time="2024-12-13T14:31:03.993240871Z" level=info msg="StartContainer for \"a32910abc879cff72f71f494c26a2b66e15076af0190b9d5af7b1822b6c0c743\" returns successfully" Dec 13 14:31:04.766513 systemd[1]: run-containerd-runc-k8s.io-63751387b30d9a5df8f2c87eda237986361ce4cbc1895eca1e8396e9f84bb125-runc.IdfVqy.mount: Deactivated successfully. Dec 13 14:31:05.103333 systemd-networkd[1587]: veth70b01f8c: Gained IPv6LL Dec 13 14:31:08.813572 kubelet[2519]: I1213 14:31:08.813528 2519 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s74cv" podStartSLOduration=28.813448993 podStartE2EDuration="28.813448993s" podCreationTimestamp="2024-12-13 14:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:31:04.891054954 +0000 UTC m=+38.291792800" watchObservedRunningTime="2024-12-13 14:31:08.813448993 +0000 UTC m=+42.214186939" Dec 13 14:32:30.347159 systemd[1]: Started sshd@5-10.200.8.12:22-10.200.16.10:34968.service. Dec 13 14:32:31.070099 sshd[3760]: Accepted publickey for core from 10.200.16.10 port 34968 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:31.071688 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:31.076214 systemd-logind[1425]: New session 8 of user core. Dec 13 14:32:31.076967 systemd[1]: Started session-8.scope. Dec 13 14:32:31.706861 sshd[3760]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:31.710106 systemd[1]: sshd@5-10.200.8.12:22-10.200.16.10:34968.service: Deactivated successfully. Dec 13 14:32:31.711107 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:32:31.712402 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:32:31.713549 systemd-logind[1425]: Removed session 8. Dec 13 14:32:36.828028 systemd[1]: Started sshd@6-10.200.8.12:22-10.200.16.10:34978.service. Dec 13 14:32:37.539558 sshd[3797]: Accepted publickey for core from 10.200.16.10 port 34978 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:37.541296 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:37.548207 systemd[1]: Started session-9.scope. Dec 13 14:32:37.548814 systemd-logind[1425]: New session 9 of user core. Dec 13 14:32:38.094198 sshd[3797]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:38.096983 systemd[1]: sshd@6-10.200.8.12:22-10.200.16.10:34978.service: Deactivated successfully. Dec 13 14:32:38.097938 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:32:38.098617 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:32:38.099419 systemd-logind[1425]: Removed session 9. Dec 13 14:32:43.215089 systemd[1]: Started sshd@7-10.200.8.12:22-10.200.16.10:34374.service. Dec 13 14:32:43.927265 sshd[3832]: Accepted publickey for core from 10.200.16.10 port 34374 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:43.928666 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:43.933603 systemd[1]: Started session-10.scope. Dec 13 14:32:43.934230 systemd-logind[1425]: New session 10 of user core. Dec 13 14:32:44.483812 sshd[3832]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:44.487084 systemd[1]: sshd@7-10.200.8.12:22-10.200.16.10:34374.service: Deactivated successfully. 
Dec 13 14:32:44.488217 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:32:44.489099 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:32:44.490262 systemd-logind[1425]: Removed session 10. Dec 13 14:32:44.603287 systemd[1]: Started sshd@8-10.200.8.12:22-10.200.16.10:34380.service. Dec 13 14:32:45.317362 sshd[3864]: Accepted publickey for core from 10.200.16.10 port 34380 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:45.319036 sshd[3864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:45.324111 systemd[1]: Started session-11.scope. Dec 13 14:32:45.324204 systemd-logind[1425]: New session 11 of user core. Dec 13 14:32:45.909449 sshd[3864]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:45.912724 systemd[1]: sshd@8-10.200.8.12:22-10.200.16.10:34380.service: Deactivated successfully. Dec 13 14:32:45.914160 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:32:45.914257 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:32:45.915697 systemd-logind[1425]: Removed session 11. Dec 13 14:32:46.030058 systemd[1]: Started sshd@9-10.200.8.12:22-10.200.16.10:34382.service. Dec 13 14:32:46.743154 sshd[3875]: Accepted publickey for core from 10.200.16.10 port 34382 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:46.744253 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:46.750284 systemd[1]: Started session-12.scope. Dec 13 14:32:46.751026 systemd-logind[1425]: New session 12 of user core. Dec 13 14:32:47.295815 sshd[3875]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:47.298841 systemd[1]: sshd@9-10.200.8.12:22-10.200.16.10:34382.service: Deactivated successfully. Dec 13 14:32:47.299876 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:32:47.300625 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:32:47.301549 systemd-logind[1425]: Removed session 12. Dec 13 14:32:52.416176 systemd[1]: Started sshd@10-10.200.8.12:22-10.200.16.10:60434.service. Dec 13 14:32:53.130818 sshd[3908]: Accepted publickey for core from 10.200.16.10 port 60434 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:53.132296 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:53.136821 systemd-logind[1425]: New session 13 of user core. Dec 13 14:32:53.137734 systemd[1]: Started session-13.scope. Dec 13 14:32:53.690448 sshd[3908]: pam_unix(sshd:session): session closed for user core Dec 13 14:32:53.693720 systemd[1]: sshd@10-10.200.8.12:22-10.200.16.10:60434.service: Deactivated successfully. Dec 13 14:32:53.694870 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:32:53.695755 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:32:53.696757 systemd-logind[1425]: Removed session 13. Dec 13 14:32:58.809308 systemd[1]: Started sshd@11-10.200.8.12:22-10.200.16.10:56500.service. Dec 13 14:32:59.520978 sshd[3941]: Accepted publickey for core from 10.200.16.10 port 56500 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:32:59.522612 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:32:59.528538 systemd-logind[1425]: New session 14 of user core. Dec 13 14:32:59.529194 systemd[1]: Started session-14.scope. 
Dec 13 14:33:00.075974 sshd[3941]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:00.079532 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:33:00.079775 systemd[1]: sshd@11-10.200.8.12:22-10.200.16.10:56500.service: Deactivated successfully. Dec 13 14:33:00.080724 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:33:00.081557 systemd-logind[1425]: Removed session 14. Dec 13 14:33:00.209473 systemd[1]: Started sshd@12-10.200.8.12:22-10.200.16.10:56510.service. Dec 13 14:33:00.920795 sshd[3974]: Accepted publickey for core from 10.200.16.10 port 56510 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:00.922274 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:00.927196 systemd-logind[1425]: New session 15 of user core. Dec 13 14:33:00.927699 systemd[1]: Started session-15.scope. Dec 13 14:33:01.537499 sshd[3974]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:01.541096 systemd[1]: sshd@12-10.200.8.12:22-10.200.16.10:56510.service: Deactivated successfully. Dec 13 14:33:01.542249 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:33:01.544457 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:33:01.546325 systemd-logind[1425]: Removed session 15. Dec 13 14:33:01.657103 systemd[1]: Started sshd@13-10.200.8.12:22-10.200.16.10:56516.service. Dec 13 14:33:02.369416 sshd[3984]: Accepted publickey for core from 10.200.16.10 port 56516 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:02.371080 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:02.376039 systemd[1]: Started session-16.scope. Dec 13 14:33:02.376521 systemd-logind[1425]: New session 16 of user core. Dec 13 14:33:04.146410 sshd[3984]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:04.150033 systemd[1]: sshd@13-10.200.8.12:22-10.200.16.10:56516.service: Deactivated successfully. Dec 13 14:33:04.151190 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:33:04.152166 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:33:04.153214 systemd-logind[1425]: Removed session 16. Dec 13 14:33:04.265840 systemd[1]: Started sshd@14-10.200.8.12:22-10.200.16.10:56522.service. Dec 13 14:33:04.979460 sshd[4002]: Accepted publickey for core from 10.200.16.10 port 56522 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:04.981069 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:04.986147 systemd[1]: Started session-17.scope. Dec 13 14:33:04.986605 systemd-logind[1425]: New session 17 of user core. Dec 13 14:33:05.638496 sshd[4002]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:05.641831 systemd[1]: sshd@14-10.200.8.12:22-10.200.16.10:56522.service: Deactivated successfully. Dec 13 14:33:05.642607 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:33:05.643061 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:33:05.643901 systemd-logind[1425]: Removed session 17. Dec 13 14:33:05.757163 systemd[1]: Started sshd@15-10.200.8.12:22-10.200.16.10:56534.service. 
Dec 13 14:33:06.469290 sshd[4033]: Accepted publickey for core from 10.200.16.10 port 56534 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:06.470767 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:06.474455 systemd-logind[1425]: New session 18 of user core. Dec 13 14:33:06.476326 systemd[1]: Started session-18.scope. Dec 13 14:33:07.022935 sshd[4033]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:07.025906 systemd[1]: sshd@15-10.200.8.12:22-10.200.16.10:56534.service: Deactivated successfully. Dec 13 14:33:07.026900 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:33:07.027600 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:33:07.028487 systemd-logind[1425]: Removed session 18. Dec 13 14:33:12.144063 systemd[1]: Started sshd@16-10.200.8.12:22-10.200.16.10:42614.service. Dec 13 14:33:12.855250 sshd[4070]: Accepted publickey for core from 10.200.16.10 port 42614 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:12.856880 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:12.862317 systemd[1]: Started session-19.scope. Dec 13 14:33:12.862998 systemd-logind[1425]: New session 19 of user core. Dec 13 14:33:13.409749 sshd[4070]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:13.413288 systemd[1]: sshd@16-10.200.8.12:22-10.200.16.10:42614.service: Deactivated successfully. Dec 13 14:33:13.414314 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:33:13.414972 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:33:13.415793 systemd-logind[1425]: Removed session 19. Dec 13 14:33:18.530785 systemd[1]: Started sshd@17-10.200.8.12:22-10.200.16.10:42622.service. Dec 13 14:33:19.242765 sshd[4103]: Accepted publickey for core from 10.200.16.10 port 42622 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:19.244183 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:19.249098 systemd-logind[1425]: New session 20 of user core. Dec 13 14:33:19.249607 systemd[1]: Started session-20.scope. Dec 13 14:33:19.798056 sshd[4103]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:19.801165 systemd[1]: sshd@17-10.200.8.12:22-10.200.16.10:42622.service: Deactivated successfully. Dec 13 14:33:19.802087 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:33:19.802836 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:33:19.803711 systemd-logind[1425]: Removed session 20. Dec 13 14:33:24.917387 systemd[1]: Started sshd@18-10.200.8.12:22-10.200.16.10:35148.service. Dec 13 14:33:25.626714 sshd[4157]: Accepted publickey for core from 10.200.16.10 port 35148 ssh2: RSA SHA256:VL8LvbxVNxa7jmY6OervfMBnEuOtBvTKJ3L6x/+vjOM Dec 13 14:33:25.628234 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:33:25.633149 systemd[1]: Started session-21.scope. Dec 13 14:33:25.633754 systemd-logind[1425]: New session 21 of user core. Dec 13 14:33:26.178703 sshd[4157]: pam_unix(sshd:session): session closed for user core Dec 13 14:33:26.182004 systemd[1]: sshd@18-10.200.8.12:22-10.200.16.10:35148.service: Deactivated successfully. Dec 13 14:33:26.183149 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:33:26.183945 systemd-logind[1425]: Session 21 logged out. 
Waiting for processes to exit. Dec 13 14:33:26.184892 systemd-logind[1425]: Removed session 21.