Feb 9 19:00:17.054923 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:00:17.054956 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:17.054970 kernel: BIOS-provided physical RAM map:
Feb 9 19:00:17.054980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:00:17.054990 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:00:17.055000 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:00:17.055015 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:00:17.055026 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:00:17.055036 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:00:17.055046 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:00:17.055057 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:00:17.055067 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:00:17.055077 kernel: NX (Execute Disable) protection: active
Feb 9 19:00:17.055088 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:00:17.055104 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:00:17.055116 kernel: random: crng init done
Feb 9 19:00:17.055127 kernel: SMBIOS 3.1.0 present.
Feb 9 19:00:17.055138 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:00:17.055149 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:00:17.055161 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:00:17.055172 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:00:17.055183 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:00:17.055196 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:00:17.055207 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:00:17.055219 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:00:17.055230 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:00:17.055242 kernel: tsc: Detected 2593.905 MHz processor
Feb 9 19:00:17.055254 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:00:17.055266 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:00:17.055277 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:00:17.055289 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:00:17.055301 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:00:17.055315 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:00:17.055326 kernel: Using GB pages for direct mapping
Feb 9 19:00:17.055338 kernel: Secure boot disabled
Feb 9 19:00:17.055349 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:00:17.055361 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:00:17.055372 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055384 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055396 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:00:17.055415 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:00:17.055427 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055440 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055452 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055464 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055477 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055492 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055505 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.055517 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:00:17.055530 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:00:17.055542 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:00:17.055555 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:00:17.055567 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:00:17.055580 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:00:17.055594 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:00:17.055607 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:00:17.055619 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:00:17.055632 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:00:17.055644 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:00:17.055686 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:00:17.055699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:00:17.055712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:00:17.055724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:00:17.055740 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:00:17.055753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:00:17.055765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:00:17.055777 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:00:17.055790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:00:17.055802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:00:17.055815 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:00:17.055828 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:00:17.055840 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:00:17.055856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:00:17.055869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:00:17.055881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:00:17.055894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:00:17.055906 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:00:17.055919 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:00:17.055931 kernel: Zone ranges:
Feb 9 19:00:17.055944 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:00:17.055956 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:00:17.055971 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:17.055984 kernel: Movable zone start for each node
Feb 9 19:00:17.055996 kernel: Early memory node ranges
Feb 9 19:00:17.056008 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:00:17.056021 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:00:17.056033 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:00:17.056046 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:17.056058 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:00:17.056071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:00:17.056085 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:00:17.056098 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:00:17.056111 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:00:17.056123 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:00:17.056136 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:00:17.056149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:00:17.056161 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:00:17.056174 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:00:17.056187 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:00:17.056202 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:00:17.056214 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:00:17.056227 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:00:17.056240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:00:17.056253 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:00:17.056265 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:00:17.056277 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:00:17.056289 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:00:17.056302 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:00:17.056317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:00:17.056329 kernel: Policy zone: Normal
Feb 9 19:00:17.056344 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:17.056357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:00:17.056369 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:00:17.056382 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:00:17.056394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:00:17.056407 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:00:17.056422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:00:17.056435 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:00:17.056456 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:00:17.056472 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:00:17.056486 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:00:17.056499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:00:17.056513 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:00:17.056526 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:00:17.056540 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:00:17.056553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:00:17.056566 kernel: Using NULL legacy PIC
Feb 9 19:00:17.056581 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:00:17.056595 kernel: Console: colour dummy device 80x25
Feb 9 19:00:17.056608 kernel: printk: console [tty1] enabled
Feb 9 19:00:17.056621 kernel: printk: console [ttyS0] enabled
Feb 9 19:00:17.056635 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:00:17.083188 kernel: ACPI: Core revision 20210730
Feb 9 19:00:17.083229 kernel: Failed to register legacy timer interrupt
Feb 9 19:00:17.083244 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:00:17.083258 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:00:17.083271 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 9 19:00:17.083285 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:00:17.083299 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:00:17.083312 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:00:17.083324 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:00:17.083337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:00:17.083357 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:00:17.083371 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:00:17.083384 kernel: RETBleed: Vulnerable
Feb 9 19:00:17.083396 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:00:17.083409 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:17.083422 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:17.083434 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:00:17.083449 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:00:17.083473 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:00:17.083486 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:00:17.083513 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:00:17.083532 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:00:17.083550 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:00:17.083564 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:00:17.083576 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:00:17.083589 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:00:17.083602 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:00:17.083615 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:00:17.083628 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:00:17.083642 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:00:17.083663 kernel: LSM: Security Framework initializing
Feb 9 19:00:17.083676 kernel: SELinux: Initializing.
Feb 9 19:00:17.083692 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.083706 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.083719 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:00:17.083732 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:00:17.083749 kernel: signal: max sigframe size: 3632
Feb 9 19:00:17.083762 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:00:17.083776 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:00:17.083789 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:00:17.083802 kernel: x86: Booting SMP configuration:
Feb 9 19:00:17.083816 kernel: .... node #0, CPUs: #1
Feb 9 19:00:17.083833 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:00:17.083848 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:00:17.083861 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:00:17.083874 kernel: smpboot: Max logical packages: 1
Feb 9 19:00:17.083888 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:00:17.083901 kernel: devtmpfs: initialized
Feb 9 19:00:17.083914 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:00:17.083928 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:00:17.083943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:00:17.083957 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:00:17.083970 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:00:17.083983 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:00:17.083996 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:00:17.084009 kernel: audit: type=2000 audit(1707505216.023:1): state=initialized audit_enabled=0 res=1
Feb 9 19:00:17.084023 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:00:17.084036 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:00:17.084049 kernel: cpuidle: using governor menu
Feb 9 19:00:17.084065 kernel: ACPI: bus type PCI registered
Feb 9 19:00:17.084078 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:00:17.084091 kernel: dca service started, version 1.12.1
Feb 9 19:00:17.084104 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:00:17.084115 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:00:17.084127 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:00:17.084140 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:00:17.084152 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:00:17.084163 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:00:17.084177 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:00:17.084188 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:00:17.084199 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:00:17.084207 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:00:17.084216 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:00:17.084226 kernel: ACPI: Interpreter enabled
Feb 9 19:00:17.084234 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:00:17.084243 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:00:17.084253 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:00:17.084264 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:00:17.084271 kernel: iommu: Default domain type: Translated
Feb 9 19:00:17.084281 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:00:17.084289 kernel: vgaarb: loaded
Feb 9 19:00:17.084299 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:00:17.084308 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:00:17.084317 kernel: PTP clock support registered
Feb 9 19:00:17.084326 kernel: Registered efivars operations
Feb 9 19:00:17.084335 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:00:17.084345 kernel: PCI: System does not support PCI
Feb 9 19:00:17.084355 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:00:17.084362 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:00:17.084369 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:00:17.084377 kernel: pnp: PnP ACPI init
Feb 9 19:00:17.084384 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:00:17.084392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:00:17.084399 kernel: NET: Registered PF_INET protocol family
Feb 9 19:00:17.084406 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:17.084416 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:00:17.084423 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:00:17.084431 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:00:17.084439 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:17.084446 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:00:17.084453 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.084463 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.084471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:00:17.084478 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:00:17.084490 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:00:17.084498 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:00:17.084508 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:00:17.084516 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:00:17.084524 kernel: Initialise system trusted keyrings
Feb 9 19:00:17.084533 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:00:17.084540 kernel: Key type asymmetric registered
Feb 9 19:00:17.084549 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:00:17.084558 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:00:17.084567 kernel: io scheduler mq-deadline registered
Feb 9 19:00:17.084578 kernel: io scheduler kyber registered
Feb 9 19:00:17.084585 kernel: io scheduler bfq registered
Feb 9 19:00:17.084595 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:00:17.084602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:00:17.084611 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:00:17.084620 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:00:17.084629 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:00:17.084801 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:00:17.084893 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:16 UTC (1707505216)
Feb 9 19:00:17.084975 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:00:17.084987 kernel: fail to initialize ptp_kvm
Feb 9 19:00:17.084994 kernel: intel_pstate: CPU model not supported
Feb 9 19:00:17.085005 kernel: efifb: probing for efifb
Feb 9 19:00:17.085012 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:00:17.085020 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:00:17.085030 kernel: efifb: scrolling: redraw
Feb 9 19:00:17.085042 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:00:17.085050 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:00:17.085058 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:00:17.085068 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:00:17.085076 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:00:17.085086 kernel: Segment Routing with IPv6
Feb 9 19:00:17.085093 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:00:17.085101 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:00:17.085111 kernel: Key type dns_resolver registered
Feb 9 19:00:17.085122 kernel: IPI shorthand broadcast: enabled
Feb 9 19:00:17.085130 kernel: sched_clock: Marking stable (772833300, 23144900)->(1016982600, -221004400)
Feb 9 19:00:17.085138 kernel: registered taskstats version 1
Feb 9 19:00:17.085148 kernel: Loading compiled-in X.509 certificates
Feb 9 19:00:17.085155 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:00:17.085165 kernel: Key type .fscrypt registered
Feb 9 19:00:17.085173 kernel: Key type fscrypt-provisioning registered
Feb 9 19:00:17.085180 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:00:17.085192 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:00:17.085200 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:00:17.085210 kernel: ima: No architecture policies found
Feb 9 19:00:17.085218 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:00:17.085225 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:00:17.085236 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:00:17.085244 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:00:17.085254 kernel: Run /init as init process
Feb 9 19:00:17.085261 kernel: with arguments:
Feb 9 19:00:17.085269 kernel: /init
Feb 9 19:00:17.085280 kernel: with environment:
Feb 9 19:00:17.085289 kernel: HOME=/
Feb 9 19:00:17.085297 kernel: TERM=linux
Feb 9 19:00:17.085304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:00:17.085316 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:00:17.085328 systemd[1]: Detected virtualization microsoft.
Feb 9 19:00:17.085338 systemd[1]: Detected architecture x86-64.
Feb 9 19:00:17.085347 systemd[1]: Running in initrd.
Feb 9 19:00:17.085355 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:00:17.085363 systemd[1]: Hostname set to .
Feb 9 19:00:17.085374 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:00:17.085381 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:00:17.085389 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:00:17.085398 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:00:17.085407 systemd[1]: Reached target paths.target.
Feb 9 19:00:17.085415 systemd[1]: Reached target slices.target.
Feb 9 19:00:17.085425 systemd[1]: Reached target swap.target.
Feb 9 19:00:17.085435 systemd[1]: Reached target timers.target.
Feb 9 19:00:17.085445 systemd[1]: Listening on iscsid.socket.
Feb 9 19:00:17.085454 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:00:17.085461 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:00:17.085472 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:00:17.085481 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:00:17.085493 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:00:17.085501 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:00:17.085511 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:00:17.085519 systemd[1]: Reached target sockets.target.
Feb 9 19:00:17.085530 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:00:17.085538 systemd[1]: Finished network-cleanup.service.
Feb 9 19:00:17.085546 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:00:17.085556 systemd[1]: Starting systemd-journald.service...
Feb 9 19:00:17.085566 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:00:17.085577 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:00:17.085585 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:00:17.085596 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:00:17.085605 kernel: audit: type=1130 audit(1707505217.058:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.085614 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:00:17.085623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:00:17.085638 systemd-journald[183]: Journal started
Feb 9 19:00:17.085697 systemd-journald[183]: Runtime Journal (/run/log/journal/aa441ca6dd794909950d832b403fa4fd) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:00:17.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.039324 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:00:17.098047 kernel: Bridge firewalling registered
Feb 9 19:00:17.091214 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:00:17.119400 systemd[1]: Started systemd-journald.service.
Feb 9 19:00:17.119433 kernel: audit: type=1130 audit(1707505217.100:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.091228 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:00:17.144151 kernel: SCSI subsystem initialized
Feb 9 19:00:17.144182 kernel: audit: type=1130 audit(1707505217.128:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.091275 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:00:17.098821 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:00:17.176654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:00:17.176692 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:00:17.176711 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:00:17.124910 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:00:17.128962 systemd[1]: Started systemd-resolved.service.
Feb 9 19:00:17.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.184108 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:00:17.201960 kernel: audit: type=1130 audit(1707505217.181:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.199257 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:00:17.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.217697 kernel: audit: type=1130 audit(1707505217.198:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.201787 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:00:17.220982 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:00:17.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.226454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:00:17.247969 kernel: audit: type=1130 audit(1707505217.232:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.230280 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:00:17.233662 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:00:17.245740 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:00:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.262661 kernel: audit: type=1130 audit(1707505217.246:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.276099 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:00:17.288742 kernel: audit: type=1130 audit(1707505217.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.294456 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:00:17.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.298702 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:00:17.313617 kernel: audit: type=1130 audit(1707505217.296:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.319811 dracut-cmdline[205]: dracut-dracut-053
Feb 9 19:00:17.323743 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:17.385671 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:00:17.398679 kernel: iscsi: registered transport (tcp)
Feb 9 19:00:17.424418 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:00:17.424492 kernel: QLogic iSCSI HBA Driver
Feb 9 19:00:17.453891 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:00:17.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.459959 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:00:17.509677 kernel: raid6: avx512x4 gen() 18314 MB/s
Feb 9 19:00:17.529669 kernel: raid6: avx512x4 xor() 8338 MB/s
Feb 9 19:00:17.549665 kernel: raid6: avx512x2 gen() 18310 MB/s
Feb 9 19:00:17.569669 kernel: raid6: avx512x2 xor() 29468 MB/s
Feb 9 19:00:17.589664 kernel: raid6: avx512x1 gen() 18229 MB/s
Feb 9 19:00:17.609665 kernel: raid6: avx512x1 xor() 26861 MB/s
Feb 9 19:00:17.629676 kernel: raid6: avx2x4 gen() 18246 MB/s
Feb 9 19:00:17.649665 kernel: raid6: avx2x4 xor() 7675 MB/s
Feb 9 19:00:17.669664 kernel: raid6: avx2x2 gen() 18276 MB/s
Feb 9 19:00:17.689668 kernel: raid6: avx2x2 xor() 22251 MB/s
Feb 9 19:00:17.709664 kernel: raid6: avx2x1 gen() 13668 MB/s
Feb 9 19:00:17.729664 kernel: raid6: avx2x1 xor() 19160 MB/s
Feb 9 19:00:17.749667 kernel: raid6: sse2x4 gen() 11635 MB/s
Feb 9 19:00:17.769668 kernel: raid6: sse2x4 xor() 7114 MB/s
Feb 9 19:00:17.789664 kernel: raid6: sse2x2 gen() 12631 MB/s
Feb 9 19:00:17.809666 kernel: raid6: sse2x2 xor() 7313 MB/s
Feb 9 19:00:17.829665 kernel: raid6: sse2x1 gen() 11559 MB/s
Feb 9 19:00:17.852784 kernel: raid6: sse2x1 xor() 5832 MB/s
Feb 9 19:00:17.852820 kernel: raid6: using algorithm avx512x4 gen() 18314 MB/s
Feb 9 19:00:17.852833 kernel: raid6: .... xor() 8338 MB/s, rmw enabled
Feb 9 19:00:17.856378 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:00:17.875671 kernel: xor: automatically using best checksumming function avx
Feb 9 19:00:17.971680 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:00:17.980190 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:00:17.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.983000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:00:17.983000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:00:17.985048 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:00:18.000429 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 9 19:00:18.005227 systemd[1]: Started systemd-udevd.service.
Feb 9 19:00:18.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:18.008527 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:00:18.027417 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Feb 9 19:00:18.059122 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:00:18.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:18.064659 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:00:18.099270 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:00:18.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:18.152672 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:00:18.156665 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:00:18.199685 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:00:18.203668 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:00:18.203718 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:00:18.204671 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:00:18.209680 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:00:18.215867 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:00:18.215922 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:00:18.215939 kernel: scsi host0: storvsc_host_t
Feb 9 19:00:18.216182 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:00:18.216683 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:00:18.262672 kernel: scsi host1: storvsc_host_t
Feb 9 19:00:18.272668 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:00:18.297557 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:00:18.297615 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:00:18.310011 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:00:18.310280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:00:18.315669 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:00:18.315864 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:00:18.315985 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:00:18.322229 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:00:18.322402 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:00:18.322529 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:00:18.333666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:18.338667 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:00:18.451258 kernel: hv_netvsc 000d3adc-30ae-000d-3adc-30ae000d3adc eth0: VF slot 1 added
Feb 9 19:00:18.460671 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:00:18.469572 kernel: hv_pci e56079b4-dd9f-4999-a507-c9f6c2ef93ac: PCI VMBus probing: Using version 0x10004
Feb 9 19:00:18.469797 kernel: hv_pci e56079b4-dd9f-4999-a507-c9f6c2ef93ac: PCI host bridge to bus dd9f:00
Feb 9 19:00:18.479514 kernel: pci_bus dd9f:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:00:18.479716 kernel: pci_bus dd9f:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:00:18.489787 kernel: pci dd9f:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:00:18.499831 kernel: pci dd9f:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:00:18.518665 kernel: pci dd9f:00:02.0: enabling Extended Tags
Feb 9 19:00:18.532684 kernel: pci dd9f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at dd9f:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:00:18.541839 kernel: pci_bus dd9f:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:00:18.542031 kernel: pci dd9f:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:00:18.635674 kernel: mlx5_core dd9f:00:02.0: firmware version: 14.30.1224
Feb 9 19:00:18.791235 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:00:18.809766 kernel: mlx5_core dd9f:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:00:18.826672 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (434)
Feb 9 19:00:18.840662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:00:18.951179 kernel: mlx5_core dd9f:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:00:18.951392 kernel: mlx5_core dd9f:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing
Feb 9 19:00:18.963914 kernel: hv_netvsc 000d3adc-30ae-000d-3adc-30ae000d3adc eth0: VF registering: eth1
Feb 9 19:00:18.964153 kernel: mlx5_core dd9f:00:02.0 eth1: joined to eth0
Feb 9 19:00:18.978669 kernel: mlx5_core dd9f:00:02.0 enP56735s1: renamed from eth1
Feb 9 19:00:18.990508 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:00:19.039100 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:00:19.045914 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:00:19.052863 systemd[1]: Starting disk-uuid.service...
Feb 9 19:00:19.065675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:19.074675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:20.082680 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:00:20.083846 disk-uuid[560]: The operation has completed successfully.
Feb 9 19:00:20.150626 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:00:20.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.150748 systemd[1]: Finished disk-uuid.service.
Feb 9 19:00:20.166406 systemd[1]: Starting verity-setup.service...
Feb 9 19:00:20.222678 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:00:20.544013 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:00:20.547862 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:00:20.553980 systemd[1]: Finished verity-setup.service.
Feb 9 19:00:20.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.625951 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:00:20.626394 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:00:20.630587 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:00:20.635235 systemd[1]: Starting ignition-setup.service...
Feb 9 19:00:20.640319 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:00:20.661694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:00:20.661770 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:00:20.661793 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:00:20.708339 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:00:20.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.713000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:00:20.714270 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:00:20.739851 systemd-networkd[799]: lo: Link UP
Feb 9 19:00:20.739861 systemd-networkd[799]: lo: Gained carrier
Feb 9 19:00:20.744758 systemd-networkd[799]: Enumeration completed
Feb 9 19:00:20.745613 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:00:20.746299 systemd[1]: Started systemd-networkd.service.
Feb 9 19:00:20.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.753548 systemd[1]: Reached target network.target.
Feb 9 19:00:20.759814 systemd[1]: Starting iscsiuio.service...
Feb 9 19:00:20.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.768959 systemd[1]: Started iscsiuio.service.
Feb 9 19:00:20.777557 iscsid[810]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:00:20.777557 iscsid[810]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 19:00:20.777557 iscsid[810]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:00:20.777557 iscsid[810]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:00:20.777557 iscsid[810]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:00:20.777557 iscsid[810]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:00:20.777557 iscsid[810]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:00:20.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.772559 systemd[1]: Starting iscsid.service...
Feb 9 19:00:20.775233 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:00:20.777868 systemd[1]: Started iscsid.service.
Feb 9 19:00:20.784298 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:00:20.800447 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:00:20.805247 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:00:20.810746 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:00:20.832669 kernel: mlx5_core dd9f:00:02.0 enP56735s1: Link up
Feb 9 19:00:20.835877 systemd[1]: Reached target remote-fs.target.
Feb 9 19:00:20.841164 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:00:20.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.849846 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:00:20.884440 systemd[1]: Finished ignition-setup.service.
Feb 9 19:00:20.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:20.890231 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:00:20.912920 kernel: hv_netvsc 000d3adc-30ae-000d-3adc-30ae000d3adc eth0: Data path switched to VF: enP56735s1
Feb 9 19:00:20.913164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:00:20.913445 systemd-networkd[799]: enP56735s1: Link UP
Feb 9 19:00:20.913700 systemd-networkd[799]: eth0: Link UP
Feb 9 19:00:20.914179 systemd-networkd[799]: eth0: Gained carrier
Feb 9 19:00:20.924142 systemd-networkd[799]: enP56735s1: Gained carrier
Feb 9 19:00:20.978768 systemd-networkd[799]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:00:22.051791 systemd-networkd[799]: eth0: Gained IPv6LL
Feb 9 19:00:23.956400 ignition[825]: Ignition 2.14.0
Feb 9 19:00:23.956418 ignition[825]: Stage: fetch-offline
Feb 9 19:00:23.956514 ignition[825]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:23.956564 ignition[825]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:24.042469 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:24.042664 ignition[825]: parsed url from cmdline: ""
Feb 9 19:00:24.042668 ignition[825]: no config URL provided
Feb 9 19:00:24.042674 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:00:24.047317 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:00:24.042683 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:00:24.046024 ignition[825]: failed to fetch config: resource requires networking
Feb 9 19:00:24.046426 ignition[825]: Ignition finished successfully
Feb 9 19:00:24.068206 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:00:24.068263 kernel: audit: type=1130 audit(1707505224.062:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.064301 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:00:24.073211 ignition[831]: Ignition 2.14.0
Feb 9 19:00:24.073217 ignition[831]: Stage: fetch
Feb 9 19:00:24.073327 ignition[831]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:24.073355 ignition[831]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:24.076768 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:24.095195 ignition[831]: parsed url from cmdline: ""
Feb 9 19:00:24.095206 ignition[831]: no config URL provided
Feb 9 19:00:24.095215 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:00:24.095230 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:00:24.095274 ignition[831]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:00:24.201552 ignition[831]: GET result: OK
Feb 9 19:00:24.201802 ignition[831]: config has been read from IMDS userdata
Feb 9 19:00:24.204081 ignition[831]: parsing config with SHA512: 24ad09caacc33bede119bd2e5f62c872b8e8b6242a555815412f1375ead9c5954d64faf61e412293a34a8432117c705317d3da8788d47ae0f94c95c6c2ce8a69
Feb 9 19:00:24.234475 unknown[831]: fetched base config from "system"
Feb 9 19:00:24.234811 unknown[831]: fetched base config from "system"
Feb 9 19:00:24.235469 ignition[831]: fetch: fetch complete
Feb 9 19:00:24.234817 unknown[831]: fetched user config from "azure"
Feb 9 19:00:24.235474 ignition[831]: fetch: fetch passed
Feb 9 19:00:24.235514 ignition[831]: Ignition finished successfully
Feb 9 19:00:24.247371 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:00:24.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.250618 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:00:24.267458 kernel: audit: type=1130 audit(1707505224.249:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.276203 ignition[837]: Ignition 2.14.0
Feb 9 19:00:24.276214 ignition[837]: Stage: kargs
Feb 9 19:00:24.276346 ignition[837]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:24.276379 ignition[837]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:24.287210 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:24.288642 ignition[837]: kargs: kargs passed
Feb 9 19:00:24.290645 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:00:24.313701 kernel: audit: type=1130 audit(1707505224.294:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.288696 ignition[837]: Ignition finished successfully
Feb 9 19:00:24.307976 systemd[1]: Starting ignition-disks.service...
Feb 9 19:00:24.316738 ignition[843]: Ignition 2.14.0
Feb 9 19:00:24.316745 ignition[843]: Stage: disks
Feb 9 19:00:24.316856 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:24.316885 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:24.324835 systemd[1]: Finished ignition-disks.service.
Feb 9 19:00:24.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.321661 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:24.353898 kernel: audit: type=1130 audit(1707505224.327:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.328675 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:00:24.323948 ignition[843]: disks: disks passed
Feb 9 19:00:24.343110 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:00:24.323996 ignition[843]: Ignition finished successfully
Feb 9 19:00:24.343205 systemd[1]: Reached target local-fs.target.
Feb 9 19:00:24.343638 systemd[1]: Reached target sysinit.target.
Feb 9 19:00:24.344159 systemd[1]: Reached target basic.target.
Feb 9 19:00:24.351003 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:00:24.408604 systemd-fsck[851]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:00:24.419520 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:00:24.440107 kernel: audit: type=1130 audit(1707505224.422:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:24.423140 systemd[1]: Mounting sysroot.mount...
Feb 9 19:00:24.451681 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:00:24.452181 systemd[1]: Mounted sysroot.mount.
Feb 9 19:00:24.454380 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:00:24.488554 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:00:24.494686 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 19:00:24.500139 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:00:24.500303 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:00:24.510535 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:00:24.545432 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:00:24.551493 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:00:24.561854 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (862)
Feb 9 19:00:24.572368 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:00:24.572424 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:00:24.572444 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:00:24.576844 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:00:24.584295 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:00:24.599275 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:00:24.604073 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:00:24.609389 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:00:25.091481 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:00:25.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.096943 systemd[1]: Starting ignition-mount.service...
Feb 9 19:00:25.109664 kernel: audit: type=1130 audit(1707505225.095:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.114122 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:00:25.117280 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:00:25.117399 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:00:25.142868 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:00:25.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.159689 kernel: audit: type=1130 audit(1707505225.144:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.161893 ignition[928]: INFO : Ignition 2.14.0
Feb 9 19:00:25.161893 ignition[928]: INFO : Stage: mount
Feb 9 19:00:25.166607 ignition[928]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:25.166607 ignition[928]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:25.178722 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:25.178722 ignition[928]: INFO : mount: mount passed
Feb 9 19:00:25.178722 ignition[928]: INFO : Ignition finished successfully
Feb 9 19:00:25.199573 kernel: audit: type=1130 audit(1707505225.178:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:25.174435 systemd[1]: Finished ignition-mount.service.
Feb 9 19:00:26.084747 coreos-metadata[861]: Feb 09 19:00:26.084 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:00:26.106222 coreos-metadata[861]: Feb 09 19:00:26.106 INFO Fetch successful
Feb 9 19:00:26.138792 coreos-metadata[861]: Feb 09 19:00:26.138 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:00:26.152243 coreos-metadata[861]: Feb 09 19:00:26.152 INFO Fetch successful
Feb 9 19:00:26.169853 coreos-metadata[861]: Feb 09 19:00:26.169 INFO wrote hostname ci-3510.3.2-a-21fad7fabd to /sysroot/etc/hostname
Feb 9 19:00:26.176083 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 19:00:26.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.179924 systemd[1]: Starting ignition-files.service...
Feb 9 19:00:26.196504 kernel: audit: type=1130 audit(1707505226.175:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:26.201640 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:00:26.213668 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (940)
Feb 9 19:00:26.223019 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:00:26.223071 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:00:26.223082 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:00:26.232212 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:00:26.245931 ignition[959]: INFO : Ignition 2.14.0
Feb 9 19:00:26.245931 ignition[959]: INFO : Stage: files
Feb 9 19:00:26.250827 ignition[959]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.250827 ignition[959]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.260771 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.320360 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:00:26.323389 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:00:26.323389 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:00:26.397041 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:00:26.401752 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:00:26.423244 unknown[959]: wrote ssh authorized keys file for user: core
Feb 9 19:00:26.426097 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:00:26.426097 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:00:26.426097 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:27.133411 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:00:27.267492 ignition[959]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:00:27.274665 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:00:27.274665 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:27.274665 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:27.454521 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:00:27.560019 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:27.565296 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:00:27.565296 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:00:28.154349 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:00:28.396021 ignition[959]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:00:28.403237 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:00:28.403237 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:00:28.403237 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:00:33.643599 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:00:55.598896 ignition[959]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Feb 9 19:00:55.607896 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:00:55.607896 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:00:55.607896 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:00:56.388670 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:01:20.314277 ignition[959]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 9 19:01:20.321895 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:01:20.321895 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:01:20.321895 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:01:21.128588 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:02:10.000168 ignition[959]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9
19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:02:10.011490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:02:10.085906 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3048594187" Feb 9 19:02:10.085906 ignition[959]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3048594187": device or resource busy Feb 9 19:02:10.085906 ignition[959]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3048594187", trying btrfs: device or resource busy Feb 9 19:02:10.085906 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3048594187" Feb 9 19:02:10.106712 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (961) Feb 9 19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3048594187" Feb 9 19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3048594187" Feb 9 19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3048594187" Feb 9 19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 
19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:02:10.106747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:02:10.142243 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3316409718" Feb 9 19:02:10.147769 ignition[959]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3316409718": device or resource busy Feb 9 19:02:10.147769 ignition[959]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3316409718", trying btrfs: device or resource busy Feb 9 19:02:10.147769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3316409718" Feb 9 19:02:10.147769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3316409718" Feb 9 19:02:10.147769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3316409718" Feb 9 19:02:10.147769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3316409718" Feb 9 19:02:10.147769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:02:10.147769 ignition[959]: INFO : files: op(17): [started] processing unit "waagent.service" Feb 9 19:02:10.147769 ignition[959]: INFO : files: op(17): [finished] processing unit "waagent.service" Feb 9 19:02:10.147769 ignition[959]: INFO : files: op(18): [started] processing 
unit "nvidia.service" Feb 9 19:02:10.147769 ignition[959]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 19:02:10.147769 ignition[959]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:02:10.224500 kernel: audit: type=1130 audit(1707505330.162:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1f): [started] setting preset to enabled for "nvidia.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(1f): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:02:10.224621 ignition[959]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:02:10.347149 kernel: audit: type=1130 audit(1707505330.224:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.347189 kernel: audit: type=1131 audit(1707505330.224:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:10.347209 kernel: audit: type=1130 audit(1707505330.282:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.157205 systemd[1]: Finished ignition-files.service. Feb 9 19:02:10.352542 ignition[959]: INFO : files: op(23): [started] setting preset to enabled for "waagent.service" Feb 9 19:02:10.352542 ignition[959]: INFO : files: op(23): [finished] setting preset to enabled for "waagent.service" Feb 9 19:02:10.352542 ignition[959]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:02:10.352542 ignition[959]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:02:10.352542 ignition[959]: INFO : files: files passed Feb 9 19:02:10.352542 ignition[959]: INFO : Ignition finished successfully Feb 9 19:02:10.401312 kernel: audit: type=1130 audit(1707505330.367:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:10.401345 kernel: audit: type=1131 audit(1707505330.367:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.165827 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:02:10.406006 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:02:10.182825 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:02:10.183729 systemd[1]: Starting ignition-quench.service... Feb 9 19:02:10.215458 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:02:10.215563 systemd[1]: Finished ignition-quench.service. Feb 9 19:02:10.274200 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:02:10.282529 systemd[1]: Reached target ignition-complete.target. Feb 9 19:02:10.347862 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:02:10.363541 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:02:10.363637 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:02:10.367875 systemd[1]: Reached target initrd-fs.target. Feb 9 19:02:10.401303 systemd[1]: Reached target initrd.target. Feb 9 19:02:10.406066 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Feb 9 19:02:10.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.423827 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:02:10.459828 kernel: audit: type=1130 audit(1707505330.444:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.438848 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:02:10.457894 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:02:10.471888 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:02:10.476475 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:02:10.481103 systemd[1]: Stopped target timers.target.
Feb 9 19:02:10.485187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:02:10.488010 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:02:10.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.492661 systemd[1]: Stopped target initrd.target.
Feb 9 19:02:10.506677 kernel: audit: type=1131 audit(1707505330.492:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.510163 systemd[1]: Stopped target basic.target.
Feb 9 19:02:10.514616 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:02:10.573541 kernel: audit: type=1131 audit(1707505330.516:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.573578 kernel: audit: type=1131 audit(1707505330.516:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.517063 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:02:10.517185 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:02:10.578600 ignition[997]: INFO : Ignition 2.14.0
Feb 9 19:02:10.578600 ignition[997]: INFO : Stage: umount
Feb 9 19:02:10.578600 ignition[997]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:02:10.578600 ignition[997]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:02:10.578600 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:02:10.517639 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:02:10.598031 ignition[997]: INFO : umount: umount passed
Feb 9 19:02:10.598031 ignition[997]: INFO : Ignition finished successfully
Feb 9 19:02:10.518249 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:02:10.518287 systemd[1]: Stopped target sysinit.target.
Feb 9 19:02:10.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.518692 systemd[1]: Stopped target local-fs.target.
Feb 9 19:02:10.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.519037 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:02:10.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.519398 systemd[1]: Stopped target swap.target.
Feb 9 19:02:10.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.519828 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:02:10.519901 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:02:10.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.520354 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:02:10.520782 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:02:10.520815 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:02:10.521375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:02:10.521408 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:02:10.521743 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:02:10.521775 systemd[1]: Stopped ignition-files.service.
Feb 9 19:02:10.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.522138 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:02:10.522170 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:02:10.557417 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:02:10.558313 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:02:10.558393 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:02:10.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.559779 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:02:10.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.560266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:02:10.560333 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:02:10.561248 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:02:10.561284 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:02:10.609077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:02:10.609203 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:02:10.613016 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:02:10.613099 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:02:10.617849 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:02:10.617903 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:02:10.621680 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:02:10.621752 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:02:10.623826 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:02:10.623876 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:02:10.628227 systemd[1]: Stopped target network.target.
Feb 9 19:02:10.630370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:02:10.632937 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:02:10.637451 systemd[1]: Stopped target paths.target.
Feb 9 19:02:10.641580 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:02:10.644690 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:02:10.649045 systemd[1]: Stopped target slices.target.
Feb 9 19:02:10.651016 systemd[1]: Stopped target sockets.target.
Feb 9 19:02:10.656284 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:02:10.656345 systemd[1]: Closed iscsid.socket.
Feb 9 19:02:10.660856 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:02:10.660894 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:02:10.665476 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:02:10.665540 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:02:10.669857 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:02:10.674865 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:02:10.679711 systemd-networkd[799]: eth0: DHCPv6 lease lost
Feb 9 19:02:10.724000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:02:10.682044 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:02:10.682149 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:02:10.690166 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:02:10.690551 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:02:10.690660 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:02:10.696997 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:02:10.697034 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:02:10.701828 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:02:10.712954 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:02:10.713036 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:02:10.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.787234 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:02:10.786000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:02:10.787312 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:02:10.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.794027 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:02:10.794094 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:02:10.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.801330 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:02:10.807051 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:02:10.810804 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:02:10.813367 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:02:10.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.817954 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:02:10.820501 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:02:10.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.825736 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:02:10.825795 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:02:10.831555 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:02:10.831606 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:02:10.840913 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:02:10.840984 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:02:10.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.848061 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:02:10.848116 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:02:10.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.853035 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:02:10.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.853085 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:02:10.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.857572 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:02:10.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.857621 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:02:10.860693 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:02:10.872725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:02:10.872785 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:02:10.877910 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:02:10.902871 kernel: hv_netvsc 000d3adc-30ae-000d-3adc-30ae000d3adc eth0: Data path switched from VF: enP56735s1
Feb 9 19:02:10.878009 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:02:10.925606 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:02:10.925751 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:02:10.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:10.930885 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:02:10.935893 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:02:10.950552 systemd[1]: Switching root.
Feb 9 19:02:10.980304 iscsid[810]: iscsid shutting down.
Feb 9 19:02:10.982389 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Feb 9 19:02:10.982440 systemd-journald[183]: Journal stopped
Feb 9 19:02:23.639986 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:02:23.640012 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:02:23.640024 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:02:23.640036 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:02:23.640045 kernel: SELinux: policy capability open_perms=1 Feb 9 19:02:23.640055 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:02:23.640065 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:02:23.640079 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:02:23.640088 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:02:23.640099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:02:23.640108 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:02:23.640120 systemd[1]: Successfully loaded SELinux policy in 279.508ms. Feb 9 19:02:23.640133 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.869ms. Feb 9 19:02:23.640147 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:02:23.640161 systemd[1]: Detected virtualization microsoft. Feb 9 19:02:23.640174 systemd[1]: Detected architecture x86-64. Feb 9 19:02:23.640185 systemd[1]: Detected first boot. Feb 9 19:02:23.640199 systemd[1]: Hostname set to . Feb 9 19:02:23.640212 systemd[1]: Initializing machine ID from random generator. Feb 9 19:02:23.640231 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 9 19:02:23.640243 kernel: kauditd_printk_skb: 39 callbacks suppressed Feb 9 19:02:23.640257 kernel: audit: type=1400 audit(1707505335.265:87): avc: denied { associate } for pid=1030 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:02:23.640273 kernel: audit: type=1300 audit(1707505335.265:87): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:23.640289 kernel: audit: type=1327 audit(1707505335.265:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:23.640308 kernel: audit: type=1400 audit(1707505335.274:88): avc: denied { associate } for pid=1030 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:02:23.640324 kernel: audit: type=1300 audit(1707505335.274:88): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:23.640338 kernel: audit: type=1307 audit(1707505335.274:88): cwd="/" Feb 9 19:02:23.640352 kernel: audit: type=1302 audit(1707505335.274:88): 
item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:23.640363 kernel: audit: type=1302 audit(1707505335.274:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:23.640375 kernel: audit: type=1327 audit(1707505335.274:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:23.640389 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:02:23.640400 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:02:23.640413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:02:23.640424 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 19:02:23.640436 kernel: audit: type=1334 audit(1707505343.125:89): prog-id=12 op=LOAD Feb 9 19:02:23.640445 kernel: audit: type=1334 audit(1707505343.125:90): prog-id=3 op=UNLOAD Feb 9 19:02:23.640456 kernel: audit: type=1334 audit(1707505343.130:91): prog-id=13 op=LOAD Feb 9 19:02:23.640465 kernel: audit: type=1334 audit(1707505343.140:92): prog-id=14 op=LOAD Feb 9 19:02:23.640478 kernel: audit: type=1334 audit(1707505343.140:93): prog-id=4 op=UNLOAD Feb 9 19:02:23.640487 kernel: audit: type=1334 audit(1707505343.140:94): prog-id=5 op=UNLOAD Feb 9 19:02:23.640501 kernel: audit: type=1334 audit(1707505343.151:95): prog-id=15 op=LOAD Feb 9 19:02:23.640510 kernel: audit: type=1334 audit(1707505343.151:96): prog-id=12 op=UNLOAD Feb 9 19:02:23.640523 kernel: audit: type=1334 audit(1707505343.166:97): prog-id=16 op=LOAD Feb 9 19:02:23.640533 kernel: audit: type=1334 audit(1707505343.171:98): prog-id=17 op=LOAD Feb 9 19:02:23.640543 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:02:23.640553 systemd[1]: Stopped iscsiuio.service. Feb 9 19:02:23.641419 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:02:23.641437 systemd[1]: Stopped iscsid.service. Feb 9 19:02:23.641451 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:02:23.641462 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:02:23.641473 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:02:23.641485 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:02:23.641497 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:02:23.641509 systemd[1]: Created slice system-getty.slice. Feb 9 19:02:23.641519 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:02:23.641534 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:02:23.641546 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:02:23.641558 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Feb 9 19:02:23.641568 systemd[1]: Created slice user.slice. Feb 9 19:02:23.641580 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:02:23.641593 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:02:23.641603 systemd[1]: Set up automount boot.automount. Feb 9 19:02:23.641616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:02:23.641629 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:02:23.641641 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:02:23.641678 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:02:23.641689 systemd[1]: Reached target integritysetup.target. Feb 9 19:02:23.641699 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:02:23.641711 systemd[1]: Reached target remote-fs.target. Feb 9 19:02:23.641724 systemd[1]: Reached target slices.target. Feb 9 19:02:23.641734 systemd[1]: Reached target swap.target. Feb 9 19:02:23.641749 systemd[1]: Reached target torcx.target. Feb 9 19:02:23.641762 systemd[1]: Reached target veritysetup.target. Feb 9 19:02:23.642095 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:02:23.642106 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:02:23.642116 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:02:23.642131 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:02:23.642144 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:02:23.642154 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:02:23.642167 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:02:23.642180 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:02:23.642191 systemd[1]: Mounting media.mount... Feb 9 19:02:23.642204 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:23.642217 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:02:23.642227 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 9 19:02:23.642241 systemd[1]: Mounting tmp.mount... Feb 9 19:02:23.642252 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:02:23.642265 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:02:23.642275 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:02:23.642288 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:02:23.642300 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:02:23.642312 systemd[1]: Starting modprobe@drm.service... Feb 9 19:02:23.642323 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:02:23.642335 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:02:23.642349 systemd[1]: Starting modprobe@loop.service... Feb 9 19:02:23.642360 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:02:23.642372 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:02:23.642385 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:02:23.642396 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:02:23.642407 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:02:23.642418 systemd[1]: Stopped systemd-journald.service. Feb 9 19:02:23.642431 systemd[1]: Starting systemd-journald.service... Feb 9 19:02:23.642443 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:02:23.642456 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:02:23.642469 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:02:23.642479 kernel: loop: module loaded Feb 9 19:02:23.642488 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:02:23.642501 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:02:23.642512 systemd[1]: Stopped verity-setup.service. Feb 9 19:02:23.642524 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 9 19:02:23.642535 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:02:23.642550 kernel: fuse: init (API version 7.34) Feb 9 19:02:23.642562 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:02:23.642573 systemd[1]: Mounted media.mount. Feb 9 19:02:23.642584 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:02:23.642601 systemd-journald[1127]: Journal started Feb 9 19:02:23.642665 systemd-journald[1127]: Runtime Journal (/run/log/journal/a723b8ae7c7048119083eefa65308600) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:02:13.323000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:02:14.023000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:02:14.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:02:14.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:02:14.040000 audit: BPF prog-id=10 op=LOAD Feb 9 19:02:14.040000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:02:14.040000 audit: BPF prog-id=11 op=LOAD Feb 9 19:02:14.040000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:02:15.265000 audit[1030]: AVC avc: denied { associate } for pid=1030 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:02:15.265000 audit[1030]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:15.265000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:15.274000 audit[1030]: AVC avc: denied { associate } for pid=1030 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:02:15.274000 audit[1030]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:15.274000 audit: CWD cwd="/" Feb 9 19:02:15.274000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:15.274000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:15.274000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:23.125000 audit: BPF prog-id=12 op=LOAD Feb 9 19:02:23.125000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:02:23.130000 audit: BPF prog-id=13 
op=LOAD Feb 9 19:02:23.140000 audit: BPF prog-id=14 op=LOAD Feb 9 19:02:23.140000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:02:23.140000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:02:23.151000 audit: BPF prog-id=15 op=LOAD Feb 9 19:02:23.151000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:02:23.166000 audit: BPF prog-id=16 op=LOAD Feb 9 19:02:23.171000 audit: BPF prog-id=17 op=LOAD Feb 9 19:02:23.171000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:02:23.171000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:02:23.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.188000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:02:23.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:23.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.541000 audit: BPF prog-id=18 op=LOAD Feb 9 19:02:23.541000 audit: BPF prog-id=19 op=LOAD Feb 9 19:02:23.541000 audit: BPF prog-id=20 op=LOAD Feb 9 19:02:23.542000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:02:23.542000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:02:23.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:23.636000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:02:23.636000 audit[1127]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe82042650 a2=4000 a3=7ffe820426ec items=0 ppid=1 pid=1127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:23.636000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:02:15.249058 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:02:23.124514 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:02:15.249771 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:02:23.176759 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 19:02:15.249793 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:02:15.249833 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:02:15.249844 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:02:15.249890 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:02:15.249904 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:02:15.250114 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:02:15.250169 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:02:15.250185 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:02:15.250794 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:02:15.250833 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:02:15.250853 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:02:15.250870 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:02:15.250889 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:02:15.250905 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:02:22.008007 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:22.008238 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:22.008338 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 
19:02:22.008502 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:22.008548 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:02:22.008601 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-09T19:02:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:02:23.651961 systemd[1]: Started systemd-journald.service. Feb 9 19:02:23.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.652876 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:02:23.655489 systemd[1]: Mounted tmp.mount. Feb 9 19:02:23.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.657958 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:02:23.660716 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:02:23.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:02:23.663462 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:02:23.663583 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:02:23.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.666065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:02:23.666231 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:02:23.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.668722 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:02:23.668865 systemd[1]: Finished modprobe@drm.service. Feb 9 19:02:23.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.671353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 9 19:02:23.671495 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:02:23.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.674080 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:02:23.674218 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:02:23.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.676566 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:02:23.676830 systemd[1]: Finished modprobe@loop.service. Feb 9 19:02:23.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.679439 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 19:02:23.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.682258 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:02:23.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:23.685109 systemd[1]: Reached target network-pre.target. Feb 9 19:02:23.688623 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:02:23.692450 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:02:23.696492 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:02:23.713381 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:02:23.717846 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:02:23.720866 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:02:23.722615 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:02:23.725350 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:02:23.727180 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:02:23.735203 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:02:23.738088 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:02:23.753730 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:02:23.755675 systemd-journald[1127]: Time spent on flushing to /var/log/journal/a723b8ae7c7048119083eefa65308600 is 26.575ms for 1188 entries. Feb 9 19:02:23.755675 systemd-journald[1127]: System Journal (/var/log/journal/a723b8ae7c7048119083eefa65308600) is 8.0M, max 2.6G, 2.6G free. 
Feb 9 19:02:23.843390 systemd-journald[1127]: Received client request to flush runtime journal.
Feb 9 19:02:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:23.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:23.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:23.762098 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:02:23.778538 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:02:23.844730 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:02:23.781578 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:02:23.793817 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:02:23.797643 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:02:23.844498 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:02:23.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:23.850118 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:02:23.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:24.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:24.381341 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:02:24.926325 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:02:24.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:24.929000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:02:24.929000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:02:24.929000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:02:24.929000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:02:24.930738 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:02:24.948836 systemd-udevd[1156]: Using default interface naming scheme 'v252'.
Feb 9 19:02:25.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:25.190000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:02:25.186190 systemd[1]: Started systemd-udevd.service.
Feb 9 19:02:25.192077 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:02:25.242796 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:02:25.293687 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:02:25.294000 audit[1163]: AVC avc: denied { confidentiality } for pid=1163 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:02:25.313914 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:02:25.313999 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:02:25.314052 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:02:25.322685 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:02:25.332516 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:02:25.332566 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:02:25.332590 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:02:26.200226 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:02:26.200313 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:02:26.195000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:02:26.199000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:02:26.200000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:02:26.205043 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:02:26.207867 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:02:26.207932 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:02:26.239833 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:02:25.294000 audit[1163]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558dc4917620 a1=f884 a2=7f816e06dbc5 a3=5 items=12 ppid=1156 pid=1163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:25.294000 audit: CWD cwd="/"
Feb 9 19:02:25.294000 audit: PATH item=0 name=(null) inode=237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=1 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=2 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=3 name=(null) inode=15694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=4 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=5 name=(null) inode=15695 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=6 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=7 name=(null) inode=15696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=8 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=9 name=(null) inode=15697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=10 name=(null) inode=15693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PATH item=11 name=(null) inode=15698 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:25.294000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:02:26.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.269073 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:02:26.421920 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1167)
Feb 9 19:02:26.450836 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:02:26.493269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:02:26.545211 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:02:26.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.549018 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:02:26.612528 systemd-networkd[1169]: lo: Link UP
Feb 9 19:02:26.612540 systemd-networkd[1169]: lo: Gained carrier
Feb 9 19:02:26.613292 systemd-networkd[1169]: Enumeration completed
Feb 9 19:02:26.613428 systemd[1]: Started systemd-networkd.service.
Feb 9 19:02:26.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.617947 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:02:26.643624 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:02:26.697836 kernel: mlx5_core dd9f:00:02.0 enP56735s1: Link up
Feb 9 19:02:26.733841 kernel: hv_netvsc 000d3adc-30ae-000d-3adc-30ae000d3adc eth0: Data path switched to VF: enP56735s1
Feb 9 19:02:26.735186 systemd-networkd[1169]: enP56735s1: Link UP
Feb 9 19:02:26.735328 systemd-networkd[1169]: eth0: Link UP
Feb 9 19:02:26.735332 systemd-networkd[1169]: eth0: Gained carrier
Feb 9 19:02:26.740705 systemd-networkd[1169]: enP56735s1: Gained carrier
Feb 9 19:02:26.763945 systemd-networkd[1169]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:02:26.915582 lvm[1233]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:26.944970 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:02:26.948035 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:02:26.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.951579 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:02:26.958085 lvm[1236]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:02:26.980937 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:02:26.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:26.983697 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:02:26.985928 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:02:26.985959 systemd[1]: Reached target local-fs.target.
Feb 9 19:02:26.988066 systemd[1]: Reached target machines.target.
Feb 9 19:02:26.991482 systemd[1]: Starting ldconfig.service...
Feb 9 19:02:26.993910 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:02:26.994024 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:26.995314 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:02:26.998765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:02:27.002801 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:02:27.005400 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:27.005496 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:02:27.006825 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:02:27.036436 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1238 (bootctl)
Feb 9 19:02:27.037939 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:02:27.072445 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:02:27.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.443568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:02:27.444148 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:02:27.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.458037 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:02:27.555115 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:02:27.626998 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:02:27.839101 systemd-networkd[1169]: eth0: Gained IPv6LL
Feb 9 19:02:27.845892 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:02:27.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.929415 systemd-fsck[1246]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:02:27.929415 systemd-fsck[1246]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:02:27.931681 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:02:27.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.936913 systemd[1]: Mounting boot.mount...
Feb 9 19:02:27.951115 systemd[1]: Mounted boot.mount.
Feb 9 19:02:27.965522 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:02:27.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.874640 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:02:28.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.879074 systemd[1]: Starting audit-rules.service...
Feb 9 19:02:28.882855 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:02:28.886687 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:02:28.889000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:02:28.892139 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:02:28.894000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:02:28.896491 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:02:28.901217 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:02:28.927000 audit[1260]: SYSTEM_BOOT pid=1260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.933553 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:02:28.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.963652 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:02:28.967093 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:02:28.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.982777 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:02:28.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:28.985539 systemd[1]: Reached target time-set.target.
Feb 9 19:02:29.017997 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:02:29.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:29.024056 kernel: kauditd_printk_skb: 87 callbacks suppressed
Feb 9 19:02:29.024118 kernel: audit: type=1130 audit(1707505349.020:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:29.092394 systemd-resolved[1256]: Positive Trust Anchors:
Feb 9 19:02:29.092409 systemd-resolved[1256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:02:29.092448 systemd-resolved[1256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:02:29.115969 systemd-timesyncd[1257]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
Feb 9 19:02:29.116098 systemd-timesyncd[1257]: Initial clock synchronization to Fri 2024-02-09 19:02:29.116306 UTC.
Feb 9 19:02:29.199000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:02:29.201645 systemd[1]: Finished audit-rules.service.
Feb 9 19:02:29.202148 augenrules[1273]: No rules
Feb 9 19:02:29.224562 kernel: audit: type=1305 audit(1707505349.199:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:02:29.224684 kernel: audit: type=1300 audit(1707505349.199:170): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf9fd8360 a2=420 a3=0 items=0 ppid=1252 pid=1273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:29.224709 kernel: audit: type=1327 audit(1707505349.199:170): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:02:29.199000 audit[1273]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf9fd8360 a2=420 a3=0 items=0 ppid=1252 pid=1273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:29.199000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:02:29.262690 systemd-resolved[1256]: Using system hostname 'ci-3510.3.2-a-21fad7fabd'.
Feb 9 19:02:29.264315 systemd[1]: Started systemd-resolved.service.
Feb 9 19:02:29.267241 systemd[1]: Reached target network.target.
Feb 9 19:02:29.269667 systemd[1]: Reached target network-online.target.
Feb 9 19:02:29.272283 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:02:34.388272 ldconfig[1237]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:02:34.397626 systemd[1]: Finished ldconfig.service.
Feb 9 19:02:34.401695 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:02:34.408405 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:02:34.411275 systemd[1]: Reached target sysinit.target.
Feb 9 19:02:34.413546 systemd[1]: Started motdgen.path.
Feb 9 19:02:34.415410 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:02:34.418523 systemd[1]: Started logrotate.timer.
Feb 9 19:02:34.420649 systemd[1]: Started mdadm.timer.
Feb 9 19:02:34.422524 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:02:34.424922 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:02:34.424962 systemd[1]: Reached target paths.target.
Feb 9 19:02:34.427132 systemd[1]: Reached target timers.target.
Feb 9 19:02:34.429559 systemd[1]: Listening on dbus.socket.
Feb 9 19:02:34.432763 systemd[1]: Starting docker.socket...
Feb 9 19:02:34.437371 systemd[1]: Listening on sshd.socket.
Feb 9 19:02:34.439724 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:34.440261 systemd[1]: Listening on docker.socket.
Feb 9 19:02:34.442456 systemd[1]: Reached target sockets.target.
Feb 9 19:02:34.444596 systemd[1]: Reached target basic.target.
Feb 9 19:02:34.446662 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:34.446688 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:02:34.447907 systemd[1]: Starting containerd.service...
Feb 9 19:02:34.451385 systemd[1]: Starting dbus.service...
Feb 9 19:02:34.454457 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:02:34.458156 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:02:34.460613 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:02:34.462201 systemd[1]: Starting motdgen.service...
Feb 9 19:02:34.468863 systemd[1]: Started nvidia.service.
Feb 9 19:02:34.472160 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:02:34.475406 systemd[1]: Starting prepare-critools.service...
Feb 9 19:02:34.478534 systemd[1]: Starting prepare-helm.service...
Feb 9 19:02:34.481622 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:02:34.485035 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:02:34.490686 systemd[1]: Starting systemd-logind.service...
Feb 9 19:02:34.492688 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:02:34.492762 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:02:34.493368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:02:34.495064 systemd[1]: Starting update-engine.service...
Feb 9 19:02:34.498096 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:02:34.508902 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:02:34.509134 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda1
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda2
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda3
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found usr
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda4
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda6
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda7
Feb 9 19:02:34.553198 extend-filesystems[1284]: Found sda9
Feb 9 19:02:34.553198 extend-filesystems[1284]: Checking size of /dev/sda9
Feb 9 19:02:34.588422 jq[1298]: true
Feb 9 19:02:34.564121 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:02:34.588713 jq[1283]: false
Feb 9 19:02:34.588865 jq[1315]: true
Feb 9 19:02:34.564339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:02:34.581730 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:02:34.581988 systemd[1]: Finished motdgen.service.
Feb 9 19:02:34.597832 tar[1301]: ./
Feb 9 19:02:34.597832 tar[1301]: ./loopback
Feb 9 19:02:34.600804 tar[1303]: linux-amd64/helm
Feb 9 19:02:34.603258 tar[1302]: crictl
Feb 9 19:02:34.616651 systemd-logind[1295]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:02:34.618060 systemd-logind[1295]: New seat seat0.
Feb 9 19:02:34.641518 extend-filesystems[1284]: Old size kept for /dev/sda9
Feb 9 19:02:34.644248 extend-filesystems[1284]: Found sr0
Feb 9 19:02:34.646397 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:02:34.646614 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:02:34.756929 tar[1301]: ./bandwidth
Feb 9 19:02:34.771234 bash[1336]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:02:34.771929 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:02:34.786642 env[1316]: time="2024-02-09T19:02:34.786579646Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:02:34.826682 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:02:34.837135 dbus-daemon[1282]: [system] SELinux support is enabled
Feb 9 19:02:34.837318 systemd[1]: Started dbus.service.
Feb 9 19:02:34.841868 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:02:34.841909 systemd[1]: Reached target system-config.target.
Feb 9 19:02:34.844484 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:02:34.844513 systemd[1]: Reached target user-config.target.
Feb 9 19:02:34.848167 systemd[1]: Started systemd-logind.service.
Feb 9 19:02:34.848633 dbus-daemon[1282]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 19:02:34.879234 tar[1301]: ./ptp
Feb 9 19:02:34.916115 env[1316]: time="2024-02-09T19:02:34.916002838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:02:34.916243 env[1316]: time="2024-02-09T19:02:34.916196842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.917698 env[1316]: time="2024-02-09T19:02:34.917654772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:34.917698 env[1316]: time="2024-02-09T19:02:34.917696373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.917982179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.918007780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.918026580Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.918040580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.918134382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918549 env[1316]: time="2024-02-09T19:02:34.918379787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918818 env[1316]: time="2024-02-09T19:02:34.918552491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:02:34.918818 env[1316]: time="2024-02-09T19:02:34.918573491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:02:34.918818 env[1316]: time="2024-02-09T19:02:34.918634793Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:02:34.918818 env[1316]: time="2024-02-09T19:02:34.918651193Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:02:34.953651 env[1316]: time="2024-02-09T19:02:34.953603320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:02:34.953801 env[1316]: time="2024-02-09T19:02:34.953658321Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:02:34.953801 env[1316]: time="2024-02-09T19:02:34.953675421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:02:34.953801 env[1316]: time="2024-02-09T19:02:34.953733423Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953801 env[1316]: time="2024-02-09T19:02:34.953751623Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953770623Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953863525Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953893226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953913426Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953932127Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953949427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.953981 env[1316]: time="2024-02-09T19:02:34.953967627Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:02:34.954213 env[1316]: time="2024-02-09T19:02:34.954119531Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:02:34.954253 env[1316]: time="2024-02-09T19:02:34.954215933Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:02:34.954661 env[1316]: time="2024-02-09T19:02:34.954627441Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:02:34.954737 env[1316]: time="2024-02-09T19:02:34.954679242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.954737 env[1316]: time="2024-02-09T19:02:34.954698743Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:02:34.954831 env[1316]: time="2024-02-09T19:02:34.954780244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.954893 env[1316]: time="2024-02-09T19:02:34.954804945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.954933 env[1316]: time="2024-02-09T19:02:34.954901847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.954933 env[1316]: time="2024-02-09T19:02:34.954920847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955021 env[1316]: time="2024-02-09T19:02:34.954939348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955021 env[1316]: time="2024-02-09T19:02:34.954956948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955021 env[1316]: time="2024-02-09T19:02:34.954973648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955021 env[1316]: time="2024-02-09T19:02:34.954989649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955021 env[1316]: time="2024-02-09T19:02:34.955009849Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:02:34.955200 env[1316]: time="2024-02-09T19:02:34.955169852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955200 env[1316]: time="2024-02-09T19:02:34.955191953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955275 env[1316]: time="2024-02-09T19:02:34.955209453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955275 env[1316]: time="2024-02-09T19:02:34.955226654Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:02:34.955275 env[1316]: time="2024-02-09T19:02:34.955247254Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:02:34.955275 env[1316]: time="2024-02-09T19:02:34.955263654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:02:34.955465 env[1316]: time="2024-02-09T19:02:34.955287755Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:02:34.955465 env[1316]: time="2024-02-09T19:02:34.955330856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:02:34.955665 env[1316]: time="2024-02-09T19:02:34.955595161Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.955681263Z" level=info msg="Connect containerd service" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.955727764Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.956459779Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.956767886Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.956838187Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.960381761Z" level=info msg="containerd successfully booted in 0.220713s" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.960723868Z" level=info msg="Start subscribing containerd event" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.960794569Z" level=info msg="Start recovering state" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.961323880Z" level=info msg="Start event monitor" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.961351381Z" level=info msg="Start snapshots syncer" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.961366581Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:02:34.992061 env[1316]: time="2024-02-09T19:02:34.961478484Z" level=info msg="Start streaming server" Feb 9 19:02:34.956975 systemd[1]: Started containerd.service. Feb 9 19:02:35.006962 tar[1301]: ./vlan Feb 9 19:02:35.122500 tar[1301]: ./host-device Feb 9 19:02:35.211721 tar[1301]: ./tuning Feb 9 19:02:35.296607 tar[1301]: ./vrf Feb 9 19:02:35.379138 tar[1301]: ./sbr Feb 9 19:02:35.390050 update_engine[1297]: I0209 19:02:35.389461 1297 main.cc:92] Flatcar Update Engine starting Feb 9 19:02:35.439663 systemd[1]: Started update-engine.service. Feb 9 19:02:35.444189 update_engine[1297]: I0209 19:02:35.439707 1297 update_check_scheduler.cc:74] Next update check in 3m59s Feb 9 19:02:35.445176 systemd[1]: Started locksmithd.service. Feb 9 19:02:35.454122 tar[1301]: ./tap Feb 9 19:02:35.546604 tar[1301]: ./dhcp Feb 9 19:02:35.776272 tar[1301]: ./static Feb 9 19:02:35.791982 tar[1303]: linux-amd64/LICENSE Feb 9 19:02:35.792358 tar[1303]: linux-amd64/README.md Feb 9 19:02:35.805608 systemd[1]: Finished prepare-helm.service. Feb 9 19:02:35.823410 tar[1301]: ./firewall Feb 9 19:02:35.873335 tar[1301]: ./macvlan Feb 9 19:02:35.919267 tar[1301]: ./dummy Feb 9 19:02:35.935090 systemd[1]: Finished prepare-critools.service. 
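The `Start cri plugin with config {...}` dump earlier in the log is the CRI plugin echoing its effective configuration at startup. A sketch of the `/etc/containerd/config.toml` fragment that would produce those values — field names assume a containerd 1.6-era version-2 schema and are not taken from this host's actual file:

```toml
# Fragment mirroring the "Start cri plugin" dump above (assumed v2 schema).
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  max_container_log_line_size = 16384

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

  # Runtimes:map[runc:{Type:io.containerd.runc.v2 ... Options:map[SystemdCgroup:true]}]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

  # CniConfig; the later "failed to load cni during init" error means this
  # conf_dir was still empty at containerd start, which is expected pre-CNI-setup.
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```

The CNI load failure logged just after is therefore benign at this stage: the kubelet/CNI plugins populate `/etc/cni/net.d` later, and the conf syncer ("Start cni network conf syncer for default") picks the config up.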
Feb 9 19:02:35.966767 tar[1301]: ./bridge Feb 9 19:02:36.015703 tar[1301]: ./ipvlan Feb 9 19:02:36.030448 sshd_keygen[1307]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:02:36.061829 tar[1301]: ./portmap Feb 9 19:02:36.062768 systemd[1]: Finished sshd-keygen.service. Feb 9 19:02:36.067281 systemd[1]: Starting issuegen.service... Feb 9 19:02:36.071325 systemd[1]: Started waagent.service. Feb 9 19:02:36.083645 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:02:36.083869 systemd[1]: Finished issuegen.service. Feb 9 19:02:36.088488 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:02:36.103289 tar[1301]: ./host-local Feb 9 19:02:36.127178 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:02:36.132055 systemd[1]: Started getty@tty1.service. Feb 9 19:02:36.136474 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:02:36.139271 systemd[1]: Reached target getty.target. Feb 9 19:02:36.187118 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:02:36.196443 systemd[1]: Reached target multi-user.target. Feb 9 19:02:36.200956 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:02:36.208845 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:02:36.209036 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:02:36.212312 systemd[1]: Startup finished in 1.009s (firmware) + 27.757s (loader) + 934ms (kernel) + 1min 56.081s (initrd) + 22.552s (userspace) = 2min 48.334s. Feb 9 19:02:36.509536 login[1404]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 19:02:36.511836 login[1405]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:36.535732 systemd[1]: Created slice user-500.slice. Feb 9 19:02:36.537401 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:02:36.539880 systemd-logind[1295]: New session 1 of user core. 
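The `Startup finished` line reports per-phase durations and a grand total. A quick arithmetic check with the values copied from the log (systemd rounds each phase to the millisecond, so the sum can differ from the printed total by about 1 ms):

```python
# Sum the per-phase startup durations from the systemd "Startup finished" line.
phases = {
    "firmware": 1.009,
    "loader": 27.757,
    "kernel": 0.934,
    "initrd": 60 + 56.081,   # "1min 56.081s"
    "userspace": 22.552,
}
total = sum(phases.values())
# Log reports "2min 48.334s" = 168.334s; the 1 ms gap is per-phase rounding.
print(f"{total:.3f}s")
```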
Feb 9 19:02:36.580408 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:02:36.582175 systemd[1]: Starting user@500.service... Feb 9 19:02:36.599785 (systemd)[1416]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:36.716053 systemd[1416]: Queued start job for default target default.target. Feb 9 19:02:36.716663 systemd[1416]: Reached target paths.target. Feb 9 19:02:36.716691 systemd[1416]: Reached target sockets.target. Feb 9 19:02:36.716708 systemd[1416]: Reached target timers.target. Feb 9 19:02:36.716722 systemd[1416]: Reached target basic.target. Feb 9 19:02:36.716779 systemd[1416]: Reached target default.target. Feb 9 19:02:36.716828 systemd[1416]: Startup finished in 110ms. Feb 9 19:02:36.716876 systemd[1]: Started user@500.service. Feb 9 19:02:36.718218 systemd[1]: Started session-1.scope. Feb 9 19:02:37.409061 locksmithd[1388]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:02:37.510029 login[1404]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:37.515676 systemd[1]: Started session-2.scope. Feb 9 19:02:37.516195 systemd-logind[1295]: New session 2 of user core. 
Feb 9 19:02:42.557953 waagent[1399]: 2024-02-09T19:02:42.557805Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:02:42.562361 waagent[1399]: 2024-02-09T19:02:42.562269Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:02:42.565271 waagent[1399]: 2024-02-09T19:02:42.565193Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:02:42.568260 waagent[1399]: 2024-02-09T19:02:42.568162Z INFO Daemon Daemon Run daemon Feb 9 19:02:42.570719 waagent[1399]: 2024-02-09T19:02:42.570649Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:02:42.583500 waagent[1399]: 2024-02-09T19:02:42.583365Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:02:42.591482 waagent[1399]: 2024-02-09T19:02:42.591347Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:42.595528 waagent[1399]: 2024-02-09T19:02:42.591833Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:42.595528 waagent[1399]: 2024-02-09T19:02:42.592686Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:02:42.595528 waagent[1399]: 2024-02-09T19:02:42.594147Z INFO Daemon Daemon Activate resource disk Feb 9 19:02:42.595528 waagent[1399]: 2024-02-09T19:02:42.595085Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:02:42.622924 waagent[1399]: 2024-02-09T19:02:42.602410Z INFO Daemon Daemon Found device: None Feb 9 19:02:42.622924 waagent[1399]: 2024-02-09T19:02:42.603148Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:02:42.622924 waagent[1399]: 2024-02-09T19:02:42.604032Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 
19:02:42.622924 waagent[1399]: 2024-02-09T19:02:42.605807Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:42.622924 waagent[1399]: 2024-02-09T19:02:42.606860Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:02:42.625672 waagent[1399]: 2024-02-09T19:02:42.625525Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:02:42.632760 waagent[1399]: 2024-02-09T19:02:42.632626Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:42.642252 waagent[1399]: 2024-02-09T19:02:42.633225Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:42.642252 waagent[1399]: 2024-02-09T19:02:42.634613Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:02:42.766898 waagent[1399]: 2024-02-09T19:02:42.766710Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:02:42.890233 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:02:42.912295 waagent[1399]: 2024-02-09T19:02:42.912158Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:02:42.915359 waagent[1399]: 2024-02-09T19:02:42.915270Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:42.918963 waagent[1399]: 2024-02-09T19:02:42.918882Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 19:02:42.922554 waagent[1399]: 2024-02-09T19:02:42.922481Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:02:42.925743 waagent[1399]: 2024-02-09T19:02:42.925668Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:02:42.927491 waagent[1399]: 2024-02-09T19:02:42.927425Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:02:43.052047 waagent[1399]: 2024-02-09T19:02:43.051964Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:02:43.061769 waagent[1399]: 2024-02-09T19:02:43.052917Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:02:43.061769 waagent[1399]: 2024-02-09T19:02:43.054127Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:02:43.343086 waagent[1399]: 2024-02-09T19:02:43.342871Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:02:43.354441 waagent[1399]: 2024-02-09T19:02:43.354345Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 19:02:43.358070 waagent[1399]: 2024-02-09T19:02:43.357988Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:02:43.441566 waagent[1399]: 2024-02-09T19:02:43.441431Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:43.452461 waagent[1399]: 2024-02-09T19:02:43.441933Z INFO Daemon Daemon Certificate with thumbprint AFD7683C7805545F10FE5873F3897506C3BAA328 has no matching private key. 
Feb 9 19:02:43.452461 waagent[1399]: 2024-02-09T19:02:43.443043Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:02:43.465576 waagent[1399]: 2024-02-09T19:02:43.465498Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f17ccde5-0cc9-4fb8-86b3-3cb662445b80 New eTag: 14524515594534758184] Feb 9 19:02:43.473773 waagent[1399]: 2024-02-09T19:02:43.466346Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:43.476051 waagent[1399]: 2024-02-09T19:02:43.475995Z INFO Daemon Daemon Starting provisioning Feb 9 19:02:43.483184 waagent[1399]: 2024-02-09T19:02:43.476368Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:02:43.483184 waagent[1399]: 2024-02-09T19:02:43.477462Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-21fad7fabd] Feb 9 19:02:43.498866 waagent[1399]: 2024-02-09T19:02:43.498718Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-21fad7fabd] Feb 9 19:02:43.506958 waagent[1399]: 2024-02-09T19:02:43.499466Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:02:43.506958 waagent[1399]: 2024-02-09T19:02:43.500510Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:02:43.513599 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:02:43.513873 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:02:43.513947 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:02:43.514303 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:02:43.517872 systemd-networkd[1169]: eth0: DHCPv6 lease lost Feb 9 19:02:43.519164 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:02:43.519323 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:02:43.521663 systemd[1]: Starting systemd-networkd.service... 
Feb 9 19:02:43.552689 systemd-networkd[1460]: enP56735s1: Link UP Feb 9 19:02:43.552700 systemd-networkd[1460]: enP56735s1: Gained carrier Feb 9 19:02:43.554273 systemd-networkd[1460]: eth0: Link UP Feb 9 19:02:43.554282 systemd-networkd[1460]: eth0: Gained carrier Feb 9 19:02:43.554709 systemd-networkd[1460]: lo: Link UP Feb 9 19:02:43.554719 systemd-networkd[1460]: lo: Gained carrier Feb 9 19:02:43.555118 systemd-networkd[1460]: eth0: Gained IPv6LL Feb 9 19:02:43.555403 systemd-networkd[1460]: Enumeration completed Feb 9 19:02:43.555533 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:43.557899 waagent[1399]: 2024-02-09T19:02:43.557128Z INFO Daemon Daemon Create user account if not exists Feb 9 19:02:43.559480 waagent[1399]: 2024-02-09T19:02:43.557807Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:02:43.559480 waagent[1399]: 2024-02-09T19:02:43.558356Z INFO Daemon Daemon Configure sudoer Feb 9 19:02:43.560037 waagent[1399]: 2024-02-09T19:02:43.559976Z INFO Daemon Daemon Configure sshd Feb 9 19:02:43.560868 waagent[1399]: 2024-02-09T19:02:43.560803Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:02:43.572509 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:43.574006 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:43.600924 systemd-networkd[1460]: eth0: DHCPv4 address 10.200.8.35/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:43.603599 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:44.839301 waagent[1399]: 2024-02-09T19:02:44.839205Z INFO Daemon Daemon Provisioning complete Feb 9 19:02:44.857373 waagent[1399]: 2024-02-09T19:02:44.857281Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:02:44.861014 waagent[1399]: 2024-02-09T19:02:44.860935Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
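The DHCPv4 line above reports address, broadcast, and gateway for eth0. The derived values can be reproduced with Python's stdlib `ipaddress` module; a minimal sketch using the figures from the log:

```python
import ipaddress

# Reconstruct the values systemd-networkd logged for eth0's DHCPv4 lease.
iface = ipaddress.ip_interface("10.200.8.35/24")
net = iface.network
print(net)                    # 10.200.8.0/24
print(net.broadcast_address)  # 10.200.8.255 ("brd" in the later ip output)
# The Azure WireServer (168.63.129.16, the DHCP server here) is off-subnet,
# so traffic to it goes via the 10.200.8.1 gateway:
print(ipaddress.ip_address("168.63.129.16") in net)  # False
```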
Feb 9 19:02:44.867102 waagent[1399]: 2024-02-09T19:02:44.867027Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:02:45.142485 waagent[1469]: 2024-02-09T19:02:45.142373Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:02:45.143294 waagent[1469]: 2024-02-09T19:02:45.143224Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:45.143443 waagent[1469]: 2024-02-09T19:02:45.143387Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:45.154850 waagent[1469]: 2024-02-09T19:02:45.154754Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:02:45.155047 waagent[1469]: 2024-02-09T19:02:45.154987Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:02:45.219353 waagent[1469]: 2024-02-09T19:02:45.219220Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:45.219592 waagent[1469]: 2024-02-09T19:02:45.219530Z INFO ExtHandler ExtHandler Certificate with thumbprint AFD7683C7805545F10FE5873F3897506C3BAA328 has no matching private key. 
Feb 9 19:02:45.219882 waagent[1469]: 2024-02-09T19:02:45.219798Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:02:45.234166 waagent[1469]: 2024-02-09T19:02:45.234099Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 0b543d61-5759-49af-a5d5-cd53667e5a12 New eTag: 14524515594534758184] Feb 9 19:02:45.234759 waagent[1469]: 2024-02-09T19:02:45.234700Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:45.590542 waagent[1469]: 2024-02-09T19:02:45.590298Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:45.600089 waagent[1469]: 2024-02-09T19:02:45.599995Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1469 Feb 9 19:02:45.603550 waagent[1469]: 2024-02-09T19:02:45.603474Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:45.604790 waagent[1469]: 2024-02-09T19:02:45.604728Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:45.695051 waagent[1469]: 2024-02-09T19:02:45.694970Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:45.695543 waagent[1469]: 2024-02-09T19:02:45.695476Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:45.703660 waagent[1469]: 2024-02-09T19:02:45.703596Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:02:45.704194 waagent[1469]: 2024-02-09T19:02:45.704133Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:45.705321 waagent[1469]: 2024-02-09T19:02:45.705252Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:02:45.706638 waagent[1469]: 2024-02-09T19:02:45.706578Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:45.707252 waagent[1469]: 2024-02-09T19:02:45.707197Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:45.707674 waagent[1469]: 2024-02-09T19:02:45.707617Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:45.708113 waagent[1469]: 2024-02-09T19:02:45.708060Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:45.708927 waagent[1469]: 2024-02-09T19:02:45.708868Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:45.709542 waagent[1469]: 2024-02-09T19:02:45.709486Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:45.710004 waagent[1469]: 2024-02-09T19:02:45.709949Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:45.710206 waagent[1469]: 2024-02-09T19:02:45.710148Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:45.710904 waagent[1469]: 2024-02-09T19:02:45.710848Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:45.711094 waagent[1469]: 2024-02-09T19:02:45.711040Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:45.711322 waagent[1469]: 2024-02-09T19:02:45.711273Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 9 19:02:45.711431 waagent[1469]: 2024-02-09T19:02:45.711380Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:45.712109 waagent[1469]: 2024-02-09T19:02:45.712052Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:45.712556 waagent[1469]: 2024-02-09T19:02:45.712500Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:45.712556 waagent[1469]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:45.712556 waagent[1469]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:45.712556 waagent[1469]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:45.712556 waagent[1469]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:45.712556 waagent[1469]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:45.712556 waagent[1469]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:45.712958 waagent[1469]: 2024-02-09T19:02:45.712761Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:45.713008 waagent[1469]: 2024-02-09T19:02:45.712933Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:45.735030 waagent[1469]: 2024-02-09T19:02:45.734960Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:02:45.735835 waagent[1469]: 2024-02-09T19:02:45.735775Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:45.736762 waagent[1469]: 2024-02-09T19:02:45.736715Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:02:45.747142 waagent[1469]: 2024-02-09T19:02:45.747075Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1460' Feb 9 19:02:45.787213 waagent[1469]: 2024-02-09T19:02:45.787123Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 9 19:02:45.853489 waagent[1469]: 2024-02-09T19:02:45.853293Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:45.853489 waagent[1469]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:45.853489 waagent[1469]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:45.853489 waagent[1469]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dc:30:ae brd ff:ff:ff:ff:ff:ff Feb 9 19:02:45.853489 waagent[1469]: 3: enP56735s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dc:30:ae brd ff:ff:ff:ff:ff:ff\ altname enP56735p0s2 Feb 9 19:02:45.853489 waagent[1469]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:45.853489 waagent[1469]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:45.853489 waagent[1469]: 2: eth0 inet 10.200.8.35/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:45.853489 waagent[1469]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:45.853489 waagent[1469]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:45.853489 waagent[1469]: 2: eth0 inet6 fe80::20d:3aff:fedc:30ae/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:46.103762 waagent[1469]: 2024-02-09T19:02:46.098765Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:02:46.871830 waagent[1399]: 
2024-02-09T19:02:46.871624Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:02:46.878139 waagent[1399]: 2024-02-09T19:02:46.878060Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:02:47.904419 waagent[1499]: 2024-02-09T19:02:47.904288Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:02:47.905157 waagent[1499]: 2024-02-09T19:02:47.905087Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:02:47.905308 waagent[1499]: 2024-02-09T19:02:47.905253Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:02:47.914864 waagent[1499]: 2024-02-09T19:02:47.914744Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:47.915258 waagent[1499]: 2024-02-09T19:02:47.915200Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:47.915424 waagent[1499]: 2024-02-09T19:02:47.915374Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:47.926670 waagent[1499]: 2024-02-09T19:02:47.926590Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:02:47.935203 waagent[1499]: 2024-02-09T19:02:47.935134Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:02:47.936179 waagent[1499]: 2024-02-09T19:02:47.936111Z INFO ExtHandler Feb 9 19:02:47.936336 waagent[1499]: 2024-02-09T19:02:47.936282Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8f1d9d95-3d60-4858-a6dd-2be62d640eb9 eTag: 14524515594534758184 source: Fabric] Feb 9 19:02:47.937067 waagent[1499]: 2024-02-09T19:02:47.937008Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 19:02:47.938181 waagent[1499]: 2024-02-09T19:02:47.938118Z INFO ExtHandler Feb 9 19:02:47.938318 waagent[1499]: 2024-02-09T19:02:47.938268Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:02:47.944920 waagent[1499]: 2024-02-09T19:02:47.944866Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:02:47.945368 waagent[1499]: 2024-02-09T19:02:47.945318Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:47.965226 waagent[1499]: 2024-02-09T19:02:47.965155Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:02:48.029961 waagent[1499]: 2024-02-09T19:02:48.029795Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AFD7683C7805545F10FE5873F3897506C3BAA328', 'hasPrivateKey': False} Feb 9 19:02:48.030958 waagent[1499]: 2024-02-09T19:02:48.030893Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True} Feb 9 19:02:48.031940 waagent[1499]: 2024-02-09T19:02:48.031879Z INFO ExtHandler Fetch goal state completed Feb 9 19:02:48.054630 waagent[1499]: 2024-02-09T19:02:48.054541Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1499 Feb 9 19:02:48.057933 waagent[1499]: 2024-02-09T19:02:48.057858Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:48.059368 waagent[1499]: 2024-02-09T19:02:48.059308Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:48.064600 waagent[1499]: 2024-02-09T19:02:48.064543Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:48.064996 waagent[1499]: 2024-02-09T19:02:48.064936Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:48.072992 
waagent[1499]: 2024-02-09T19:02:48.072938Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:02:48.073458 waagent[1499]: 2024-02-09T19:02:48.073401Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:48.079460 waagent[1499]: 2024-02-09T19:02:48.079363Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 19:02:48.084128 waagent[1499]: 2024-02-09T19:02:48.084066Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:02:48.085573 waagent[1499]: 2024-02-09T19:02:48.085510Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:48.086020 waagent[1499]: 2024-02-09T19:02:48.085955Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:48.086177 waagent[1499]: 2024-02-09T19:02:48.086128Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:48.086726 waagent[1499]: 2024-02-09T19:02:48.086665Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:48.087259 waagent[1499]: 2024-02-09T19:02:48.087200Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 19:02:48.087567 waagent[1499]: 2024-02-09T19:02:48.087513Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:48.087567 waagent[1499]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:48.087567 waagent[1499]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:48.087567 waagent[1499]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:48.087567 waagent[1499]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:48.087567 waagent[1499]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:48.087567 waagent[1499]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:48.087880 waagent[1499]: 2024-02-09T19:02:48.087703Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:48.087949 waagent[1499]: 2024-02-09T19:02:48.087892Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:48.088792 waagent[1499]: 2024-02-09T19:02:48.088734Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:48.092320 waagent[1499]: 2024-02-09T19:02:48.092161Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:48.092478 waagent[1499]: 2024-02-09T19:02:48.092426Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:48.096618 waagent[1499]: 2024-02-09T19:02:48.096484Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:48.099383 waagent[1499]: 2024-02-09T19:02:48.099115Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:48.104576 waagent[1499]: 2024-02-09T19:02:48.102549Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 19:02:48.104576 waagent[1499]: 2024-02-09T19:02:48.102446Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:48.104576 waagent[1499]: 2024-02-09T19:02:48.103917Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:48.126045 waagent[1499]: 2024-02-09T19:02:48.125974Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:02:48.127200 waagent[1499]: 2024-02-09T19:02:48.127138Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:02:48.131878 waagent[1499]: 2024-02-09T19:02:48.131797Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:48.131878 waagent[1499]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:48.131878 waagent[1499]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:48.131878 waagent[1499]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dc:30:ae brd ff:ff:ff:ff:ff:ff Feb 9 19:02:48.131878 waagent[1499]: 3: enP56735s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dc:30:ae brd ff:ff:ff:ff:ff:ff\ altname enP56735p0s2 Feb 9 19:02:48.131878 waagent[1499]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:48.131878 waagent[1499]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:48.131878 waagent[1499]: 2: eth0 inet 10.200.8.35/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:48.131878 waagent[1499]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:48.131878 waagent[1499]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:48.131878 waagent[1499]: 2: eth0 inet6 fe80::20d:3aff:fedc:30ae/64 scope link \ valid_lft forever 
preferred_lft forever Feb 9 19:02:48.192201 waagent[1499]: 2024-02-09T19:02:48.192077Z INFO ExtHandler ExtHandler Feb 9 19:02:48.195189 waagent[1499]: 2024-02-09T19:02:48.195123Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 19d990c9-a06f-43f7-b918-59a12cb66e43 correlation 64d1a9f6-2921-481f-a049-3d2cedf6a058 created: 2024-02-09T18:59:36.530838Z] Feb 9 19:02:48.196239 waagent[1499]: 2024-02-09T19:02:48.196172Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:02:48.198317 waagent[1499]: 2024-02-09T19:02:48.198257Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Feb 9 19:02:48.222315 waagent[1499]: 2024-02-09T19:02:48.222209Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:02:48.241557 waagent[1499]: 2024-02-09T19:02:48.241416Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 22F61036-2A32-44F7-B512-CCE0253883B5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:02:48.283420 waagent[1499]: 2024-02-09T19:02:48.283297Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 19:02:48.283420 waagent[1499]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.283420 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.283420 waagent[1499]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.283420 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.283420 waagent[1499]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.283420 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.283420 waagent[1499]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:02:48.283420 waagent[1499]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:48.283420 waagent[1499]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:48.290671 waagent[1499]: 2024-02-09T19:02:48.290566Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:02:48.290671 waagent[1499]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.290671 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.290671 waagent[1499]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.290671 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.290671 waagent[1499]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:48.290671 waagent[1499]: pkts bytes target prot opt in out source destination Feb 9 19:02:48.290671 waagent[1499]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:02:48.290671 waagent[1499]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:48.290671 waagent[1499]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:48.291267 waagent[1499]: 2024-02-09T19:02:48.291211Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:03:14.355434 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Feb 9 19:03:14.719353 systemd[1]: Created slice system-sshd.slice. Feb 9 19:03:14.721244 systemd[1]: Started sshd@0-10.200.8.35:22-10.200.12.6:34354.service. Feb 9 19:03:15.586253 sshd[1547]: Accepted publickey for core from 10.200.12.6 port 34354 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:15.587948 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:15.593017 systemd-logind[1295]: New session 3 of user core. Feb 9 19:03:15.593933 systemd[1]: Started session-3.scope. Feb 9 19:03:16.126494 systemd[1]: Started sshd@1-10.200.8.35:22-10.200.12.6:34362.service. Feb 9 19:03:16.741716 sshd[1552]: Accepted publickey for core from 10.200.12.6 port 34362 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:16.743373 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:16.749066 systemd[1]: Started session-4.scope. Feb 9 19:03:16.749792 systemd-logind[1295]: New session 4 of user core. Feb 9 19:03:17.180166 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:17.183565 systemd[1]: sshd@1-10.200.8.35:22-10.200.12.6:34362.service: Deactivated successfully. Feb 9 19:03:17.185010 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:03:17.185049 systemd-logind[1295]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:03:17.186348 systemd-logind[1295]: Removed session 4. Feb 9 19:03:17.285745 systemd[1]: Started sshd@2-10.200.8.35:22-10.200.12.6:34272.service. Feb 9 19:03:17.904106 sshd[1558]: Accepted publickey for core from 10.200.12.6 port 34272 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:17.905845 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:17.911514 systemd[1]: Started session-5.scope. Feb 9 19:03:17.912278 systemd-logind[1295]: New session 5 of user core. 
Feb 9 19:03:18.347380 sshd[1558]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:18.350939 systemd[1]: sshd@2-10.200.8.35:22-10.200.12.6:34272.service: Deactivated successfully. Feb 9 19:03:18.351976 systemd-logind[1295]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:18.352066 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:18.353142 systemd-logind[1295]: Removed session 5. Feb 9 19:03:18.450657 systemd[1]: Started sshd@3-10.200.8.35:22-10.200.12.6:34280.service. Feb 9 19:03:19.068982 sshd[1564]: Accepted publickey for core from 10.200.12.6 port 34280 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:19.070637 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:19.075668 systemd[1]: Started session-6.scope. Feb 9 19:03:19.076130 systemd-logind[1295]: New session 6 of user core. Feb 9 19:03:19.508857 sshd[1564]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:19.512015 systemd[1]: sshd@3-10.200.8.35:22-10.200.12.6:34280.service: Deactivated successfully. Feb 9 19:03:19.513245 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:19.513276 systemd-logind[1295]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:19.514317 systemd-logind[1295]: Removed session 6. Feb 9 19:03:19.614860 systemd[1]: Started sshd@4-10.200.8.35:22-10.200.12.6:34288.service. Feb 9 19:03:20.238155 sshd[1570]: Accepted publickey for core from 10.200.12.6 port 34288 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:20.239733 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:20.244480 systemd[1]: Started session-7.scope. Feb 9 19:03:20.245127 systemd-logind[1295]: New session 7 of user core. Feb 9 19:03:20.645290 update_engine[1297]: I0209 19:03:20.645234 1297 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:03:20.841341 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:03:20.841695 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:21.743495 systemd[1]: Starting docker.service... Feb 9 19:03:21.842503 env[1627]: time="2024-02-09T19:03:21.842437247Z" level=info msg="Starting up" Feb 9 19:03:21.843773 env[1627]: time="2024-02-09T19:03:21.843733548Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:21.843773 env[1627]: time="2024-02-09T19:03:21.843755948Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:21.844501 env[1627]: time="2024-02-09T19:03:21.843778548Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:21.844501 env[1627]: time="2024-02-09T19:03:21.843790848Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:21.845538 env[1627]: time="2024-02-09T19:03:21.845511850Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:21.845538 env[1627]: time="2024-02-09T19:03:21.845530150Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:21.845696 env[1627]: time="2024-02-09T19:03:21.845546750Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:21.845696 env[1627]: time="2024-02-09T19:03:21.845557750Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:21.984549 env[1627]: time="2024-02-09T19:03:21.984501289Z" level=info msg="Loading containers: start." Feb 9 19:03:22.108844 kernel: Initializing XFRM netlink socket Feb 9 19:03:22.157036 env[1627]: time="2024-02-09T19:03:22.156986252Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:03:22.267911 systemd-networkd[1460]: docker0: Link UP Feb 9 19:03:22.296070 env[1627]: time="2024-02-09T19:03:22.296027182Z" level=info msg="Loading containers: done." Feb 9 19:03:22.307013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3589574425-merged.mount: Deactivated successfully. Feb 9 19:03:22.324789 env[1627]: time="2024-02-09T19:03:22.324725809Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:03:22.325037 env[1627]: time="2024-02-09T19:03:22.325007210Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:03:22.325165 env[1627]: time="2024-02-09T19:03:22.325142210Z" level=info msg="Daemon has completed initialization" Feb 9 19:03:22.377196 systemd[1]: Started docker.service. Feb 9 19:03:22.387956 env[1627]: time="2024-02-09T19:03:22.387884769Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:03:22.404422 systemd[1]: Reloading. Feb 9 19:03:22.485365 /usr/lib/systemd/system-generators/torcx-generator[1755]: time="2024-02-09T19:03:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:22.485406 /usr/lib/systemd/system-generators/torcx-generator[1755]: time="2024-02-09T19:03:22Z" level=info msg="torcx already run" Feb 9 19:03:22.575216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:22.575236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 9 19:03:22.591439 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:22.681185 systemd[1]: Started kubelet.service. Feb 9 19:03:22.751495 kubelet[1816]: E0209 19:03:22.751424 1816 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:22.753208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:22.753375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:26.633823 env[1316]: time="2024-02-09T19:03:26.633753486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 19:03:27.317588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033082249.mount: Deactivated successfully. 
Feb 9 19:03:29.386230 env[1316]: time="2024-02-09T19:03:29.386171200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:29.396655 env[1316]: time="2024-02-09T19:03:29.396595607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:29.400908 env[1316]: time="2024-02-09T19:03:29.400861809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:29.408178 env[1316]: time="2024-02-09T19:03:29.408117414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:29.408897 env[1316]: time="2024-02-09T19:03:29.408859814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 9 19:03:29.419133 env[1316]: time="2024-02-09T19:03:29.419089520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 19:03:31.346594 env[1316]: time="2024-02-09T19:03:31.346526610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:31.359163 env[1316]: time="2024-02-09T19:03:31.359106016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:03:31.364509 env[1316]: time="2024-02-09T19:03:31.364458419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:31.375776 env[1316]: time="2024-02-09T19:03:31.375725425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:31.376590 env[1316]: time="2024-02-09T19:03:31.376553526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 9 19:03:31.387315 env[1316]: time="2024-02-09T19:03:31.387273531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 19:03:32.711536 env[1316]: time="2024-02-09T19:03:32.711477804Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.717294 env[1316]: time="2024-02-09T19:03:32.717242506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.722804 env[1316]: time="2024-02-09T19:03:32.722758309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.731722 env[1316]: time="2024-02-09T19:03:32.731676114Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.732317 env[1316]: time="2024-02-09T19:03:32.732282614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 9 19:03:32.742723 env[1316]: time="2024-02-09T19:03:32.742678019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 19:03:32.859298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:03:32.859614 systemd[1]: Stopped kubelet.service. Feb 9 19:03:32.861543 systemd[1]: Started kubelet.service. Feb 9 19:03:32.915193 kubelet[1851]: E0209 19:03:32.915138 1851 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:32.918302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:32.918467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:33.748104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900204558.mount: Deactivated successfully. 
Feb 9 19:03:34.316906 env[1316]: time="2024-02-09T19:03:34.316848643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.322444 env[1316]: time="2024-02-09T19:03:34.322396840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.327227 env[1316]: time="2024-02-09T19:03:34.327185310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.330094 env[1316]: time="2024-02-09T19:03:34.330055012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.330405 env[1316]: time="2024-02-09T19:03:34.330373423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 19:03:34.340653 env[1316]: time="2024-02-09T19:03:34.340605186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:03:34.768388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839239490.mount: Deactivated successfully. 
Feb 9 19:03:34.796457 env[1316]: time="2024-02-09T19:03:34.796406861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.805840 env[1316]: time="2024-02-09T19:03:34.805771494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.817393 env[1316]: time="2024-02-09T19:03:34.817344404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.823760 env[1316]: time="2024-02-09T19:03:34.823715731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.824168 env[1316]: time="2024-02-09T19:03:34.824133445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:03:34.834188 env[1316]: time="2024-02-09T19:03:34.834149301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 19:03:35.604190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41157648.mount: Deactivated successfully. 
Feb 9 19:03:40.102272 env[1316]: time="2024-02-09T19:03:40.102206433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:40.122010 env[1316]: time="2024-02-09T19:03:40.121948326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:40.129940 env[1316]: time="2024-02-09T19:03:40.129888964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:40.138069 env[1316]: time="2024-02-09T19:03:40.138019108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:40.138641 env[1316]: time="2024-02-09T19:03:40.138603626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 9 19:03:40.149994 env[1316]: time="2024-02-09T19:03:40.149942466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:03:40.770717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662995307.mount: Deactivated successfully. 
Feb 9 19:03:41.443642 env[1316]: time="2024-02-09T19:03:41.443581237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.455390 env[1316]: time="2024-02-09T19:03:41.455340780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.462786 env[1316]: time="2024-02-09T19:03:41.462732996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.469964 env[1316]: time="2024-02-09T19:03:41.469903405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:41.470394 env[1316]: time="2024-02-09T19:03:41.470355218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:03:43.109456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:03:43.109725 systemd[1]: Stopped kubelet.service. Feb 9 19:03:43.114972 systemd[1]: Started kubelet.service. 
Feb 9 19:03:43.195287 kubelet[1931]: E0209 19:03:43.195234 1931 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:43.197965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:43.198124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:44.176104 systemd[1]: Stopped kubelet.service. Feb 9 19:03:44.191197 systemd[1]: Reloading. Feb 9 19:03:44.261480 /usr/lib/systemd/system-generators/torcx-generator[1961]: time="2024-02-09T19:03:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:44.267925 /usr/lib/systemd/system-generators/torcx-generator[1961]: time="2024-02-09T19:03:44Z" level=info msg="torcx already run" Feb 9 19:03:44.367441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:44.367468 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:44.383640 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:44.478562 systemd[1]: Started kubelet.service. 
Feb 9 19:03:44.532240 kubelet[2023]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:44.532580 kubelet[2023]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:44.532627 kubelet[2023]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:44.532736 kubelet[2023]: I0209 19:03:44.532714 2023 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:44.902474 kubelet[2023]: I0209 19:03:44.902439 2023 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:03:44.902474 kubelet[2023]: I0209 19:03:44.902470 2023 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:44.902734 kubelet[2023]: I0209 19:03:44.902715 2023 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:03:44.907027 kubelet[2023]: E0209 19:03:44.907000 2023 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.907249 kubelet[2023]: I0209 19:03:44.907233 2023 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:44.909686 kubelet[2023]: I0209 19:03:44.909653 2023 server.go:662] 
"--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:03:44.909924 kubelet[2023]: I0209 19:03:44.909901 2023 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:44.910007 kubelet[2023]: I0209 19:03:44.909990 2023 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:44.910150 kubelet[2023]: I0209 19:03:44.910022 2023 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:44.910150 kubelet[2023]: I0209 19:03:44.910038 2023 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:03:44.910236 kubelet[2023]: I0209 19:03:44.910151 2023 
state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:44.916224 kubelet[2023]: I0209 19:03:44.916199 2023 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:03:44.916224 kubelet[2023]: I0209 19:03:44.916224 2023 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:03:44.916426 kubelet[2023]: I0209 19:03:44.916247 2023 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:03:44.916426 kubelet[2023]: I0209 19:03:44.916263 2023 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:44.917171 kubelet[2023]: I0209 19:03:44.917148 2023 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:03:44.917493 kubelet[2023]: W0209 19:03:44.917472 2023 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:03:44.917988 kubelet[2023]: I0209 19:03:44.917966 2023 server.go:1168] "Started kubelet" Feb 9 19:03:44.918155 kubelet[2023]: W0209 19:03:44.918108 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.918223 kubelet[2023]: E0209 19:03:44.918183 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.922179 kubelet[2023]: W0209 19:03:44.922143 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.922302 kubelet[2023]: E0209 19:03:44.922293 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.922482 kubelet[2023]: E0209 19:03:44.922397 2023 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-21fad7fabd.17b247232049575b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-21fad7fabd", UID:"ci-3510.3.2-a-21fad7fabd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-21fad7fabd"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.35:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.35:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:03:44.922714 kubelet[2023]: I0209 19:03:44.922703 2023 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 
19:03:44.923049 kubelet[2023]: I0209 19:03:44.923037 2023 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:03:44.923726 kubelet[2023]: I0209 19:03:44.923709 2023 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:03:44.924876 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:03:44.925507 kubelet[2023]: E0209 19:03:44.925491 2023 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:03:44.925635 kubelet[2023]: E0209 19:03:44.925622 2023 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:03:44.925753 kubelet[2023]: I0209 19:03:44.925504 2023 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:03:44.928180 kubelet[2023]: I0209 19:03:44.927909 2023 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:03:44.928371 kubelet[2023]: I0209 19:03:44.928349 2023 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:03:44.928796 kubelet[2023]: W0209 19:03:44.928750 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.928895 kubelet[2023]: E0209 19:03:44.928806 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.929840 kubelet[2023]: E0209 19:03:44.929803 2023 controller.go:146] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="200ms" Feb 9 19:03:44.977017 kubelet[2023]: I0209 19:03:44.976982 2023 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:44.977017 kubelet[2023]: I0209 19:03:44.977008 2023 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:44.977231 kubelet[2023]: I0209 19:03:44.977028 2023 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:44.984847 kubelet[2023]: I0209 19:03:44.983876 2023 policy_none.go:49] "None policy: Start" Feb 9 19:03:44.986543 kubelet[2023]: I0209 19:03:44.986524 2023 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:44.986703 kubelet[2023]: I0209 19:03:44.986691 2023 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:44.987765 kubelet[2023]: I0209 19:03:44.987738 2023 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:03:44.988872 kubelet[2023]: I0209 19:03:44.988859 2023 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:03:44.991478 kubelet[2023]: I0209 19:03:44.988930 2023 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:03:44.991478 kubelet[2023]: I0209 19:03:44.988949 2023 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:03:44.991478 kubelet[2023]: E0209 19:03:44.988990 2023 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:03:44.994903 kubelet[2023]: W0209 19:03:44.994883 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.995013 kubelet[2023]: E0209 19:03:44.995005 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:44.998960 systemd[1]: Created slice kubepods.slice. Feb 9 19:03:45.002922 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:03:45.006017 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:03:45.011876 kubelet[2023]: I0209 19:03:45.011855 2023 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:45.012565 kubelet[2023]: I0209 19:03:45.012535 2023 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:45.013549 kubelet[2023]: E0209 19:03:45.013533 2023 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-21fad7fabd\" not found" Feb 9 19:03:45.029430 kubelet[2023]: I0209 19:03:45.029398 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.029770 kubelet[2023]: E0209 19:03:45.029747 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.089228 kubelet[2023]: I0209 19:03:45.089175 2023 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:45.091183 kubelet[2023]: I0209 19:03:45.091154 2023 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:45.092831 kubelet[2023]: I0209 19:03:45.092784 2023 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:45.099559 systemd[1]: Created slice kubepods-burstable-pod438ad7df7064ff34abe840769ae22bd4.slice. Feb 9 19:03:45.111283 systemd[1]: Created slice kubepods-burstable-pod4de065b7a552bb0ab726c6d45525b0cf.slice. Feb 9 19:03:45.115851 systemd[1]: Created slice kubepods-burstable-pod1674114442a022d83ada34ac5b3c7075.slice. 
Feb 9 19:03:45.131138 kubelet[2023]: E0209 19:03:45.131104 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="400ms" Feb 9 19:03:45.230343 kubelet[2023]: I0209 19:03:45.229638 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4de065b7a552bb0ab726c6d45525b0cf-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-21fad7fabd\" (UID: \"4de065b7a552bb0ab726c6d45525b0cf\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230343 kubelet[2023]: I0209 19:03:45.229707 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230343 kubelet[2023]: I0209 19:03:45.229740 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230343 kubelet[2023]: I0209 19:03:45.229775 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230343 kubelet[2023]: I0209 19:03:45.229806 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230785 kubelet[2023]: I0209 19:03:45.229874 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230785 kubelet[2023]: I0209 19:03:45.229908 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230785 kubelet[2023]: I0209 19:03:45.229946 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.230785 kubelet[2023]: I0209 19:03:45.229982 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.233545 kubelet[2023]: I0209 19:03:45.233508 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.233919 kubelet[2023]: E0209 19:03:45.233896 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.411690 env[1316]: time="2024-02-09T19:03:45.411635345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-21fad7fabd,Uid:438ad7df7064ff34abe840769ae22bd4,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:45.415170 env[1316]: time="2024-02-09T19:03:45.415132936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-21fad7fabd,Uid:4de065b7a552bb0ab726c6d45525b0cf,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:45.418842 env[1316]: time="2024-02-09T19:03:45.418785232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-21fad7fabd,Uid:1674114442a022d83ada34ac5b3c7075,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:45.531921 kubelet[2023]: E0209 19:03:45.531779 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="800ms" Feb 9 19:03:45.636013 kubelet[2023]: I0209 19:03:45.635973 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.636540 kubelet[2023]: E0209 19:03:45.636370 2023 kubelet_node_status.go:92] "Unable to 
register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.817161 env[1316]: time="2024-02-09T19:03:45.817003061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-21fad7fabd,Uid:438ad7df7064ff34abe840769ae22bd4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Feb 9 19:03:45.817891 kubelet[2023]: E0209 19:03:45.817860 2023 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Feb 9 19:03:45.818053 kubelet[2023]: E0209 19:03:45.817959 2023 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: 
httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.818053 kubelet[2023]: E0209 19:03:45.818010 2023 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" Feb 9 19:03:45.818202 kubelet[2023]: E0209 19:03:45.818105 2023 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ci-3510.3.2-a-21fad7fabd_kube-system(438ad7df7064ff34abe840769ae22bd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ci-3510.3.2-a-21fad7fabd_kube-system(438ad7df7064ff34abe840769ae22bd4)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\\\": dial 
tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host\"" pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" podUID=438ad7df7064ff34abe840769ae22bd4 Feb 9 19:03:45.870676 kubelet[2023]: W0209 19:03:45.870630 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:45.870676 kubelet[2023]: E0209 19:03:45.870671 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:46.012001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618037816.mount: Deactivated successfully. Feb 9 19:03:46.040330 env[1316]: time="2024-02-09T19:03:46.040277281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.045499 env[1316]: time="2024-02-09T19:03:46.045454713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.058988 env[1316]: time="2024-02-09T19:03:46.058938357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.061795 env[1316]: time="2024-02-09T19:03:46.061750128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.067544 env[1316]: time="2024-02-09T19:03:46.067423073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.070081 env[1316]: time="2024-02-09T19:03:46.070040840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.074640 env[1316]: time="2024-02-09T19:03:46.074593856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.077484 env[1316]: time="2024-02-09T19:03:46.077445129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:46.150413 env[1316]: time="2024-02-09T19:03:46.150188783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:46.150413 env[1316]: time="2024-02-09T19:03:46.150232384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:46.150413 env[1316]: time="2024-02-09T19:03:46.150248185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:46.150784 env[1316]: time="2024-02-09T19:03:46.150729097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44 pid=2061 runtime=io.containerd.runc.v2 Feb 9 19:03:46.160230 env[1316]: time="2024-02-09T19:03:46.160152237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:46.160373 env[1316]: time="2024-02-09T19:03:46.160249240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:46.160373 env[1316]: time="2024-02-09T19:03:46.160277240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:46.160494 env[1316]: time="2024-02-09T19:03:46.160455445Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f pid=2080 runtime=io.containerd.runc.v2 Feb 9 19:03:46.174970 systemd[1]: Started cri-containerd-6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44.scope. Feb 9 19:03:46.180510 systemd[1]: Started cri-containerd-ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f.scope. 
Feb 9 19:03:46.239643 env[1316]: time="2024-02-09T19:03:46.239580462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-21fad7fabd,Uid:1674114442a022d83ada34ac5b3c7075,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44\"" Feb 9 19:03:46.244586 env[1316]: time="2024-02-09T19:03:46.244519688Z" level=info msg="CreateContainer within sandbox \"6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:03:46.247312 env[1316]: time="2024-02-09T19:03:46.247282358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-21fad7fabd,Uid:4de065b7a552bb0ab726c6d45525b0cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f\"" Feb 9 19:03:46.249654 env[1316]: time="2024-02-09T19:03:46.249625018Z" level=info msg="CreateContainer within sandbox \"ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:03:46.332252 kubelet[2023]: E0209 19:03:46.332148 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="1.6s" Feb 9 19:03:46.355890 env[1316]: time="2024-02-09T19:03:46.355825126Z" level=info msg="CreateContainer within sandbox \"6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240\"" Feb 9 19:03:46.356640 env[1316]: time="2024-02-09T19:03:46.356604445Z" level=info msg="StartContainer for 
\"daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240\"" Feb 9 19:03:46.362750 env[1316]: time="2024-02-09T19:03:46.362700101Z" level=info msg="CreateContainer within sandbox \"ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5\"" Feb 9 19:03:46.363329 env[1316]: time="2024-02-09T19:03:46.363295816Z" level=info msg="StartContainer for \"4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5\"" Feb 9 19:03:46.376457 kubelet[2023]: W0209 19:03:46.376361 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:46.376457 kubelet[2023]: E0209 19:03:46.376431 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused Feb 9 19:03:46.386363 systemd[1]: Started cri-containerd-4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5.scope. Feb 9 19:03:46.387550 systemd[1]: Started cri-containerd-daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240.scope. 
Feb 9 19:03:46.439217 kubelet[2023]: I0209 19:03:46.439185 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:46.439599 kubelet[2023]: E0209 19:03:46.439575 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:46.441174 kubelet[2023]: W0209 19:03:46.441074 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:46.441174 kubelet[2023]: E0209 19:03:46.441152 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:46.467160 env[1316]: time="2024-02-09T19:03:46.467097862Z" level=info msg="StartContainer for \"daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240\" returns successfully"
Feb 9 19:03:46.472977 env[1316]: time="2024-02-09T19:03:46.472933911Z" level=info msg="StartContainer for \"4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5\" returns successfully"
Feb 9 19:03:46.520410 kubelet[2023]: W0209 19:03:46.520281 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:46.520410 kubelet[2023]: E0209 19:03:46.520366 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:46.962032 kubelet[2023]: E0209 19:03:46.962000 2023 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:47.932769 kubelet[2023]: E0209 19:03:47.932726 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="3.2s"
Feb 9 19:03:48.041826 kubelet[2023]: I0209 19:03:48.041775 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:48.042445 kubelet[2023]: E0209 19:03:48.042391 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:48.245217 kubelet[2023]: W0209 19:03:48.245078 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.245217 kubelet[2023]: E0209 19:03:48.245141 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.310943 kubelet[2023]: W0209 19:03:48.310877 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.310943 kubelet[2023]: E0209 19:03:48.310949 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.647836 kubelet[2023]: W0209 19:03:48.647760 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.648030 kubelet[2023]: E0209 19:03:48.647860 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:48.911017 kubelet[2023]: E0209 19:03:48.910828 2023 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-21fad7fabd.17b247232049575b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-21fad7fabd", UID:"ci-3510.3.2-a-21fad7fabd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-21fad7fabd"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.35:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.35:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:03:49.663919 kubelet[2023]: W0209 19:03:49.663855 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:49.663919 kubelet[2023]: E0209 19:03:49.663920 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:50.999971 kubelet[2023]: E0209 19:03:50.999932 2023 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:51.133703 kubelet[2023]: E0209 19:03:51.133661 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="6.4s"
Feb 9 19:03:51.244288 kubelet[2023]: I0209 19:03:51.244245 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:51.244667 kubelet[2023]: E0209 19:03:51.244641 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:52.273148 kubelet[2023]: W0209 19:03:52.273102 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:52.273148 kubelet[2023]: E0209 19:03:52.273150 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:53.072215 kubelet[2023]: W0209 19:03:53.072169 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:53.072215 kubelet[2023]: E0209 19:03:53.072218 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:53.306072 kubelet[2023]: W0209 19:03:53.306026 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:53.306072 kubelet[2023]: E0209 19:03:53.306073 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:54.677537 kubelet[2023]: W0209 19:03:54.677408 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:54.677537 kubelet[2023]: E0209 19:03:54.677477 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:55.014355 kubelet[2023]: E0209 19:03:55.014163 2023 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:03:57.534318 kubelet[2023]: E0209 19:03:57.534275 2023 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": dial tcp 10.200.8.35:6443: connect: connection refused" interval="7s"
Feb 9 19:03:57.646726 kubelet[2023]: I0209 19:03:57.646686 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:57.647123 kubelet[2023]: E0209 19:03:57.647096 2023 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.35:6443/api/v1/nodes\": dial tcp 10.200.8.35:6443: connect: connection refused" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:03:58.911926 kubelet[2023]: E0209 19:03:58.911788 2023 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-21fad7fabd.17b247232049575b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-21fad7fabd", UID:"ci-3510.3.2-a-21fad7fabd", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-21fad7fabd"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 44, 917944155, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.35:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.35:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:03:59.619135 kubelet[2023]: E0209 19:03:59.619094 2023 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:03:59.992452 env[1316]: time="2024-02-09T19:03:59.992311103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-21fad7fabd,Uid:438ad7df7064ff34abe840769ae22bd4,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:03.241094 kubelet[2023]: W0209 19:04:03.241047 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:04:03.241094 kubelet[2023]: E0209 19:04:03.241097 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-21fad7fabd&limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:04:03.395891 env[1316]: time="2024-02-09T19:04:03.395799478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:03.396371 env[1316]: time="2024-02-09T19:04:03.395858379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:03.396371 env[1316]: time="2024-02-09T19:04:03.395872780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:03.396371 env[1316]: time="2024-02-09T19:04:03.396029382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b pid=2222 runtime=io.containerd.runc.v2
Feb 9 19:04:03.416742 systemd[1]: run-containerd-runc-k8s.io-3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b-runc.YkTPSV.mount: Deactivated successfully.
Feb 9 19:04:03.422363 systemd[1]: Started cri-containerd-3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b.scope.
Feb 9 19:04:03.461453 env[1316]: time="2024-02-09T19:04:03.461403761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-21fad7fabd,Uid:438ad7df7064ff34abe840769ae22bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b\""
Feb 9 19:04:03.464056 env[1316]: time="2024-02-09T19:04:03.464015704Z" level=info msg="CreateContainer within sandbox \"3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:04:03.506159 env[1316]: time="2024-02-09T19:04:03.505500389Z" level=info msg="CreateContainer within sandbox \"3f8f80fa5ffe89a14d57ee8a2beb0f7cffbdddadd66095bc5c8804cc423ccd5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37a239736abff61396856804df6c22c8fcc2eb562774d381753cd02ce58dc530\""
Feb 9 19:04:03.506321 env[1316]: time="2024-02-09T19:04:03.506151400Z" level=info msg="StartContainer for \"37a239736abff61396856804df6c22c8fcc2eb562774d381753cd02ce58dc530\""
Feb 9 19:04:03.522998 systemd[1]: Started cri-containerd-37a239736abff61396856804df6c22c8fcc2eb562774d381753cd02ce58dc530.scope.
Feb 9 19:04:03.573608 env[1316]: time="2024-02-09T19:04:03.573554412Z" level=info msg="StartContainer for \"37a239736abff61396856804df6c22c8fcc2eb562774d381753cd02ce58dc530\" returns successfully"
Feb 9 19:04:03.581947 kubelet[2023]: W0209 19:04:03.581852 2023 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:04:03.581947 kubelet[2023]: E0209 19:04:03.581900 2023 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.35:6443: connect: connection refused
Feb 9 19:04:04.650088 kubelet[2023]: I0209 19:04:04.650048 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:05.015390 kubelet[2023]: E0209 19:04:05.015276 2023 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.167804 kubelet[2023]: I0209 19:04:05.167759 2023 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:05.168419 kubelet[2023]: E0209 19:04:05.168390 2023 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-21fad7fabd\" not found" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:05.219117 kubelet[2023]: E0209 19:04:05.219077 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.319933 kubelet[2023]: E0209 19:04:05.319793 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.420130 kubelet[2023]: E0209 19:04:05.420084 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.520691 kubelet[2023]: E0209 19:04:05.520651 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.621284 kubelet[2023]: E0209 19:04:05.621242 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.722005 kubelet[2023]: E0209 19:04:05.721961 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.822675 kubelet[2023]: E0209 19:04:05.822624 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:05.923066 kubelet[2023]: E0209 19:04:05.922928 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.023929 kubelet[2023]: E0209 19:04:06.023888 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.124226 kubelet[2023]: E0209 19:04:06.124187 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.224958 kubelet[2023]: E0209 19:04:06.224827 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.325464 kubelet[2023]: E0209 19:04:06.325389 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.426360 kubelet[2023]: E0209 19:04:06.426316 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.526960 kubelet[2023]: E0209 19:04:06.526841 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.627529 kubelet[2023]: E0209 19:04:06.627479 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.728118 kubelet[2023]: E0209 19:04:06.728066 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.828957 kubelet[2023]: E0209 19:04:06.828798 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:06.928964 kubelet[2023]: E0209 19:04:06.928916 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.029933 kubelet[2023]: E0209 19:04:07.029884 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.130825 kubelet[2023]: E0209 19:04:07.130770 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.231483 kubelet[2023]: E0209 19:04:07.231429 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.332343 kubelet[2023]: E0209 19:04:07.332285 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.433443 kubelet[2023]: E0209 19:04:07.433306 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.534192 kubelet[2023]: E0209 19:04:07.534151 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.634916 kubelet[2023]: E0209 19:04:07.634846 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.735538 kubelet[2023]: E0209 19:04:07.735409 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.836179 kubelet[2023]: E0209 19:04:07.836125 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:07.936786 kubelet[2023]: E0209 19:04:07.936733 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.036993 kubelet[2023]: E0209 19:04:08.036880 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.076825 systemd[1]: Reloading.
Feb 9 19:04:08.137865 kubelet[2023]: E0209 19:04:08.137825 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.162748 /usr/lib/systemd/system-generators/torcx-generator[2315]: time="2024-02-09T19:04:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:04:08.162789 /usr/lib/systemd/system-generators/torcx-generator[2315]: time="2024-02-09T19:04:08Z" level=info msg="torcx already run"
Feb 9 19:04:08.238380 kubelet[2023]: E0209 19:04:08.238325 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.312581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:04:08.312601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:04:08.329383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:04:08.339114 kubelet[2023]: E0209 19:04:08.339054 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.439793 kubelet[2023]: E0209 19:04:08.439666 2023 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-21fad7fabd\" not found"
Feb 9 19:04:08.445486 systemd[1]: Stopping kubelet.service...
Feb 9 19:04:08.458231 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:04:08.458464 systemd[1]: Stopped kubelet.service.
Feb 9 19:04:08.460729 systemd[1]: Started kubelet.service.
Feb 9 19:04:08.547182 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:04:08.547182 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:04:08.547182 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:04:08.547789 kubelet[2378]: I0209 19:04:08.547233 2378 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:04:08.551751 kubelet[2378]: I0209 19:04:08.551718 2378 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 9 19:04:08.551893 kubelet[2378]: I0209 19:04:08.551851 2378 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:04:08.552181 kubelet[2378]: I0209 19:04:08.552160 2378 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 9 19:04:08.553747 kubelet[2378]: I0209 19:04:08.553719 2378 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:04:08.554895 kubelet[2378]: I0209 19:04:08.554876 2378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:04:08.558739 kubelet[2378]: I0209 19:04:08.558710 2378 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:04:08.559016 kubelet[2378]: I0209 19:04:08.558997 2378 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:04:08.559115 kubelet[2378]: I0209 19:04:08.559097 2378 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:04:08.559232 kubelet[2378]: I0209 19:04:08.559129 2378 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:04:08.559232 kubelet[2378]: I0209 19:04:08.559151 2378 container_manager_linux.go:302] "Creating device plugin manager"
Feb 9 19:04:08.559232 kubelet[2378]: I0209 19:04:08.559189 2378 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:04:08.566636 kubelet[2378]: I0209 19:04:08.566540 2378 kubelet.go:405] "Attempting to sync node with API server"
Feb 9 19:04:08.566636 kubelet[2378]: I0209 19:04:08.566563 2378 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:04:08.566636 kubelet[2378]: I0209 19:04:08.566585 2378 kubelet.go:309] "Adding apiserver pod source"
Feb 9 19:04:08.566636 kubelet[2378]: I0209 19:04:08.566601 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:04:08.576888 kubelet[2378]: I0209 19:04:08.573960 2378 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:04:08.576888 kubelet[2378]: I0209 19:04:08.574539 2378 server.go:1168] "Started kubelet"
Feb 9 19:04:08.576888 kubelet[2378]: I0209 19:04:08.576788 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:04:08.582920 kubelet[2378]: I0209 19:04:08.582761 2378 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:04:08.584031 kubelet[2378]: I0209 19:04:08.584012 2378 server.go:461] "Adding debug handlers to kubelet server"
Feb 9 19:04:08.585405 kubelet[2378]: I0209 19:04:08.585388 2378 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 19:04:08.587621 kubelet[2378]: I0209 19:04:08.587600 2378 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 9 19:04:08.595004 kubelet[2378]: I0209 19:04:08.594979 2378 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 9 19:04:08.597261 kubelet[2378]: I0209 19:04:08.597239 2378 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:04:08.598517 kubelet[2378]: I0209 19:04:08.598498 2378 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:04:08.598634 kubelet[2378]: I0209 19:04:08.598625 2378 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 9 19:04:08.598711 kubelet[2378]: I0209 19:04:08.598704 2378 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 9 19:04:08.598832 kubelet[2378]: E0209 19:04:08.598805 2378 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:04:08.617235 kubelet[2378]: E0209 19:04:08.617207 2378 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:04:08.617435 kubelet[2378]: E0209 19:04:08.617425 2378 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:04:08.685718 kubelet[2378]: I0209 19:04:08.685692 2378 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:04:08.685895 kubelet[2378]: I0209 19:04:08.685869 2378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:04:08.685895 kubelet[2378]: I0209 19:04:08.685892 2378 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:04:08.686127 kubelet[2378]: I0209 19:04:08.686104 2378 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:04:08.686210 kubelet[2378]: I0209 19:04:08.686139 2378 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 19:04:08.686210 kubelet[2378]: I0209 19:04:08.686149 2378 policy_none.go:49] "None policy: Start"
Feb 9 19:04:08.686938 kubelet[2378]: I0209 19:04:08.686901 2378 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:04:08.687066 kubelet[2378]: I0209 19:04:08.687057 2378 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:04:08.687255 kubelet[2378]: I0209 19:04:08.687247 2378 state_mem.go:75] "Updated machine memory state"
Feb 9 19:04:08.690538 kubelet[2378]: I0209 19:04:08.690520 2378 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.698028 kubelet[2378]: I0209 19:04:08.698005 2378 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:04:08.698291 kubelet[2378]: I0209 19:04:08.698269 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:04:08.699520 kubelet[2378]: I0209 19:04:08.699500 2378 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:04:08.699922 kubelet[2378]: I0209 19:04:08.699887 2378 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:04:08.700134 kubelet[2378]: I0209 19:04:08.700117 2378 topology_manager.go:212] "Topology Admit Handler"
Feb 9 19:04:08.702656 kubelet[2378]: I0209 19:04:08.702624 2378 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.702835 kubelet[2378]: I0209 19:04:08.702824 2378 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.715889 kubelet[2378]: W0209 19:04:08.715868 2378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:04:08.722146 kubelet[2378]: W0209 19:04:08.721933 2378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:04:08.722146 kubelet[2378]: W0209 19:04:08.722071 2378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:04:08.897016 kubelet[2378]: I0209 19:04:08.896976 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.897306 kubelet[2378]: I0209 19:04:08.897290 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.897506 kubelet[2378]: I0209 19:04:08.897492 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.897664 kubelet[2378]: I0209 19:04:08.897652 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.897798 kubelet[2378]: I0209 19:04:08.897788 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4de065b7a552bb0ab726c6d45525b0cf-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-21fad7fabd\" (UID: \"4de065b7a552bb0ab726c6d45525b0cf\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.897941 kubelet[2378]: I0209 19:04:08.897931 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.898084 kubelet[2378]: I0209 19:04:08.898073 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1674114442a022d83ada34ac5b3c7075-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-21fad7fabd\" (UID: \"1674114442a022d83ada34ac5b3c7075\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.898218 kubelet[2378]: I0209 19:04:08.898208 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:08.898417 kubelet[2378]: I0209 19:04:08.898405 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/438ad7df7064ff34abe840769ae22bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-21fad7fabd\" (UID: \"438ad7df7064ff34abe840769ae22bd4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd"
Feb 9 19:04:09.574310 kubelet[2378]: I0209 19:04:09.574263 2378 apiserver.go:52] "Watching apiserver"
Feb 9 19:04:09.597170 kubelet[2378]: I0209 19:04:09.597122 2378 desired_state_of_world_populator.go:153] "Finished populating initial desired state
of world" Feb 9 19:04:09.603233 kubelet[2378]: I0209 19:04:09.603193 2378 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:04:09.680088 kubelet[2378]: I0209 19:04:09.679740 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" podStartSLOduration=1.679678071 podCreationTimestamp="2024-02-09 19:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:09.67891586 +0000 UTC m=+1.213508266" watchObservedRunningTime="2024-02-09 19:04:09.679678071 +0000 UTC m=+1.214270477" Feb 9 19:04:09.696049 kubelet[2378]: I0209 19:04:09.696012 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-21fad7fabd" podStartSLOduration=1.695952304 podCreationTimestamp="2024-02-09 19:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:09.686804473 +0000 UTC m=+1.221396979" watchObservedRunningTime="2024-02-09 19:04:09.695952304 +0000 UTC m=+1.230544710" Feb 9 19:04:09.707451 kubelet[2378]: I0209 19:04:09.707411 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-21fad7fabd" podStartSLOduration=1.707358267 podCreationTimestamp="2024-02-09 19:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:09.696507312 +0000 UTC m=+1.231099718" watchObservedRunningTime="2024-02-09 19:04:09.707358267 +0000 UTC m=+1.241950773" Feb 9 19:04:09.800264 sudo[1573]: pam_unix(sudo:session): session closed for user root Feb 9 19:04:09.917552 sshd[1570]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:09.921163 systemd[1]: 
sshd@4-10.200.8.35:22-10.200.12.6:34288.service: Deactivated successfully. Feb 9 19:04:09.922327 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:04:09.922582 systemd[1]: session-7.scope: Consumed 3.189s CPU time. Feb 9 19:04:09.923268 systemd-logind[1295]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:04:09.924309 systemd-logind[1295]: Removed session 7. Feb 9 19:04:21.085155 kubelet[2378]: I0209 19:04:21.085124 2378 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:04:21.085676 env[1316]: time="2024-02-09T19:04:21.085615769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:04:21.086245 kubelet[2378]: I0209 19:04:21.086224 2378 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:04:30.300549 kubelet[2378]: I0209 19:04:30.300499 2378 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:30.306573 systemd[1]: Created slice kubepods-burstable-podac984984_5ce2_4618_bd8a_8a16f92165fb.slice. Feb 9 19:04:30.311828 kubelet[2378]: I0209 19:04:30.311785 2378 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:30.317472 systemd[1]: Created slice kubepods-besteffort-pod3b824312_0539_4baa_b977_4fb8257411f7.slice. 
Feb 9 19:04:30.435486 kubelet[2378]: I0209 19:04:30.435434 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b824312-0539-4baa-b977-4fb8257411f7-lib-modules\") pod \"kube-proxy-x4gsj\" (UID: \"3b824312-0539-4baa-b977-4fb8257411f7\") " pod="kube-system/kube-proxy-x4gsj" Feb 9 19:04:30.435711 kubelet[2378]: I0209 19:04:30.435568 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b824312-0539-4baa-b977-4fb8257411f7-kube-proxy\") pod \"kube-proxy-x4gsj\" (UID: \"3b824312-0539-4baa-b977-4fb8257411f7\") " pod="kube-system/kube-proxy-x4gsj" Feb 9 19:04:30.435711 kubelet[2378]: I0209 19:04:30.435626 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ac984984-5ce2-4618-bd8a-8a16f92165fb-cni\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.435711 kubelet[2378]: I0209 19:04:30.435660 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ac984984-5ce2-4618-bd8a-8a16f92165fb-flannel-cfg\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.435952 kubelet[2378]: I0209 19:04:30.435753 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b824312-0539-4baa-b977-4fb8257411f7-xtables-lock\") pod \"kube-proxy-x4gsj\" (UID: \"3b824312-0539-4baa-b977-4fb8257411f7\") " pod="kube-system/kube-proxy-x4gsj" Feb 9 19:04:30.435952 kubelet[2378]: I0209 19:04:30.435860 2378 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac984984-5ce2-4618-bd8a-8a16f92165fb-run\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.435952 kubelet[2378]: I0209 19:04:30.435936 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7ggl\" (UniqueName: \"kubernetes.io/projected/ac984984-5ce2-4618-bd8a-8a16f92165fb-kube-api-access-v7ggl\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.436124 kubelet[2378]: I0209 19:04:30.436014 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ac984984-5ce2-4618-bd8a-8a16f92165fb-cni-plugin\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.436124 kubelet[2378]: I0209 19:04:30.436084 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac984984-5ce2-4618-bd8a-8a16f92165fb-xtables-lock\") pod \"kube-flannel-ds-rd8wp\" (UID: \"ac984984-5ce2-4618-bd8a-8a16f92165fb\") " pod="kube-flannel/kube-flannel-ds-rd8wp" Feb 9 19:04:30.436239 kubelet[2378]: I0209 19:04:30.436127 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxrg\" (UniqueName: \"kubernetes.io/projected/3b824312-0539-4baa-b977-4fb8257411f7-kube-api-access-9mxrg\") pod \"kube-proxy-x4gsj\" (UID: \"3b824312-0539-4baa-b977-4fb8257411f7\") " pod="kube-system/kube-proxy-x4gsj" Feb 9 19:04:30.610390 env[1316]: time="2024-02-09T19:04:30.610345296Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-rd8wp,Uid:ac984984-5ce2-4618-bd8a-8a16f92165fb,Namespace:kube-flannel,Attempt:0,}" Feb 9 19:04:30.627523 env[1316]: time="2024-02-09T19:04:30.627483155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4gsj,Uid:3b824312-0539-4baa-b977-4fb8257411f7,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:30.669484 env[1316]: time="2024-02-09T19:04:30.669415344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:30.669484 env[1316]: time="2024-02-09T19:04:30.669450044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:30.669752 env[1316]: time="2024-02-09T19:04:30.669464544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:30.669752 env[1316]: time="2024-02-09T19:04:30.669625246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73 pid=2443 runtime=io.containerd.runc.v2 Feb 9 19:04:30.702904 systemd[1]: Started cri-containerd-ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73.scope. Feb 9 19:04:30.714120 env[1316]: time="2024-02-09T19:04:30.714042957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:30.714349 env[1316]: time="2024-02-09T19:04:30.714322360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:30.714474 env[1316]: time="2024-02-09T19:04:30.714450361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:30.715368 env[1316]: time="2024-02-09T19:04:30.715321169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5dbcc70592dc1281d405af93a64919d7e004dba51e40e6e4d2a76445a9e255d9 pid=2471 runtime=io.containerd.runc.v2 Feb 9 19:04:30.737341 systemd[1]: Started cri-containerd-5dbcc70592dc1281d405af93a64919d7e004dba51e40e6e4d2a76445a9e255d9.scope. Feb 9 19:04:30.773928 env[1316]: time="2024-02-09T19:04:30.773876912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rd8wp,Uid:ac984984-5ce2-4618-bd8a-8a16f92165fb,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\"" Feb 9 19:04:30.776910 env[1316]: time="2024-02-09T19:04:30.775521228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4gsj,Uid:3b824312-0539-4baa-b977-4fb8257411f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dbcc70592dc1281d405af93a64919d7e004dba51e40e6e4d2a76445a9e255d9\"" Feb 9 19:04:30.778544 env[1316]: time="2024-02-09T19:04:30.778508055Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 9 19:04:30.780616 env[1316]: time="2024-02-09T19:04:30.780586375Z" level=info msg="CreateContainer within sandbox \"5dbcc70592dc1281d405af93a64919d7e004dba51e40e6e4d2a76445a9e255d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:04:30.823633 env[1316]: time="2024-02-09T19:04:30.823589773Z" level=info msg="CreateContainer within sandbox \"5dbcc70592dc1281d405af93a64919d7e004dba51e40e6e4d2a76445a9e255d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8829878403eb7cc7cc152febc56738c915b7de25b1e063370e5386eb935bb06\"" Feb 9 19:04:30.824453 env[1316]: time="2024-02-09T19:04:30.824418981Z" level=info msg="StartContainer for 
\"e8829878403eb7cc7cc152febc56738c915b7de25b1e063370e5386eb935bb06\"" Feb 9 19:04:30.844071 systemd[1]: Started cri-containerd-e8829878403eb7cc7cc152febc56738c915b7de25b1e063370e5386eb935bb06.scope. Feb 9 19:04:30.884741 env[1316]: time="2024-02-09T19:04:30.883728231Z" level=info msg="StartContainer for \"e8829878403eb7cc7cc152febc56738c915b7de25b1e063370e5386eb935bb06\" returns successfully" Feb 9 19:04:31.705078 kubelet[2378]: I0209 19:04:31.705046 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-x4gsj" podStartSLOduration=10.70499363 podCreationTimestamp="2024-02-09 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:31.702860011 +0000 UTC m=+23.237452417" watchObservedRunningTime="2024-02-09 19:04:31.70499363 +0000 UTC m=+23.239586036" Feb 9 19:04:32.806917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318292836.mount: Deactivated successfully. 
Feb 9 19:04:32.901074 env[1316]: time="2024-02-09T19:04:32.901007778Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:32.909081 env[1316]: time="2024-02-09T19:04:32.909032950Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:32.912210 env[1316]: time="2024-02-09T19:04:32.912168578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:32.915243 env[1316]: time="2024-02-09T19:04:32.915203905Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:32.916152 env[1316]: time="2024-02-09T19:04:32.915802110Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 9 19:04:32.918615 env[1316]: time="2024-02-09T19:04:32.918579335Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 19:04:32.955537 env[1316]: time="2024-02-09T19:04:32.955475265Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288\"" Feb 9 19:04:32.956094 env[1316]: 
time="2024-02-09T19:04:32.956062070Z" level=info msg="StartContainer for \"0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288\"" Feb 9 19:04:32.977362 systemd[1]: Started cri-containerd-0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288.scope. Feb 9 19:04:33.006212 systemd[1]: cri-containerd-0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288.scope: Deactivated successfully. Feb 9 19:04:33.013470 env[1316]: time="2024-02-09T19:04:33.013427481Z" level=info msg="StartContainer for \"0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288\" returns successfully" Feb 9 19:04:33.132371 env[1316]: time="2024-02-09T19:04:33.132317727Z" level=info msg="shim disconnected" id=0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288 Feb 9 19:04:33.132371 env[1316]: time="2024-02-09T19:04:33.132372427Z" level=warning msg="cleaning up after shim disconnected" id=0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288 namespace=k8s.io Feb 9 19:04:33.132705 env[1316]: time="2024-02-09T19:04:33.132384227Z" level=info msg="cleaning up dead shim" Feb 9 19:04:33.140797 env[1316]: time="2024-02-09T19:04:33.140746001Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2715 runtime=io.containerd.runc.v2\n" Feb 9 19:04:33.702465 env[1316]: time="2024-02-09T19:04:33.702415637Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 9 19:04:33.716635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec62b20639dce709c4321d38b4d0a25b212f90b1d460e1d9186d002bd2a7288-rootfs.mount: Deactivated successfully. Feb 9 19:04:35.711621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536709763.mount: Deactivated successfully. 
Feb 9 19:04:36.892956 env[1316]: time="2024-02-09T19:04:36.892893738Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:36.901978 env[1316]: time="2024-02-09T19:04:36.901893114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:36.906572 env[1316]: time="2024-02-09T19:04:36.906521852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:36.910555 env[1316]: time="2024-02-09T19:04:36.910505386Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:36.911284 env[1316]: time="2024-02-09T19:04:36.911246692Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 9 19:04:36.914507 env[1316]: time="2024-02-09T19:04:36.914355818Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:04:36.977273 env[1316]: time="2024-02-09T19:04:36.977218643Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446\"" Feb 9 19:04:36.978070 env[1316]: time="2024-02-09T19:04:36.977854948Z" level=info msg="StartContainer for 
\"6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446\"" Feb 9 19:04:37.005149 systemd[1]: Started cri-containerd-6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446.scope. Feb 9 19:04:37.032597 systemd[1]: cri-containerd-6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446.scope: Deactivated successfully. Feb 9 19:04:37.037022 env[1316]: time="2024-02-09T19:04:37.036930836Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac984984_5ce2_4618_bd8a_8a16f92165fb.slice/cri-containerd-6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446.scope/memory.events\": no such file or directory" Feb 9 19:04:37.040431 kubelet[2378]: I0209 19:04:37.040206 2378 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:04:37.045499 env[1316]: time="2024-02-09T19:04:37.043048586Z" level=info msg="StartContainer for \"6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446\" returns successfully" Feb 9 19:04:37.067513 kubelet[2378]: I0209 19:04:37.067480 2378 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:37.073689 kubelet[2378]: I0209 19:04:37.073144 2378 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:37.075782 systemd[1]: Created slice kubepods-burstable-pod7bc2931e_0508_4b0d_94b3_91d379dd40d1.slice. Feb 9 19:04:37.086942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446-rootfs.mount: Deactivated successfully. Feb 9 19:04:37.088522 systemd[1]: Created slice kubepods-burstable-pod47ab8a40_26cd_4daf_95ab_fe5e2b39e71b.slice. 
Feb 9 19:04:37.222886 kubelet[2378]: I0209 19:04:37.191617 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmzt\" (UniqueName: \"kubernetes.io/projected/7bc2931e-0508-4b0d-94b3-91d379dd40d1-kube-api-access-slmzt\") pod \"coredns-5d78c9869d-s2lw4\" (UID: \"7bc2931e-0508-4b0d-94b3-91d379dd40d1\") " pod="kube-system/coredns-5d78c9869d-s2lw4" Feb 9 19:04:37.222886 kubelet[2378]: I0209 19:04:37.191791 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bc2931e-0508-4b0d-94b3-91d379dd40d1-config-volume\") pod \"coredns-5d78c9869d-s2lw4\" (UID: \"7bc2931e-0508-4b0d-94b3-91d379dd40d1\") " pod="kube-system/coredns-5d78c9869d-s2lw4" Feb 9 19:04:37.222886 kubelet[2378]: I0209 19:04:37.191881 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47ab8a40-26cd-4daf-95ab-fe5e2b39e71b-config-volume\") pod \"coredns-5d78c9869d-hql9l\" (UID: \"47ab8a40-26cd-4daf-95ab-fe5e2b39e71b\") " pod="kube-system/coredns-5d78c9869d-hql9l" Feb 9 19:04:37.222886 kubelet[2378]: I0209 19:04:37.191928 2378 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qthc\" (UniqueName: \"kubernetes.io/projected/47ab8a40-26cd-4daf-95ab-fe5e2b39e71b-kube-api-access-7qthc\") pod \"coredns-5d78c9869d-hql9l\" (UID: \"47ab8a40-26cd-4daf-95ab-fe5e2b39e71b\") " pod="kube-system/coredns-5d78c9869d-hql9l" Feb 9 19:04:37.384434 env[1316]: time="2024-02-09T19:04:37.384380289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s2lw4,Uid:7bc2931e-0508-4b0d-94b3-91d379dd40d1,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:37.526356 env[1316]: time="2024-02-09T19:04:37.525924052Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5d78c9869d-hql9l,Uid:47ab8a40-26cd-4daf-95ab-fe5e2b39e71b,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:37.642420 env[1316]: time="2024-02-09T19:04:37.642362808Z" level=info msg="shim disconnected" id=6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446 Feb 9 19:04:37.642420 env[1316]: time="2024-02-09T19:04:37.642413808Z" level=warning msg="cleaning up after shim disconnected" id=6da716211b2b0a368083ec4008690a2ea36e4535fa66002241115eaad6bfa446 namespace=k8s.io Feb 9 19:04:37.642420 env[1316]: time="2024-02-09T19:04:37.642426808Z" level=info msg="cleaning up dead shim" Feb 9 19:04:37.650938 env[1316]: time="2024-02-09T19:04:37.650888678Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2769 runtime=io.containerd.runc.v2\n" Feb 9 19:04:37.726849 env[1316]: time="2024-02-09T19:04:37.722982870Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 19:04:37.748646 env[1316]: time="2024-02-09T19:04:37.748573780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s2lw4,Uid:7bc2931e-0508-4b0d-94b3-91d379dd40d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8aaa77f5cf2c5ae2f209e4dd5a89c22a3062fd7e0e62d2944a76f7afa7a8fabd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 19:04:37.749023 kubelet[2378]: E0209 19:04:37.748988 2378 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aaa77f5cf2c5ae2f209e4dd5a89c22a3062fd7e0e62d2944a76f7afa7a8fabd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 
19:04:37.749161 kubelet[2378]: E0209 19:04:37.749061 2378 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aaa77f5cf2c5ae2f209e4dd5a89c22a3062fd7e0e62d2944a76f7afa7a8fabd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-s2lw4" Feb 9 19:04:37.749161 kubelet[2378]: E0209 19:04:37.749090 2378 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aaa77f5cf2c5ae2f209e4dd5a89c22a3062fd7e0e62d2944a76f7afa7a8fabd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-s2lw4" Feb 9 19:04:37.749161 kubelet[2378]: E0209 19:04:37.749156 2378 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-s2lw4_kube-system(7bc2931e-0508-4b0d-94b3-91d379dd40d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-s2lw4_kube-system(7bc2931e-0508-4b0d-94b3-91d379dd40d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8aaa77f5cf2c5ae2f209e4dd5a89c22a3062fd7e0e62d2944a76f7afa7a8fabd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-s2lw4" podUID=7bc2931e-0508-4b0d-94b3-91d379dd40d1 Feb 9 19:04:37.768916 env[1316]: time="2024-02-09T19:04:37.768849746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-hql9l,Uid:47ab8a40-26cd-4daf-95ab-fe5e2b39e71b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0fda49e2150b98dc9883702eca0f7ef56343706026a0acf802e7e61d4c9dd6e\": plugin type=\"flannel\" failed 
(add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 9 19:04:37.769411 kubelet[2378]: E0209 19:04:37.769378 2378 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fda49e2150b98dc9883702eca0f7ef56343706026a0acf802e7e61d4c9dd6e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 9 19:04:37.769564 kubelet[2378]: E0209 19:04:37.769433 2378 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fda49e2150b98dc9883702eca0f7ef56343706026a0acf802e7e61d4c9dd6e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-hql9l"
Feb 9 19:04:37.769564 kubelet[2378]: E0209 19:04:37.769459 2378 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0fda49e2150b98dc9883702eca0f7ef56343706026a0acf802e7e61d4c9dd6e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5d78c9869d-hql9l"
Feb 9 19:04:37.769564 kubelet[2378]: E0209 19:04:37.769525 2378 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-hql9l_kube-system(47ab8a40-26cd-4daf-95ab-fe5e2b39e71b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-hql9l_kube-system(47ab8a40-26cd-4daf-95ab-fe5e2b39e71b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0fda49e2150b98dc9883702eca0f7ef56343706026a0acf802e7e61d4c9dd6e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5d78c9869d-hql9l" podUID=47ab8a40-26cd-4daf-95ab-fe5e2b39e71b
Feb 9 19:04:37.794223 env[1316]: time="2024-02-09T19:04:37.794106654Z" level=info msg="CreateContainer within sandbox \"ffba2b107d1944c980e02b9d971bb6b3ab576ad60d84535fcedcc974f7ebde73\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"fde62818cd9ce578f460b84c28a3436f3a3c2dc21a1149163a57bb9efb2eb269\""
Feb 9 19:04:37.796280 env[1316]: time="2024-02-09T19:04:37.796237871Z" level=info msg="StartContainer for \"fde62818cd9ce578f460b84c28a3436f3a3c2dc21a1149163a57bb9efb2eb269\""
Feb 9 19:04:37.813210 systemd[1]: Started cri-containerd-fde62818cd9ce578f460b84c28a3436f3a3c2dc21a1149163a57bb9efb2eb269.scope.
Feb 9 19:04:37.858954 env[1316]: time="2024-02-09T19:04:37.858902286Z" level=info msg="StartContainer for \"fde62818cd9ce578f460b84c28a3436f3a3c2dc21a1149163a57bb9efb2eb269\" returns successfully"
Feb 9 19:04:39.004680 systemd-networkd[1460]: flannel.1: Link UP
Feb 9 19:04:39.004692 systemd-networkd[1460]: flannel.1: Gained carrier
Feb 9 19:04:40.766968 systemd-networkd[1460]: flannel.1: Gained IPv6LL
Feb 9 19:04:49.599981 env[1316]: time="2024-02-09T19:04:49.599921242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-hql9l,Uid:47ab8a40-26cd-4daf-95ab-fe5e2b39e71b,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:49.652161 systemd-networkd[1460]: cni0: Link UP
Feb 9 19:04:49.652171 systemd-networkd[1460]: cni0: Gained carrier
Feb 9 19:04:49.659209 systemd-networkd[1460]: cni0: Lost carrier
Feb 9 19:04:49.672908 systemd-networkd[1460]: veth903675b3: Link UP
Feb 9 19:04:49.680143 kernel: cni0: port 1(veth903675b3) entered blocking state
Feb 9 19:04:49.680270 kernel: cni0: port 1(veth903675b3) entered disabled state
Feb 9 19:04:49.689616 kernel: device veth903675b3 entered promiscuous mode
Feb 9 19:04:49.689726 kernel: cni0: port 1(veth903675b3) entered blocking state
Feb 9 19:04:49.689752 kernel: cni0: port 1(veth903675b3) entered forwarding state
Feb 9 19:04:49.696119 kernel: cni0: port 1(veth903675b3) entered disabled state
Feb 9 19:04:49.707688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth903675b3: link becomes ready
Feb 9 19:04:49.707788 kernel: cni0: port 1(veth903675b3) entered blocking state
Feb 9 19:04:49.707826 kernel: cni0: port 1(veth903675b3) entered forwarding state
Feb 9 19:04:49.711761 systemd-networkd[1460]: veth903675b3: Gained carrier
Feb 9 19:04:49.712062 systemd-networkd[1460]: cni0: Gained carrier
Feb 9 19:04:49.714465 env[1316]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000b08e8), "name":"cbr0", "type":"bridge"}
Feb 9 19:04:49.714465 env[1316]: delegateAdd: netconf sent to delegate plugin:
Feb 9 19:04:49.745800 env[1316]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T19:04:49.745722742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:49.745800 env[1316]: time="2024-02-09T19:04:49.745763043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:49.746106 env[1316]: time="2024-02-09T19:04:49.745777843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:49.746106 env[1316]: time="2024-02-09T19:04:49.745929644Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f pid=3027 runtime=io.containerd.runc.v2
Feb 9 19:04:49.771041 systemd[1]: Started cri-containerd-b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f.scope.
Feb 9 19:04:49.811561 env[1316]: time="2024-02-09T19:04:49.811513394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-hql9l,Uid:47ab8a40-26cd-4daf-95ab-fe5e2b39e71b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f\""
Feb 9 19:04:49.814621 env[1316]: time="2024-02-09T19:04:49.814575715Z" level=info msg="CreateContainer within sandbox \"b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:04:49.843396 env[1316]: time="2024-02-09T19:04:49.843333012Z" level=info msg="CreateContainer within sandbox \"b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5160dc9ff67aeb96b9b053b56d1c2b6f0443eb5759e196857ed41a5a7d703448\""
Feb 9 19:04:49.845335 env[1316]: time="2024-02-09T19:04:49.844084817Z" level=info msg="StartContainer for \"5160dc9ff67aeb96b9b053b56d1c2b6f0443eb5759e196857ed41a5a7d703448\""
Feb 9 19:04:49.862017 systemd[1]: Started cri-containerd-5160dc9ff67aeb96b9b053b56d1c2b6f0443eb5759e196857ed41a5a7d703448.scope.
Feb 9 19:04:49.897718 env[1316]: time="2024-02-09T19:04:49.897658985Z" level=info msg="StartContainer for \"5160dc9ff67aeb96b9b053b56d1c2b6f0443eb5759e196857ed41a5a7d703448\" returns successfully"
Feb 9 19:04:50.637479 systemd[1]: run-containerd-runc-k8s.io-b2ae53aa6916d137afef6f16e9aa23917cd7fbbb2d835e54bf8873e8449d2a5f-runc.aSlyz6.mount: Deactivated successfully.
Feb 9 19:04:50.754859 kubelet[2378]: I0209 19:04:50.754513 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rd8wp" podStartSLOduration=23.618649428 podCreationTimestamp="2024-02-09 19:04:21 +0000 UTC" firstStartedPulling="2024-02-09 19:04:30.775683129 +0000 UTC m=+22.310275535" lastFinishedPulling="2024-02-09 19:04:36.911503094 +0000 UTC m=+28.446095500" observedRunningTime="2024-02-09 19:04:38.728162627 +0000 UTC m=+30.262755133" watchObservedRunningTime="2024-02-09 19:04:50.754469393 +0000 UTC m=+42.289061899"
Feb 9 19:04:50.770008 kubelet[2378]: I0209 19:04:50.769968 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-hql9l" podStartSLOduration=29.769914898 podCreationTimestamp="2024-02-09 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:50.755764402 +0000 UTC m=+42.290356808" watchObservedRunningTime="2024-02-09 19:04:50.769914898 +0000 UTC m=+42.304507404"
Feb 9 19:04:51.135125 systemd-networkd[1460]: veth903675b3: Gained IPv6LL
Feb 9 19:04:51.583001 systemd-networkd[1460]: cni0: Gained IPv6LL
Feb 9 19:04:52.600458 env[1316]: time="2024-02-09T19:04:52.600386095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s2lw4,Uid:7bc2931e-0508-4b0d-94b3-91d379dd40d1,Namespace:kube-system,Attempt:0,}"
Feb 9 19:04:52.665524 systemd-networkd[1460]: veth0a2b35cf: Link UP
Feb 9 19:04:52.673032 kernel: cni0: port 2(veth0a2b35cf) entered blocking state
Feb 9 19:04:52.673137 kernel: cni0: port 2(veth0a2b35cf) entered disabled state
Feb 9 19:04:52.676264 kernel: device veth0a2b35cf entered promiscuous mode
Feb 9 19:04:52.687841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:04:52.687967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0a2b35cf: link becomes ready
Feb 9 19:04:52.687994 kernel: cni0: port 2(veth0a2b35cf) entered blocking state
Feb 9 19:04:52.690638 kernel: cni0: port 2(veth0a2b35cf) entered forwarding state
Feb 9 19:04:52.693940 systemd-networkd[1460]: veth0a2b35cf: Gained carrier
Feb 9 19:04:52.704366 env[1316]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022928), "name":"cbr0", "type":"bridge"}
Feb 9 19:04:52.704366 env[1316]: delegateAdd: netconf sent to delegate plugin:
Feb 9 19:04:52.720881 env[1316]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T19:04:52.720771088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:52.721081 env[1316]: time="2024-02-09T19:04:52.720888189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:52.721081 env[1316]: time="2024-02-09T19:04:52.720906789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:52.721193 env[1316]: time="2024-02-09T19:04:52.721048290Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0 pid=3140 runtime=io.containerd.runc.v2
Feb 9 19:04:52.739659 systemd[1]: Started cri-containerd-5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0.scope.
Feb 9 19:04:52.746582 systemd[1]: run-containerd-runc-k8s.io-5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0-runc.oZPsrN.mount: Deactivated successfully.
Feb 9 19:04:52.790914 env[1316]: time="2024-02-09T19:04:52.790862351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s2lw4,Uid:7bc2931e-0508-4b0d-94b3-91d379dd40d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0\""
Feb 9 19:04:52.795271 env[1316]: time="2024-02-09T19:04:52.795215679Z" level=info msg="CreateContainer within sandbox \"5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:04:52.835725 env[1316]: time="2024-02-09T19:04:52.835672346Z" level=info msg="CreateContainer within sandbox \"5cb4baa0419e303b4671ace7e0f2940e9b73665389801897876c3466555535a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4703e46f1fbd05ddec1b26655664611c6667a4ad49719451a8d5af713397d10\""
Feb 9 19:04:52.838375 env[1316]: time="2024-02-09T19:04:52.836649453Z" level=info msg="StartContainer for \"c4703e46f1fbd05ddec1b26655664611c6667a4ad49719451a8d5af713397d10\""
Feb 9 19:04:52.854424 systemd[1]: Started cri-containerd-c4703e46f1fbd05ddec1b26655664611c6667a4ad49719451a8d5af713397d10.scope.
Feb 9 19:04:52.889947 env[1316]: time="2024-02-09T19:04:52.889893304Z" level=info msg="StartContainer for \"c4703e46f1fbd05ddec1b26655664611c6667a4ad49719451a8d5af713397d10\" returns successfully"
Feb 9 19:04:53.768719 kubelet[2378]: I0209 19:04:53.768508 2378 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-s2lw4" podStartSLOduration=32.768466934 podCreationTimestamp="2024-02-09 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:53.767962431 +0000 UTC m=+45.302554837" watchObservedRunningTime="2024-02-09 19:04:53.768466934 +0000 UTC m=+45.303059340"
Feb 9 19:04:53.887037 systemd-networkd[1460]: veth0a2b35cf: Gained IPv6LL
Feb 9 19:06:31.135966 systemd[1]: Started sshd@5-10.200.8.35:22-10.200.12.6:33284.service.
Feb 9 19:06:31.753392 sshd[3655]: Accepted publickey for core from 10.200.12.6 port 33284 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:31.755098 sshd[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:31.760340 systemd[1]: Started session-8.scope.
Feb 9 19:06:31.760879 systemd-logind[1295]: New session 8 of user core.
Feb 9 19:06:32.366697 sshd[3655]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:32.370056 systemd[1]: sshd@5-10.200.8.35:22-10.200.12.6:33284.service: Deactivated successfully.
Feb 9 19:06:32.371048 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:06:32.371753 systemd-logind[1295]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:06:32.372584 systemd-logind[1295]: Removed session 8.
Feb 9 19:06:34.646670 update_engine[1297]: I0209 19:06:34.646610 1297 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 19:06:34.646670 update_engine[1297]: I0209 19:06:34.646659 1297 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 19:06:34.647279 update_engine[1297]: I0209 19:06:34.646852 1297 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 19:06:34.647471 update_engine[1297]: I0209 19:06:34.647423 1297 omaha_request_params.cc:62] Current group set to lts
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647611 1297 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647624 1297 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647644 1297 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647678 1297 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647753 1297 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647760 1297 omaha_request_action.cc:271] Request:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]:
Feb 9 19:06:34.648117 update_engine[1297]: I0209 19:06:34.647768 1297 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:06:34.648761 locksmithd[1388]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 9 19:06:34.649501 update_engine[1297]: I0209 19:06:34.649183 1297 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:06:34.649501 update_engine[1297]: I0209 19:06:34.649497 1297 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:06:34.671005 update_engine[1297]: E0209 19:06:34.670953 1297 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:06:34.671194 update_engine[1297]: I0209 19:06:34.671105 1297 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 9 19:06:37.473588 systemd[1]: Started sshd@6-10.200.8.35:22-10.200.12.6:50342.service.
Feb 9 19:06:38.095012 sshd[3692]: Accepted publickey for core from 10.200.12.6 port 50342 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:38.096675 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:38.101614 systemd-logind[1295]: New session 9 of user core.
Feb 9 19:06:38.103128 systemd[1]: Started session-9.scope.
Feb 9 19:06:38.586692 sshd[3692]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:38.589369 systemd[1]: sshd@6-10.200.8.35:22-10.200.12.6:50342.service: Deactivated successfully.
Feb 9 19:06:38.590342 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 19:06:38.591124 systemd-logind[1295]: Session 9 logged out. Waiting for processes to exit.
Feb 9 19:06:38.591958 systemd-logind[1295]: Removed session 9.
Feb 9 19:06:43.693269 systemd[1]: Started sshd@7-10.200.8.35:22-10.200.12.6:50356.service.
Feb 9 19:06:44.311918 sshd[3726]: Accepted publickey for core from 10.200.12.6 port 50356 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:44.313576 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:44.319562 systemd-logind[1295]: New session 10 of user core.
Feb 9 19:06:44.320242 systemd[1]: Started session-10.scope.
Feb 9 19:06:44.644905 update_engine[1297]: I0209 19:06:44.644834 1297 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:06:44.645474 update_engine[1297]: I0209 19:06:44.645160 1297 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:06:44.645474 update_engine[1297]: I0209 19:06:44.645408 1297 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:06:44.662558 update_engine[1297]: E0209 19:06:44.662398 1297 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:06:44.662558 update_engine[1297]: I0209 19:06:44.662518 1297 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 9 19:06:44.812897 sshd[3726]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:44.816160 systemd[1]: sshd@7-10.200.8.35:22-10.200.12.6:50356.service: Deactivated successfully.
Feb 9 19:06:44.817273 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:06:44.818241 systemd-logind[1295]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:06:44.819261 systemd-logind[1295]: Removed session 10.
Feb 9 19:06:44.916799 systemd[1]: Started sshd@8-10.200.8.35:22-10.200.12.6:50358.service.
Feb 9 19:06:45.548751 sshd[3759]: Accepted publickey for core from 10.200.12.6 port 50358 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:45.550201 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:45.555463 systemd[1]: Started session-11.scope.
Feb 9 19:06:45.556265 systemd-logind[1295]: New session 11 of user core.
Feb 9 19:06:46.193600 sshd[3759]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:46.196841 systemd[1]: sshd@8-10.200.8.35:22-10.200.12.6:50358.service: Deactivated successfully.
Feb 9 19:06:46.198375 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:06:46.198570 systemd-logind[1295]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:06:46.199928 systemd-logind[1295]: Removed session 11.
Feb 9 19:06:46.298133 systemd[1]: Started sshd@9-10.200.8.35:22-10.200.12.6:50372.service.
Feb 9 19:06:46.910380 sshd[3770]: Accepted publickey for core from 10.200.12.6 port 50372 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:46.911732 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:46.916515 systemd-logind[1295]: New session 12 of user core.
Feb 9 19:06:46.917422 systemd[1]: Started session-12.scope.
Feb 9 19:06:47.400772 sshd[3770]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:47.403639 systemd[1]: sshd@9-10.200.8.35:22-10.200.12.6:50372.service: Deactivated successfully.
Feb 9 19:06:47.404736 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:06:47.405482 systemd-logind[1295]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:06:47.406306 systemd-logind[1295]: Removed session 12.
Feb 9 19:06:52.507555 systemd[1]: Started sshd@10-10.200.8.35:22-10.200.12.6:46716.service.
Feb 9 19:06:53.173670 sshd[3803]: Accepted publickey for core from 10.200.12.6 port 46716 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:53.175156 sshd[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:53.180094 systemd-logind[1295]: New session 13 of user core.
Feb 9 19:06:53.180995 systemd[1]: Started session-13.scope.
Feb 9 19:06:53.714692 sshd[3803]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:53.718067 systemd[1]: sshd@10-10.200.8.35:22-10.200.12.6:46716.service: Deactivated successfully.
Feb 9 19:06:53.719239 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:06:53.720127 systemd-logind[1295]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:06:53.721186 systemd-logind[1295]: Removed session 13.
Feb 9 19:06:53.820031 systemd[1]: Started sshd@11-10.200.8.35:22-10.200.12.6:46726.service.
Feb 9 19:06:54.644985 update_engine[1297]: I0209 19:06:54.644857 1297 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:06:54.645452 update_engine[1297]: I0209 19:06:54.645174 1297 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:06:54.645452 update_engine[1297]: I0209 19:06:54.645431 1297 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:06:54.667538 update_engine[1297]: E0209 19:06:54.667474 1297 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:06:54.667750 update_engine[1297]: I0209 19:06:54.667639 1297 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 19:06:54.799574 sshd[3815]: Accepted publickey for core from 10.200.12.6 port 46726 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:54.801014 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:54.805871 systemd-logind[1295]: New session 14 of user core.
Feb 9 19:06:54.806395 systemd[1]: Started session-14.scope.
Feb 9 19:06:55.638214 sshd[3815]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:55.641605 systemd[1]: sshd@11-10.200.8.35:22-10.200.12.6:46726.service: Deactivated successfully.
Feb 9 19:06:55.642749 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:06:55.643688 systemd-logind[1295]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:06:55.644677 systemd-logind[1295]: Removed session 14.
Feb 9 19:06:55.814855 systemd[1]: Started sshd@12-10.200.8.35:22-10.200.12.6:46734.service.
Feb 9 19:06:57.124773 sshd[3845]: Accepted publickey for core from 10.200.12.6 port 46734 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:57.126441 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:57.131395 systemd-logind[1295]: New session 15 of user core.
Feb 9 19:06:57.132102 systemd[1]: Started session-15.scope.
Feb 9 19:06:58.632375 sshd[3845]: pam_unix(sshd:session): session closed for user core
Feb 9 19:06:58.635670 systemd[1]: sshd@12-10.200.8.35:22-10.200.12.6:46734.service: Deactivated successfully.
Feb 9 19:06:58.637053 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 19:06:58.637104 systemd-logind[1295]: Session 15 logged out. Waiting for processes to exit.
Feb 9 19:06:58.638725 systemd-logind[1295]: Removed session 15.
Feb 9 19:06:58.737594 systemd[1]: Started sshd@13-10.200.8.35:22-10.200.12.6:52264.service.
Feb 9 19:06:59.413366 sshd[3865]: Accepted publickey for core from 10.200.12.6 port 52264 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:06:59.415111 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:06:59.420955 systemd[1]: Started session-16.scope.
Feb 9 19:06:59.421560 systemd-logind[1295]: New session 16 of user core.
Feb 9 19:07:00.112106 sshd[3865]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:00.115619 systemd[1]: sshd@13-10.200.8.35:22-10.200.12.6:52264.service: Deactivated successfully.
Feb 9 19:07:00.116445 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 19:07:00.117430 systemd-logind[1295]: Session 16 logged out. Waiting for processes to exit.
Feb 9 19:07:00.118287 systemd-logind[1295]: Removed session 16.
Feb 9 19:07:00.218088 systemd[1]: Started sshd@14-10.200.8.35:22-10.200.12.6:52270.service.
Feb 9 19:07:00.835993 sshd[3896]: Accepted publickey for core from 10.200.12.6 port 52270 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:00.837566 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:00.843295 systemd[1]: Started session-17.scope.
Feb 9 19:07:00.843930 systemd-logind[1295]: New session 17 of user core.
Feb 9 19:07:01.331515 sshd[3896]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:01.334336 systemd[1]: sshd@14-10.200.8.35:22-10.200.12.6:52270.service: Deactivated successfully.
Feb 9 19:07:01.335340 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 19:07:01.336137 systemd-logind[1295]: Session 17 logged out. Waiting for processes to exit.
Feb 9 19:07:01.337024 systemd-logind[1295]: Removed session 17.
Feb 9 19:07:04.645027 update_engine[1297]: I0209 19:07:04.644961 1297 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:07:04.645633 update_engine[1297]: I0209 19:07:04.645288 1297 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:07:04.645633 update_engine[1297]: I0209 19:07:04.645572 1297 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:07:04.674614 update_engine[1297]: E0209 19:07:04.674557 1297 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:07:04.674852 update_engine[1297]: I0209 19:07:04.674697 1297 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:07:04.674852 update_engine[1297]: I0209 19:07:04.674711 1297 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:07:04.674852 update_engine[1297]: E0209 19:07:04.674799 1297 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 19:07:04.674852 update_engine[1297]: I0209 19:07:04.674837 1297 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 19:07:04.674852 update_engine[1297]: I0209 19:07:04.674844 1297 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:04.674852 update_engine[1297]: I0209 19:07:04.674849 1297 update_attempter.cc:306] Processing Done.
Feb 9 19:07:04.675098 update_engine[1297]: E0209 19:07:04.674866 1297 update_attempter.cc:619] Update failed.
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.674872 1297 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.674877 1297 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.674882 1297 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.674973 1297 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.674997 1297 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.675002 1297 omaha_request_action.cc:271] Request:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]:
Feb 9 19:07:04.675098 update_engine[1297]: I0209 19:07:04.675009 1297 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:07:04.675580 update_engine[1297]: I0209 19:07:04.675187 1297 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:07:04.675580 update_engine[1297]: I0209 19:07:04.675376 1297 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:07:04.675759 locksmithd[1388]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 19:07:04.745590 update_engine[1297]: E0209 19:07:04.745539 1297 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745689 1297 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745704 1297 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745713 1297 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745719 1297 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745724 1297 update_attempter.cc:306] Processing Done.
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745747 1297 update_attempter.cc:310] Error event sent.
Feb 9 19:07:04.745833 update_engine[1297]: I0209 19:07:04.745759 1297 update_check_scheduler.cc:74] Next update check in 44m39s
Feb 9 19:07:04.746283 locksmithd[1388]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 19:07:06.437843 systemd[1]: Started sshd@15-10.200.8.35:22-10.200.12.6:52284.service.
Feb 9 19:07:07.055694 sshd[3934]: Accepted publickey for core from 10.200.12.6 port 52284 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:07.057373 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:07.061795 systemd-logind[1295]: New session 18 of user core.
Feb 9 19:07:07.063085 systemd[1]: Started session-18.scope.
Feb 9 19:07:07.562226 sshd[3934]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:07.565723 systemd[1]: sshd@15-10.200.8.35:22-10.200.12.6:52284.service: Deactivated successfully.
Feb 9 19:07:07.566974 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 19:07:07.567845 systemd-logind[1295]: Session 18 logged out. Waiting for processes to exit.
Feb 9 19:07:07.568755 systemd-logind[1295]: Removed session 18.
Feb 9 19:07:12.690515 systemd[1]: Started sshd@16-10.200.8.35:22-10.200.12.6:38634.service.
Feb 9 19:07:13.313108 sshd[3969]: Accepted publickey for core from 10.200.12.6 port 38634 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:13.314806 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:13.319548 systemd-logind[1295]: New session 19 of user core.
Feb 9 19:07:13.320437 systemd[1]: Started session-19.scope.
Feb 9 19:07:13.812402 sshd[3969]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:13.815889 systemd[1]: sshd@16-10.200.8.35:22-10.200.12.6:38634.service: Deactivated successfully.
Feb 9 19:07:13.816970 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 19:07:13.817931 systemd-logind[1295]: Session 19 logged out. Waiting for processes to exit.
Feb 9 19:07:13.818928 systemd-logind[1295]: Removed session 19.
Feb 9 19:07:18.920312 systemd[1]: Started sshd@17-10.200.8.35:22-10.200.12.6:34994.service.
Feb 9 19:07:19.541198 sshd[4002]: Accepted publickey for core from 10.200.12.6 port 34994 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:07:19.578597 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:07:19.584474 systemd-logind[1295]: New session 20 of user core.
Feb 9 19:07:19.585671 systemd[1]: Started session-20.scope.
Feb 9 19:07:20.038398 sshd[4002]: pam_unix(sshd:session): session closed for user core
Feb 9 19:07:20.041555 systemd[1]: sshd@17-10.200.8.35:22-10.200.12.6:34994.service: Deactivated successfully.
Feb 9 19:07:20.042466 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:07:20.042899 systemd-logind[1295]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:07:20.043826 systemd-logind[1295]: Removed session 20.
Feb 9 19:07:36.733344 systemd[1]: cri-containerd-daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240.scope: Deactivated successfully.
Feb 9 19:07:36.733660 systemd[1]: cri-containerd-daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240.scope: Consumed 3.260s CPU time.
Feb 9 19:07:36.755242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240-rootfs.mount: Deactivated successfully.
Feb 9 19:07:36.776055 env[1316]: time="2024-02-09T19:07:36.775989380Z" level=info msg="shim disconnected" id=daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240
Feb 9 19:07:36.776055 env[1316]: time="2024-02-09T19:07:36.776051981Z" level=warning msg="cleaning up after shim disconnected" id=daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240 namespace=k8s.io
Feb 9 19:07:36.776668 env[1316]: time="2024-02-09T19:07:36.776068581Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:36.784689 env[1316]: time="2024-02-09T19:07:36.784636066Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:37.081444 kubelet[2378]: I0209 19:07:37.081308 2378 scope.go:115] "RemoveContainer" containerID="daedef54ab36d1252df8501c16e5737b2b5de88aec2dd2bc693782e45a526240"
Feb 9 19:07:37.084481 env[1316]: time="2024-02-09T19:07:37.084403133Z" level=info msg="CreateContainer within sandbox \"6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:07:37.112469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560382078.mount: Deactivated successfully.
Feb 9 19:07:37.131583 env[1316]: time="2024-02-09T19:07:37.131518598Z" level=info msg="CreateContainer within sandbox \"6bb257ba0db1d25a5d49214fdec38e8211ef5d475325208debc74043caedea44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af0c5004c12e725846619bb7726571724db277c4631f58cf11d888a485e0d03b\""
Feb 9 19:07:37.132228 env[1316]: time="2024-02-09T19:07:37.132173004Z" level=info msg="StartContainer for \"af0c5004c12e725846619bb7726571724db277c4631f58cf11d888a485e0d03b\""
Feb 9 19:07:37.152909 systemd[1]: Started cri-containerd-af0c5004c12e725846619bb7726571724db277c4631f58cf11d888a485e0d03b.scope.
Feb 9 19:07:37.208903 env[1316]: time="2024-02-09T19:07:37.208795960Z" level=info msg="StartContainer for \"af0c5004c12e725846619bb7726571724db277c4631f58cf11d888a485e0d03b\" returns successfully"
Feb 9 19:07:39.857218 systemd[1]: cri-containerd-4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5.scope: Deactivated successfully.
Feb 9 19:07:39.857547 systemd[1]: cri-containerd-4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5.scope: Consumed 1.758s CPU time.
Feb 9 19:07:39.878120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5-rootfs.mount: Deactivated successfully.
Feb 9 19:07:39.921142 env[1316]: time="2024-02-09T19:07:39.921081656Z" level=info msg="shim disconnected" id=4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5
Feb 9 19:07:39.921142 env[1316]: time="2024-02-09T19:07:39.921136756Z" level=warning msg="cleaning up after shim disconnected" id=4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5 namespace=k8s.io
Feb 9 19:07:39.921142 env[1316]: time="2024-02-09T19:07:39.921149256Z" level=info msg="cleaning up dead shim"
Feb 9 19:07:39.929397 env[1316]: time="2024-02-09T19:07:39.929341736Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4193 runtime=io.containerd.runc.v2\n"
Feb 9 19:07:40.089916 kubelet[2378]: I0209 19:07:40.089879 2378 scope.go:115] "RemoveContainer" containerID="4c05ec97af5bc97e37cc103638e0d74d258150ca95feddb2d20ee6b4179bd3e5"
Feb 9 19:07:40.092479 env[1316]: time="2024-02-09T19:07:40.092433323Z" level=info msg="CreateContainer within sandbox \"ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:07:40.121169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474798102.mount: Deactivated successfully.
Feb 9 19:07:40.129001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922635856.mount: Deactivated successfully.
Feb 9 19:07:40.137884 env[1316]: time="2024-02-09T19:07:40.137796063Z" level=info msg="CreateContainer within sandbox \"ff5ad1cc145a82ca5f103820a7c8db6fc6443fa2d8751b67fdd779347108fc4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5332ab738232078e50c1a64d040a6aba99908d847a6d7dece0f3ac15321e9966\""
Feb 9 19:07:40.138426 env[1316]: time="2024-02-09T19:07:40.138388569Z" level=info msg="StartContainer for \"5332ab738232078e50c1a64d040a6aba99908d847a6d7dece0f3ac15321e9966\""
Feb 9 19:07:40.156171 systemd[1]: Started cri-containerd-5332ab738232078e50c1a64d040a6aba99908d847a6d7dece0f3ac15321e9966.scope.
Feb 9 19:07:40.210942 env[1316]: time="2024-02-09T19:07:40.210874572Z" level=info msg="StartContainer for \"5332ab738232078e50c1a64d040a6aba99908d847a6d7dece0f3ac15321e9966\" returns successfully"
Feb 9 19:07:40.354089 kubelet[2378]: E0209 19:07:40.353865 2378 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.35:52094->10.200.8.28:2379: read: connection timed out"
Feb 9 19:07:41.120193 kubelet[2378]: E0209 19:07:41.120018 2378 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-21fad7fabd.17b24757b0f17343", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-21fad7fabd", UID:"438ad7df7064ff34abe840769ae22bd4", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-21fad7fabd"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 7, 30, 683179843, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 7, 30, 683179843, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.35:51900->10.200.8.28:2379: read: connection timed out' (will not retry!)
Feb 9 19:07:47.136147 kubelet[2378]: I0209 19:07:47.136106 2378 status_manager.go:809] "Failed to get status for pod" podUID=438ad7df7064ff34abe840769ae22bd4 pod="kube-system/kube-apiserver-ci-3510.3.2-a-21fad7fabd" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.35:52004->10.200.8.28:2379: read: connection timed out"
Feb 9 19:07:50.354949 kubelet[2378]: E0209 19:07:50.354909 2378 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-21fad7fabd?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 19:07:55.269591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.270046 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.280215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.280514 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.285836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.296275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.296515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.306643 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.306942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.316022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.316275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.325423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.337876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.338196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.347604 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.347963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.357365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.357630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.362672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.367923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.373373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.380267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.385209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.390324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.410482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#8 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.410846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#13 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.422131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.422444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.427818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.433673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.445401 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.445687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.456903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.457208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.462418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.468108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.488901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.489221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#7 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.500349 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.500667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.506076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.511657 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#18 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.518527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:07:55.523590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001