Feb 12 19:40:44.041553 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:40:44.041578 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:40:44.041588 kernel: BIOS-provided physical RAM map:
Feb 12 19:40:44.041594 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:40:44.041599 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 12 19:40:44.041607 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 12 19:40:44.041617 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 12 19:40:44.041626 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 12 19:40:44.041631 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 12 19:40:44.041637 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 12 19:40:44.041643 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 12 19:40:44.041648 kernel: printk: bootconsole [earlyser0] enabled
Feb 12 19:40:44.041657 kernel: NX (Execute Disable) protection: active
Feb 12 19:40:44.041663 kernel: efi: EFI v2.70 by Microsoft
Feb 12 19:40:44.041675 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 12 19:40:44.041682 kernel: random: crng init done
Feb 12 19:40:44.041688 kernel: SMBIOS 3.1.0 present.
Feb 12 19:40:44.041694 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:40:44.041704 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 12 19:40:44.041710 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 12 19:40:44.041720 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 12 19:40:44.041726 kernel: Hyper-V: Nested features: 0x1e0101
Feb 12 19:40:44.041734 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 12 19:40:44.041742 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 12 19:40:44.041749 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 12 19:40:44.041758 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 12 19:40:44.041766 kernel: tsc: Detected 2593.906 MHz processor
Feb 12 19:40:44.041772 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:40:44.041781 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:40:44.041788 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 12 19:40:44.041796 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:40:44.041804 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 12 19:40:44.041812 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 12 19:40:44.041819 kernel: Using GB pages for direct mapping
Feb 12 19:40:44.041828 kernel: Secure boot disabled
Feb 12 19:40:44.041835 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:40:44.041844 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 12 19:40:44.041850 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041857 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041866 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:40:44.041879 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 12 19:40:44.041888 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041895 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041902 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041911 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041918 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041930 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041936 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:40:44.041943 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 12 19:40:44.041952 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 12 19:40:44.041968 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 12 19:40:44.041974 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 12 19:40:44.041981 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 12 19:40:44.041987 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 12 19:40:44.041996 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 12 19:40:44.042003 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 12 19:40:44.042010 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 12 19:40:44.042016 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 12 19:40:44.042023 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:40:44.042030 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:40:44.042036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 12 19:40:44.042043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 12 19:40:44.042049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 12 19:40:44.042058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 12 19:40:44.042065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 12 19:40:44.042072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 12 19:40:44.042078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 12 19:40:44.042085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 12 19:40:44.042091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 12 19:40:44.042098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 12 19:40:44.042105 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 12 19:40:44.042111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 12 19:40:44.042123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 12 19:40:44.042130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 12 19:40:44.042137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 12 19:40:44.042143 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 12 19:40:44.042153 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 12 19:40:44.042160 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 12 19:40:44.042169 kernel: Zone ranges:
Feb 12 19:40:44.042177 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:40:44.042184 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 19:40:44.042192 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:40:44.042201 kernel: Movable zone start for each node
Feb 12 19:40:44.042209 kernel: Early memory node ranges
Feb 12 19:40:44.042218 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:40:44.042225 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 12 19:40:44.042232 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 12 19:40:44.042241 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:40:44.042249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 12 19:40:44.042257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:40:44.042267 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:40:44.042274 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 12 19:40:44.042281 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 12 19:40:44.042290 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 12 19:40:44.042297 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:40:44.042307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:40:44.042314 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:40:44.042321 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 12 19:40:44.042328 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:40:44.042339 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 12 19:40:44.042348 kernel: Booting paravirtualized kernel on Hyper-V
Feb 12 19:40:44.042356 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:40:44.042362 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:40:44.042369 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:40:44.042379 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:40:44.042386 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:40:44.042395 kernel: Hyper-V: PV spinlocks enabled
Feb 12 19:40:44.042403 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:40:44.042412 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 12 19:40:44.042419 kernel: Policy zone: Normal
Feb 12 19:40:44.042430 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:40:44.042438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:40:44.042447 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 19:40:44.042453 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:40:44.042461 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:40:44.042470 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 12 19:40:44.042479 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:40:44.042488 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:40:44.042503 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:40:44.042512 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:40:44.042520 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:40:44.042528 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:40:44.042538 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:40:44.042545 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:40:44.042556 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:40:44.042563 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:40:44.042570 kernel: Using NULL legacy PIC
Feb 12 19:40:44.042582 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 12 19:40:44.042590 kernel: Console: colour dummy device 80x25
Feb 12 19:40:44.042600 kernel: printk: console [tty1] enabled
Feb 12 19:40:44.042607 kernel: printk: console [ttyS0] enabled
Feb 12 19:40:44.042614 kernel: printk: bootconsole [earlyser0] disabled
Feb 12 19:40:44.042623 kernel: ACPI: Core revision 20210730
Feb 12 19:40:44.042633 kernel: Failed to register legacy timer interrupt
Feb 12 19:40:44.042641 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:40:44.042649 kernel: Hyper-V: Using IPI hypercalls
Feb 12 19:40:44.042658 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 12 19:40:44.042666 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 19:40:44.042673 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 19:40:44.042682 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:40:44.042690 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:40:44.042699 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:40:44.042709 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:40:44.042717 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 19:40:44.042724 kernel: RETBleed: Vulnerable
Feb 12 19:40:44.042733 kernel: Speculative Store Bypass: Vulnerable
Feb 12 19:40:44.042741 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:40:44.042750 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:40:44.042759 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 19:40:44.042766 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:40:44.042773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:40:44.042783 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:40:44.042796 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 19:40:44.042803 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 19:40:44.042810 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 19:40:44.042819 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:40:44.042828 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 12 19:40:44.042838 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 12 19:40:44.042845 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 12 19:40:44.042853 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 12 19:40:44.042862 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:40:44.042870 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:40:44.042879 kernel: LSM: Security Framework initializing
Feb 12 19:40:44.042887 kernel: SELinux: Initializing.
Feb 12 19:40:44.042897 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:40:44.042907 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:40:44.042915 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 19:40:44.042924 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 19:40:44.042932 kernel: signal: max sigframe size: 3632
Feb 12 19:40:44.042940 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:40:44.042949 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:40:44.042963 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:40:44.042971 kernel: x86: Booting SMP configuration:
Feb 12 19:40:44.042978 kernel: .... node #0, CPUs: #1
Feb 12 19:40:44.042991 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 12 19:40:44.042998 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 19:40:44.043009 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:40:44.043016 kernel: smpboot: Max logical packages: 1
Feb 12 19:40:44.043023 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 12 19:40:44.043032 kernel: devtmpfs: initialized
Feb 12 19:40:44.043041 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:40:44.043050 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 12 19:40:44.043060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:40:44.043067 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:40:44.043078 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:40:44.043085 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:40:44.043096 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:40:44.043104 kernel: audit: type=2000 audit(1707766843.023:1): state=initialized audit_enabled=0 res=1
Feb 12 19:40:44.043111 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:40:44.043119 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:40:44.043128 kernel: cpuidle: using governor menu
Feb 12 19:40:44.043141 kernel: ACPI: bus type PCI registered
Feb 12 19:40:44.043148 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:40:44.043155 kernel: dca service started, version 1.12.1
Feb 12 19:40:44.043165 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:40:44.043173 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:40:44.043182 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:40:44.043190 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:40:44.043197 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:40:44.043205 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:40:44.043217 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:40:44.043224 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:40:44.043234 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:40:44.043242 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:40:44.043250 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:40:44.043260 kernel: ACPI: Interpreter enabled
Feb 12 19:40:44.043268 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:40:44.043277 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:40:44.043284 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:40:44.043297 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 12 19:40:44.043304 kernel: iommu: Default domain type: Translated
Feb 12 19:40:44.043314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:40:44.043322 kernel: vgaarb: loaded
Feb 12 19:40:44.043329 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:40:44.043337 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 12 19:40:44.043346 kernel: PTP clock support registered
Feb 12 19:40:44.043354 kernel: Registered efivars operations
Feb 12 19:40:44.043364 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:40:44.043371 kernel: PCI: System does not support PCI
Feb 12 19:40:44.043380 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 12 19:40:44.043391 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:40:44.043399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:40:44.043408 kernel: pnp: PnP ACPI init
Feb 12 19:40:44.043415 kernel: pnp: PnP ACPI: found 3 devices
Feb 12 19:40:44.043423 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:40:44.043430 kernel: NET: Registered PF_INET protocol family
Feb 12 19:40:44.043440 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:40:44.043451 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 19:40:44.043460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:40:44.043467 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:40:44.043475 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 19:40:44.043485 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 19:40:44.043493 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:40:44.043502 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:40:44.043509 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:40:44.043517 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:40:44.043530 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:40:44.043539 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 19:40:44.043547 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 12 19:40:44.043555 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:40:44.043565 kernel: Initialise system trusted keyrings
Feb 12 19:40:44.043573 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 19:40:44.043583 kernel: Key type asymmetric registered
Feb 12 19:40:44.043590 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:40:44.043597 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:40:44.043609 kernel: io scheduler mq-deadline registered
Feb 12 19:40:44.043618 kernel: io scheduler kyber registered
Feb 12 19:40:44.043627 kernel: io scheduler bfq registered
Feb 12 19:40:44.043634 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:40:44.043643 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:40:44.043652 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:40:44.043661 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 19:40:44.043670 kernel: i8042: PNP: No PS/2 controller found.
Feb 12 19:40:44.043792 kernel: rtc_cmos 00:02: registered as rtc0
Feb 12 19:40:44.043878 kernel: rtc_cmos 00:02: setting system clock to 2024-02-12T19:40:43 UTC (1707766843)
Feb 12 19:40:44.043966 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 12 19:40:44.043976 kernel: fail to initialize ptp_kvm
Feb 12 19:40:44.043983 kernel: intel_pstate: CPU model not supported
Feb 12 19:40:44.043994 kernel: efifb: probing for efifb
Feb 12 19:40:44.044002 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:40:44.044011 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:40:44.044018 kernel: efifb: scrolling: redraw
Feb 12 19:40:44.044028 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:40:44.044039 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:40:44.044046 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:40:44.044055 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:40:44.044064 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:40:44.044071 kernel: Segment Routing with IPv6
Feb 12 19:40:44.044080 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:40:44.044089 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:40:44.044098 kernel: Key type dns_resolver registered
Feb 12 19:40:44.044109 kernel: IPI shorthand broadcast: enabled
Feb 12 19:40:44.044116 kernel: sched_clock: Marking stable (766473900, 23776500)->(994326000, -204075600)
Feb 12 19:40:44.044125 kernel: registered taskstats version 1
Feb 12 19:40:44.044133 kernel: Loading compiled-in X.509 certificates
Feb 12 19:40:44.044143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:40:44.044151 kernel: Key type .fscrypt registered
Feb 12 19:40:44.044158 kernel: Key type fscrypt-provisioning registered
Feb 12 19:40:44.044167 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:40:44.044178 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:40:44.044188 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:40:44.044195 kernel: ima: No architecture policies found
Feb 12 19:40:44.044203 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:40:44.044212 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:40:44.044221 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:40:44.044230 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:40:44.044237 kernel: Run /init as init process
Feb 12 19:40:44.044246 kernel: with arguments:
Feb 12 19:40:44.044255 kernel: /init
Feb 12 19:40:44.044267 kernel: with environment:
Feb 12 19:40:44.044274 kernel: HOME=/
Feb 12 19:40:44.044281 kernel: TERM=linux
Feb 12 19:40:44.044292 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:40:44.044302 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:40:44.044313 systemd[1]: Detected virtualization microsoft.
Feb 12 19:40:44.044320 systemd[1]: Detected architecture x86-64.
Feb 12 19:40:44.044333 systemd[1]: Running in initrd.
Feb 12 19:40:44.044341 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:40:44.044351 systemd[1]: Hostname set to <localhost>.
Feb 12 19:40:44.044359 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:40:44.044368 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:40:44.044377 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:40:44.044387 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:40:44.044396 systemd[1]: Reached target paths.target.
Feb 12 19:40:44.044403 systemd[1]: Reached target slices.target.
Feb 12 19:40:44.044416 systemd[1]: Reached target swap.target.
Feb 12 19:40:44.044424 systemd[1]: Reached target timers.target.
Feb 12 19:40:44.044435 systemd[1]: Listening on iscsid.socket.
Feb 12 19:40:44.044442 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:40:44.044452 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:40:44.044461 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:40:44.044472 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:40:44.044482 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:40:44.044492 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:40:44.044500 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:40:44.044511 systemd[1]: Reached target sockets.target.
Feb 12 19:40:44.044519 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:40:44.044527 systemd[1]: Finished network-cleanup.service.
Feb 12 19:40:44.044537 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:40:44.044546 systemd[1]: Starting systemd-journald.service...
Feb 12 19:40:44.044557 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:40:44.044566 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:40:44.044577 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:40:44.044586 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:40:44.044596 kernel: audit: type=1130 audit(1707766844.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.044607 systemd-journald[183]: Journal started
Feb 12 19:40:44.044653 systemd-journald[183]: Runtime Journal (/run/log/journal/9aa90488cbe048948fb68b0065682b31) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:40:44.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.025025 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:40:44.061109 systemd[1]: Started systemd-journald.service.
Feb 12 19:40:44.062627 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:40:44.086897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:40:44.086923 kernel: audit: type=1130 audit(1707766844.061:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.095316 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 12 19:40:44.096341 kernel: Bridge firewalling registered
Feb 12 19:40:44.098017 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:40:44.103747 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:40:44.112135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:40:44.159838 kernel: audit: type=1130 audit(1707766844.097:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.159869 kernel: audit: type=1130 audit(1707766844.102:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.132142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:40:44.139763 systemd-resolved[185]: Positive Trust Anchors:
Feb 12 19:40:44.196061 kernel: SCSI subsystem initialized
Feb 12 19:40:44.196085 kernel: audit: type=1130 audit(1707766844.159:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.196096 kernel: audit: type=1130 audit(1707766844.164:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.196106 kernel: audit: type=1130 audit(1707766844.171:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.139776 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:40:44.139825 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:40:44.241398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:40:44.241427 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:40:44.241444 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:40:44.143493 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 12 19:40:44.159981 systemd[1]: Started systemd-resolved.service.
Feb 12 19:40:44.165028 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:40:44.171256 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:40:44.250459 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:40:44.256619 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 12 19:40:44.259892 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:40:44.275511 kernel: audit: type=1130 audit(1707766844.259:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.274098 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:40:44.285541 dracut-cmdline[200]: dracut-dracut-053
Feb 12 19:40:44.289574 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:40:44.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.293512 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:40:44.323102 kernel: audit: type=1130 audit(1707766844.306:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.362982 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:40:44.376979 kernel: iscsi: registered transport (tcp)
Feb 12 19:40:44.401599 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:40:44.401662 kernel: QLogic iSCSI HBA Driver
Feb 12 19:40:44.430549 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:40:44.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.434009 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:40:44.486981 kernel: raid6: avx512x4 gen() 18867 MB/s
Feb 12 19:40:44.506973 kernel: raid6: avx512x4 xor() 8682 MB/s
Feb 12 19:40:44.526969 kernel: raid6: avx512x2 gen() 18730 MB/s
Feb 12 19:40:44.547977 kernel: raid6: avx512x2 xor() 29829 MB/s
Feb 12 19:40:44.567970 kernel: raid6: avx512x1 gen() 18747 MB/s
Feb 12 19:40:44.587971 kernel: raid6: avx512x1 xor() 26916 MB/s
Feb 12 19:40:44.608972 kernel: raid6: avx2x4 gen() 18682 MB/s
Feb 12 19:40:44.628971 kernel: raid6: avx2x4 xor() 8034 MB/s
Feb 12 19:40:44.648970 kernel: raid6: avx2x2 gen() 18635 MB/s
Feb 12 19:40:44.669975 kernel: raid6: avx2x2 xor() 22267 MB/s
Feb 12 19:40:44.689969 kernel: raid6: avx2x1 gen() 14044 MB/s
Feb 12 19:40:44.709969 kernel: raid6: avx2x1 xor() 19466 MB/s
Feb 12 19:40:44.730972 kernel: raid6: sse2x4 gen() 11742 MB/s
Feb 12 19:40:44.751969 kernel: raid6: sse2x4 xor() 7371 MB/s
Feb 12 19:40:44.771971 kernel: raid6: sse2x2 gen() 12896 MB/s
Feb 12 19:40:44.792965 kernel: raid6: sse2x2 xor() 7524 MB/s
Feb 12 19:40:44.812971 kernel: raid6: sse2x1 gen() 11651 MB/s
Feb 12 19:40:44.836755 kernel: raid6: sse2x1 xor() 5923 MB/s
Feb 12 19:40:44.836785 kernel: raid6: using algorithm avx512x4 gen() 18867 MB/s
Feb 12 19:40:44.836796 kernel: raid6: .... xor() 8682 MB/s, rmw enabled
Feb 12 19:40:44.843527 kernel: raid6: using avx512x2 recovery algorithm
Feb 12 19:40:44.859980 kernel: xor: automatically using best checksumming function avx
Feb 12 19:40:44.955983 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 19:40:44.963923 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:40:44.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.967000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:40:44.967000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:40:44.968693 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:40:44.983859 systemd-udevd[382]: Using default interface naming scheme 'v252'.
Feb 12 19:40:44.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:44.990880 systemd[1]: Started systemd-udevd.service.
Feb 12 19:40:44.996327 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:40:45.013921 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation
Feb 12 19:40:45.044013 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:40:45.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:45.050076 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:40:45.084841 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:40:45.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:45.131977 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:40:45.168885 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 19:40:45.168948 kernel: AES CTR mode by8 optimization enabled
Feb 12 19:40:45.168974 kernel: hv_vmbus: Vmbus version:5.2
Feb 12 19:40:45.190978 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:40:45.199529 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:40:45.199581 kernel: scsi host0: storvsc_host_t
Feb 12 19:40:45.202774 kernel: scsi host1: storvsc_host_t
Feb 12 19:40:45.202828 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:40:45.213141 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:40:45.213190 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:40:45.221979 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:40:45.230476 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:40:45.247536 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:40:45.247582 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:40:45.253415 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:40:45.269061 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:40:45.269310 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:40:45.271985 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:40:45.297847 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:40:45.298105 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:40:45.306778 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:40:45.306937 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:40:45.307059 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:40:45.311974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:40:45.316762 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:40:45.356983 kernel: hv_netvsc 000d3a66-1520-000d-3a66-1520000d3a66 eth0: VF slot 1 added
Feb 12 19:40:45.364996 kernel: hv_vmbus: registering driver hv_pci
Feb 12 19:40:45.371979 kernel: hv_pci 857df8a6-c8ee-4f76-b3e7-041652601d71: PCI VMBus probing: Using version 0x10004
Feb 12 19:40:45.383220 kernel: hv_pci 857df8a6-c8ee-4f76-b3e7-041652601d71: PCI host bridge to bus c8ee:00
Feb 12 19:40:45.383366 kernel: pci_bus c8ee:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 12 19:40:45.383492 kernel: pci_bus c8ee:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 12 19:40:45.394304 kernel: pci c8ee:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 12 19:40:45.404896 kernel: pci c8ee:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 12 19:40:45.421415 kernel: pci c8ee:00:02.0: enabling Extended Tags
Feb 12 19:40:45.435976 kernel: pci c8ee:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c8ee:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 12 19:40:45.436182 kernel: pci_bus c8ee:00: busn_res: [bus 00-ff] end is updated to 00
Feb 12 19:40:45.445007 kernel: pci c8ee:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 12 19:40:45.537985 kernel: mlx5_core c8ee:00:02.0: firmware version: 14.30.1350
Feb 12 19:40:45.695978 kernel: mlx5_core c8ee:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 12 19:40:45.714727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:40:45.752981 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (438)
Feb 12 19:40:45.766894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:40:45.855131 kernel: mlx5_core c8ee:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 12 19:40:45.855323 kernel: mlx5_core c8ee:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing
Feb 12 19:40:45.867969 kernel: hv_netvsc 000d3a66-1520-000d-3a66-1520000d3a66 eth0: VF registering: eth1
Feb 12 19:40:45.868127 kernel: mlx5_core c8ee:00:02.0 eth1: joined to eth0
Feb 12 19:40:45.881974 kernel: mlx5_core c8ee:00:02.0 enP51438s1: renamed from eth1
Feb 12 19:40:45.919851 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:40:45.935427 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:40:45.938248 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:40:45.944118 systemd[1]: Starting disk-uuid.service...
Feb 12 19:40:45.957972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:40:45.966973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:40:46.976989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:40:46.978221 disk-uuid[564]: The operation has completed successfully.
Feb 12 19:40:47.046494 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:40:47.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.046598 systemd[1]: Finished disk-uuid.service.
Feb 12 19:40:47.054631 systemd[1]: Starting verity-setup.service...
Feb 12 19:40:47.093190 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 19:40:47.382401 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:40:47.387509 systemd[1]: Finished verity-setup.service.
Feb 12 19:40:47.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.392441 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:40:47.467010 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:40:47.467084 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:40:47.471006 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:40:47.475286 systemd[1]: Starting ignition-setup.service...
Feb 12 19:40:47.479821 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:40:47.503731 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:40:47.503771 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:40:47.503795 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:40:47.544781 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:40:47.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.549000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:40:47.550065 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:40:47.574680 systemd-networkd[802]: lo: Link UP
Feb 12 19:40:47.574689 systemd-networkd[802]: lo: Gained carrier
Feb 12 19:40:47.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.575539 systemd-networkd[802]: Enumeration completed
Feb 12 19:40:47.575607 systemd[1]: Started systemd-networkd.service.
Feb 12 19:40:47.579133 systemd[1]: Reached target network.target.
Feb 12 19:40:47.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.581642 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:40:47.585403 systemd[1]: Starting iscsiuio.service...
Feb 12 19:40:47.595104 systemd[1]: Started iscsiuio.service.
Feb 12 19:40:47.602208 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:40:47.611949 systemd[1]: Starting iscsid.service...
Feb 12 19:40:47.617800 iscsid[814]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:40:47.617800 iscsid[814]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:40:47.617800 iscsid[814]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:40:47.617800 iscsid[814]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:40:47.617800 iscsid[814]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:40:47.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.648504 iscsid[814]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:40:47.638390 systemd[1]: Started iscsid.service.
Feb 12 19:40:47.658650 kernel: mlx5_core c8ee:00:02.0 enP51438s1: Link up
Feb 12 19:40:47.640864 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:40:47.665142 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:40:47.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.667588 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:40:47.671748 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:40:47.673991 systemd[1]: Reached target remote-fs.target.
Feb 12 19:40:47.676715 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:40:47.687947 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:40:47.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.729985 kernel: hv_netvsc 000d3a66-1520-000d-3a66-1520000d3a66 eth0: Data path switched to VF: enP51438s1
Feb 12 19:40:47.730192 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:40:47.733807 systemd-networkd[802]: enP51438s1: Link UP
Feb 12 19:40:47.736183 systemd-networkd[802]: eth0: Link UP
Feb 12 19:40:47.736385 systemd-networkd[802]: eth0: Gained carrier
Feb 12 19:40:47.743119 systemd-networkd[802]: enP51438s1: Gained carrier
Feb 12 19:40:47.773027 systemd-networkd[802]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 12 19:40:47.791036 systemd[1]: Finished ignition-setup.service.
Feb 12 19:40:47.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:47.796501 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:40:48.927159 systemd-networkd[802]: eth0: Gained IPv6LL
Feb 12 19:40:51.346950 ignition[829]: Ignition 2.14.0
Feb 12 19:40:51.346988 ignition[829]: Stage: fetch-offline
Feb 12 19:40:51.347086 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:51.347138 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:51.451325 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:51.451527 ignition[829]: parsed url from cmdline: ""
Feb 12 19:40:51.452788 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:40:51.478749 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 12 19:40:51.478780 kernel: audit: type=1130 audit(1707766851.457:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.451532 ignition[829]: no config URL provided
Feb 12 19:40:51.459447 systemd[1]: Starting ignition-fetch.service...
Feb 12 19:40:51.451538 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:40:51.451546 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:40:51.451552 ignition[829]: failed to fetch config: resource requires networking
Feb 12 19:40:51.451775 ignition[829]: Ignition finished successfully
Feb 12 19:40:51.467679 ignition[835]: Ignition 2.14.0
Feb 12 19:40:51.467685 ignition[835]: Stage: fetch
Feb 12 19:40:51.467781 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:51.467805 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:51.472349 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:51.473546 ignition[835]: parsed url from cmdline: ""
Feb 12 19:40:51.473550 ignition[835]: no config URL provided
Feb 12 19:40:51.473558 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:40:51.473586 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:40:51.473617 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 12 19:40:51.500596 ignition[835]: GET result: OK
Feb 12 19:40:51.500730 ignition[835]: config has been read from IMDS userdata
Feb 12 19:40:51.500767 ignition[835]: parsing config with SHA512: bf575fae8c566965f61d4188f1b0dd5544b95b40ed13480eb6611d3af9d3754812fee686ca4c7605b23c1c04a835d8be7d0c9db22c4660aea4525e341d81f1dd
Feb 12 19:40:51.538862 unknown[835]: fetched base config from "system"
Feb 12 19:40:51.538874 unknown[835]: fetched base config from "system"
Feb 12 19:40:51.539537 ignition[835]: fetch: fetch complete
Feb 12 19:40:51.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.538881 unknown[835]: fetched user config from "azure"
Feb 12 19:40:51.562151 kernel: audit: type=1130 audit(1707766851.544:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.539544 ignition[835]: fetch: fetch passed
Feb 12 19:40:51.543294 systemd[1]: Finished ignition-fetch.service.
Feb 12 19:40:51.539585 ignition[835]: Ignition finished successfully
Feb 12 19:40:51.546412 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:40:51.570473 ignition[841]: Ignition 2.14.0
Feb 12 19:40:51.570478 ignition[841]: Stage: kargs
Feb 12 19:40:51.570604 ignition[841]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:51.570628 ignition[841]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:51.591359 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:51.595367 ignition[841]: kargs: kargs passed
Feb 12 19:40:51.595425 ignition[841]: Ignition finished successfully
Feb 12 19:40:51.599726 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:40:51.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.617860 kernel: audit: type=1130 audit(1707766851.602:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.615457 systemd[1]: Starting ignition-disks.service...
Feb 12 19:40:51.622854 ignition[847]: Ignition 2.14.0
Feb 12 19:40:51.622865 ignition[847]: Stage: disks
Feb 12 19:40:51.623007 ignition[847]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:51.623042 ignition[847]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:51.626109 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:51.629227 ignition[847]: disks: disks passed
Feb 12 19:40:51.629274 ignition[847]: Ignition finished successfully
Feb 12 19:40:51.636795 systemd[1]: Finished ignition-disks.service.
Feb 12 19:40:51.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.640990 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:40:51.657481 kernel: audit: type=1130 audit(1707766851.640:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.657514 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:40:51.661674 systemd[1]: Reached target local-fs.target.
Feb 12 19:40:51.665721 systemd[1]: Reached target sysinit.target.
Feb 12 19:40:51.669608 systemd[1]: Reached target basic.target.
Feb 12 19:40:51.674202 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:40:51.732642 systemd-fsck[855]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 12 19:40:51.743229 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:40:51.763101 kernel: audit: type=1130 audit(1707766851.744:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:51.747665 systemd[1]: Mounting sysroot.mount...
Feb 12 19:40:51.775972 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:40:51.777815 systemd[1]: Mounted sysroot.mount.
Feb 12 19:40:51.780016 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:40:51.811916 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:40:51.817908 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 12 19:40:51.823727 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:40:51.824703 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:40:51.833401 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:40:51.868428 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:40:51.874207 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:40:51.887985 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (866)
Feb 12 19:40:51.896393 initrd-setup-root[871]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:40:51.907484 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:40:51.907509 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:40:51.907520 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:40:51.904468 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:40:51.938678 initrd-setup-root[897]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:40:51.945302 initrd-setup-root[905]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:40:51.951920 initrd-setup-root[913]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:40:52.367370 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:40:52.387114 kernel: audit: type=1130 audit(1707766852.369:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.382806 systemd[1]: Starting ignition-mount.service...
Feb 12 19:40:52.392245 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:40:52.396655 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:40:52.396769 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:40:52.427789 ignition[932]: INFO : Ignition 2.14.0
Feb 12 19:40:52.427789 ignition[932]: INFO : Stage: mount
Feb 12 19:40:52.434599 ignition[932]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:52.434599 ignition[932]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:52.456663 kernel: audit: type=1130 audit(1707766852.435:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.430917 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:40:52.460975 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:52.460975 ignition[932]: INFO : mount: mount passed
Feb 12 19:40:52.460975 ignition[932]: INFO : Ignition finished successfully
Feb 12 19:40:52.478834 kernel: audit: type=1130 audit(1707766852.459:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:52.458682 systemd[1]: Finished ignition-mount.service.
Feb 12 19:40:53.328039 coreos-metadata[865]: Feb 12 19:40:53.327 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 12 19:40:53.343974 coreos-metadata[865]: Feb 12 19:40:53.343 INFO Fetch successful
Feb 12 19:40:53.378617 coreos-metadata[865]: Feb 12 19:40:53.378 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 12 19:40:53.390928 coreos-metadata[865]: Feb 12 19:40:53.390 INFO Fetch successful
Feb 12 19:40:53.408780 coreos-metadata[865]: Feb 12 19:40:53.408 INFO wrote hostname ci-3510.3.2-a-48475fc0ad to /sysroot/etc/hostname
Feb 12 19:40:53.414323 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 12 19:40:53.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:53.419952 systemd[1]: Starting ignition-files.service...
Feb 12 19:40:53.433419 kernel: audit: type=1130 audit(1707766853.418:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:40:53.439136 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:40:53.449979 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (944)
Feb 12 19:40:53.459188 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:40:53.459220 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:40:53.459238 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:40:53.466854 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:40:53.479107 ignition[963]: INFO : Ignition 2.14.0
Feb 12 19:40:53.479107 ignition[963]: INFO : Stage: files
Feb 12 19:40:53.483105 ignition[963]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:40:53.483105 ignition[963]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:40:53.496745 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:40:53.514277 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:40:53.540472 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:40:53.540472 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:40:53.585453 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:40:53.589449 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:40:53.612035 unknown[963]: wrote ssh authorized keys file for user: core
Feb 12 19:40:53.615199 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:40:53.615199 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:40:53.615199 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:40:54.277214 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 12 19:40:54.410772 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 19:40:54.419378 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 19:40:54.419378 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:40:54.419378 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:40:54.644988 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:40:54.804746 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:40:54.812141 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:40:54.812141 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 19:40:55.321456 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:40:55.470613 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 19:40:55.478953 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 19:40:55.478953 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:40:55.488236 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:40:56.300824 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:41:18.215854 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 19:41:18.224992 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:41:18.224992 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:41:18.224992 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:41:18.446211 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:41:41.234072 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
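The "file matches expected sum of: ..." records are Ignition's checksum verification of each downloaded artifact. A minimal equivalent of that check, using the kubectl digest from the log (any local file/digest pair works the same way):

    # Sketch: SHA-512 verification of a downloaded file, as in the op(6) entry.
    import hashlib

    def sha512_matches(path: str, expected_hex: str) -> bool:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex.lower()

    expected = ("97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7"
                "bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628")
    print(sha512_matches("/opt/bin/kubectl", expected))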
Feb 12 19:41:41.243844 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:41:41.243844 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:41:41.243844 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:41:41.361403 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:41:49.389053 ignition[963]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 19:41:49.397871 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:41:49.397871 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:41:50.457416 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:41:50.457416 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:41:50.457416 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:41:51.128748 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:41:51.603292 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:41:51.609154 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:41:51.693344 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (963)
Feb 12 19:41:51.693378 kernel: audit: type=1130 audit(1707766911.671:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem755719261"
Feb 12 19:41:51.693450 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem755719261": device or resource busy
Feb 12 19:41:51.693450 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem755719261", trying btrfs: device or resource busy
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem755719261"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem755719261"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem755719261"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem755719261"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1602086275"
Feb 12 19:41:51.693450 ignition[963]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1602086275": device or resource busy
Feb 12 19:41:51.693450 ignition[963]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1602086275", trying btrfs: device or resource busy
Feb 12 19:41:51.693450 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1602086275"
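The op(11)/op(12) pairs above show Ignition's mount fallback: it first tries the OEM device as ext4, logs the failure, then retries as btrfs. A rough sketch of the same try-in-order logic; Ignition itself does this in Go via the mount syscall, so the subprocess call to mount(8) here is an illustrative stand-in and needs root:

    # Sketch: try a list of filesystem types in order, as in op(11)/op(12).
    import subprocess
    import tempfile

    def mount_with_fallback(device: str, fstypes=("ext4", "btrfs")) -> str:
        mnt = tempfile.mkdtemp(prefix="oem")
        for fstype in fstypes:
            rc = subprocess.run(["mount", "-t", fstype, device, mnt]).returncode
            if rc == 0:
                return mnt  # caller unmounts later, as op(13) does
            # e.g. "device or resource busy" -> fall through to the next type
        raise RuntimeError(f"could not mount {device} as any of {fstypes}")

    # mount_with_fallback("/dev/disk/by-label/OEM")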
Feb 12 19:41:51.797347 kernel: audit: type=1130 audit(1707766911.709:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.797371 kernel: audit: type=1131 audit(1707766911.709:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.643472 systemd[1]: mnt-oem755719261.mount: Deactivated successfully.
Feb 12 19:41:51.800397 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1602086275"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem1602086275"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem1602086275"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1c): [started] processing unit "prepare-critools.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1e): [started] processing unit "prepare-helm.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:41:51.800397 ignition[963]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:41:51.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.663458 systemd[1]: mnt-oem1602086275.mount: Deactivated successfully.
Feb 12 19:41:51.907319 kernel: audit: type=1130 audit(1707766911.886:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(23): [started] setting preset to enabled for "waagent.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(23): [finished] setting preset to enabled for "waagent.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(24): [started] setting preset to enabled for "nvidia.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: op(24): [finished] setting preset to enabled for "nvidia.service"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:41:51.907457 ignition[963]: INFO : files: files passed
Feb 12 19:41:51.907457 ignition[963]: INFO : Ignition finished successfully
Feb 12 19:41:51.669170 systemd[1]: Finished ignition-files.service.
Feb 12 19:41:51.962309 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:41:51.690077 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:41:51.693321 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:41:51.698586 systemd[1]: Starting ignition-quench.service...
Feb 12 19:41:51.703162 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:41:51.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.703254 systemd[1]: Finished ignition-quench.service.
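The op(20) through op(24) records above correspond to writing systemd preset entries so the named units come up enabled on first boot. A preset file of the kind these operations produce is just a list of enable/disable directives; the path and file name below are illustrative, not taken from this log:

    # /sysroot/etc/systemd/system-preset/20-ignition.preset (illustrative)
    enable prepare-cni-plugins.service
    enable prepare-critools.service
    enable prepare-helm.service
    enable waagent.service
    enable nvidia.service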
Feb 12 19:41:52.010855 kernel: audit: type=1130 audit(1707766911.981:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.010879 kernel: audit: type=1131 audit(1707766911.984:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:51.881712 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:41:51.886718 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:41:51.966543 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:41:51.981802 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:41:51.981895 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:41:51.984453 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:41:52.015572 systemd[1]: Reached target initrd.target.
Feb 12 19:41:52.019631 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:41:52.033601 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:41:52.044138 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:41:52.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.048948 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:41:52.064681 kernel: audit: type=1130 audit(1707766912.048:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.070159 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:41:52.074419 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:41:52.079082 systemd[1]: Stopped target timers.target.
Feb 12 19:41:52.083076 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:41:52.086658 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:41:52.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.090874 systemd[1]: Stopped target initrd.target.
Feb 12 19:41:52.107167 kernel: audit: type=1131 audit(1707766912.090:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.107282 systemd[1]: Stopped target basic.target.
Feb 12 19:41:52.111032 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:41:52.115706 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:41:52.120155 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:41:52.124795 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:41:52.128784 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:41:52.132973 systemd[1]: Stopped target sysinit.target.
Feb 12 19:41:52.136928 systemd[1]: Stopped target local-fs.target.
Feb 12 19:41:52.140828 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:41:52.145102 systemd[1]: Stopped target swap.target.
Feb 12 19:41:52.148825 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:41:52.151473 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:41:52.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.155653 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:41:52.172648 kernel: audit: type=1131 audit(1707766912.155:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.172730 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:41:52.175168 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:41:52.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.179461 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:41:52.197182 kernel: audit: type=1131 audit(1707766912.179:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.179574 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:41:52.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.199914 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:41:52.202412 systemd[1]: Stopped ignition-files.service.
Feb 12 19:41:52.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.206502 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 12 19:41:52.209369 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 12 19:41:52.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.215093 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:41:52.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.227122 ignition[1001]: INFO : Ignition 2.14.0
Feb 12 19:41:52.227122 ignition[1001]: INFO : Stage: umount
Feb 12 19:41:52.227122 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:41:52.227122 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:41:52.220650 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:41:52.247923 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:41:52.247923 ignition[1001]: INFO : umount: umount passed
Feb 12 19:41:52.247923 ignition[1001]: INFO : Ignition finished successfully
Feb 12 19:41:52.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.222518 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:41:52.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.222679 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:41:52.229824 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:41:52.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.233735 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:41:52.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.233911 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:41:52.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.249046 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:41:52.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.249159 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:41:52.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.266953 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:41:52.267063 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:41:52.269954 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:41:52.270060 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:41:52.274651 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:41:52.274735 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:41:52.278452 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:41:52.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.278497 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:41:52.282468 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:41:52.282514 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:41:52.286652 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 19:41:52.286697 systemd[1]: Stopped ignition-fetch.service.
Feb 12 19:41:52.291264 systemd[1]: Stopped target network.target.
Feb 12 19:41:52.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.293465 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:41:52.293509 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:41:52.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.297843 systemd[1]: Stopped target paths.target.
Feb 12 19:41:52.299712 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:41:52.358000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:41:52.304014 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:41:52.308579 systemd[1]: Stopped target slices.target.
Feb 12 19:41:52.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.310638 systemd[1]: Stopped target sockets.target.
Feb 12 19:41:52.314588 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:41:52.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.314623 systemd[1]: Closed iscsid.socket.
Feb 12 19:41:52.318740 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:41:52.318780 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:41:52.322456 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:41:52.322505 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:41:52.327696 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:41:52.331559 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:41:52.340304 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:41:52.340401 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:41:52.340997 systemd-networkd[802]: eth0: DHCPv6 lease lost
Feb 12 19:41:52.349601 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:41:52.349691 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:41:52.355912 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:41:52.355943 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:41:52.361539 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:41:52.364995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:41:52.365052 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:41:52.367528 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:41:52.367565 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:41:52.369831 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:41:52.369867 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:41:52.389000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:41:52.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.374242 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:41:52.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.379090 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:41:52.379572 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:41:52.379690 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:41:52.383045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:41:52.383093 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:41:52.393922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:41:52.393979 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:41:52.394321 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:41:52.394356 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:41:52.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.394848 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:41:52.394880 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:41:52.397616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:41:52.397655 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:41:52.399273 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:41:52.440627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:41:52.440681 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:41:52.441816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:41:52.441902 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:41:52.487566 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:41:52.489927 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:41:52.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.493723 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:41:52.493776 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:41:52.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.560027 kernel: hv_netvsc 000d3a66-1520-000d-3a66-1520000d3a66 eth0: Data path switched from VF: enP51438s1
Feb 12 19:41:52.579393 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:41:52.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:52.579485 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:41:52.582129 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:41:52.587671 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:41:52.601482 systemd[1]: Switching root.
Feb 12 19:41:52.628477 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:41:52.628544 iscsid[814]: iscsid shutting down.
Feb 12 19:41:52.635453 systemd-journald[183]: Journal stopped
Feb 12 19:42:17.523534 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb 12 19:42:17.523572 kernel: SELinux:  Class anon_inode not defined in policy.
Feb 12 19:42:17.523588 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:42:17.523601 kernel: SELinux:  policy capability network_peer_controls=1
Feb 12 19:42:17.523614 kernel: SELinux:  policy capability open_perms=1
Feb 12 19:42:17.523627 kernel: SELinux:  policy capability extended_socket_class=1
Feb 12 19:42:17.523643 kernel: SELinux:  policy capability always_check_network=0
Feb 12 19:42:17.523659 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 12 19:42:17.523673 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 12 19:42:17.523686 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 12 19:42:17.523700 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 12 19:42:17.523714 kernel: kauditd_printk_skb: 32 callbacks suppressed
Feb 12 19:42:17.523728 kernel: audit: type=1403 audit(1707766918.167:80): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:42:17.523744 systemd[1]: Successfully loaded SELinux policy in 527.625ms.
Feb 12 19:42:17.523765 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.010ms.
Feb 12 19:42:17.523782 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:42:17.523798 systemd[1]: Detected virtualization microsoft.
Feb 12 19:42:17.523813 systemd[1]: Detected architecture x86-64.
Feb 12 19:42:17.523828 systemd[1]: Detected first boot.
Feb 12 19:42:17.523846 systemd[1]: Hostname set to <ci-3510.3.2-a-48475fc0ad>.
Feb 12 19:42:17.523861 systemd[1]: Initializing machine ID from random generator.
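"Initializing machine ID from random generator" means /etc/machine-id is populated with 128 random bits rendered as 32 lowercase hex digits; the runtime journal directory that appears a little further on (/run/log/journal/515fdd59f0a64d6f8732fe1988f9137a) is named after exactly such an ID. A format-only sketch (systemd's sd_id128_randomize() additionally stamps UUID-v4 version bits, omitted here):

    # Sketch: generate a machine-id-shaped value (format only).
    import secrets

    machine_id = secrets.token_hex(16)  # 16 random bytes -> 32 hex chars
    print(machine_id)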
Feb 12 19:42:17.523877 kernel: audit: type=1400 audit(1707766919.639:81): avc:  denied  { integrity } for  pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 19:42:17.523892 kernel: audit: type=1400 audit(1707766919.659:82): avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:42:17.523907 kernel: audit: type=1400 audit(1707766919.659:83): avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:42:17.523922 kernel: audit: type=1334 audit(1707766919.672:84): prog-id=10 op=LOAD
Feb 12 19:42:17.523938 kernel: audit: type=1334 audit(1707766919.672:85): prog-id=10 op=UNLOAD
Feb 12 19:42:17.523952 kernel: audit: type=1334 audit(1707766919.689:86): prog-id=11 op=LOAD
Feb 12 19:42:17.523977 kernel: audit: type=1334 audit(1707766919.689:87): prog-id=11 op=UNLOAD
Feb 12 19:42:17.523992 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:42:17.524007 kernel: audit: type=1400 audit(1707766923.030:88): avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:42:17.524023 kernel: audit: type=1300 audit(1707766923.030:88): arch=c000003e syscall=188 success=yes exit=0 a0=c0001058cc a1=c00002ae58 a2=c000029b00 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:42:17.524038 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:42:17.524057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:42:17.524073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:42:17.524090 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:42:17.524105 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 12 19:42:17.524119 kernel: audit: type=1334 audit(1707766936.942:90): prog-id=12 op=LOAD
Feb 12 19:42:17.524134 kernel: audit: type=1334 audit(1707766936.942:91): prog-id=3 op=UNLOAD
Feb 12 19:42:17.524148 kernel: audit: type=1334 audit(1707766936.947:92): prog-id=13 op=LOAD
Feb 12 19:42:17.524165 kernel: audit: type=1334 audit(1707766936.952:93): prog-id=14 op=LOAD
Feb 12 19:42:17.524182 kernel: audit: type=1334 audit(1707766936.952:94): prog-id=4 op=UNLOAD
Feb 12 19:42:17.524198 kernel: audit: type=1334 audit(1707766936.952:95): prog-id=5 op=UNLOAD
Feb 12 19:42:17.524213 kernel: audit: type=1334 audit(1707766936.957:96): prog-id=15 op=LOAD
Feb 12 19:42:17.524228 kernel: audit: type=1334 audit(1707766936.957:97): prog-id=12 op=UNLOAD
Feb 12 19:42:17.524243 kernel: audit: type=1334 audit(1707766936.978:98): prog-id=16 op=LOAD
Feb 12 19:42:17.524258 kernel: audit: type=1334 audit(1707766936.982:99): prog-id=17 op=LOAD
Feb 12 19:42:17.524273 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:42:17.524289 systemd[1]: Stopped iscsid.service.
Feb 12 19:42:17.524308 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:42:17.524323 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:42:17.524339 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:42:17.524355 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:42:17.524372 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:42:17.524390 systemd[1]: Created slice system-getty.slice.
Feb 12 19:42:17.524406 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:42:17.524422 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:42:17.524440 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:42:17.524456 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:42:17.524472 systemd[1]: Created slice user.slice.
Feb 12 19:42:17.524488 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:42:17.524504 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:42:17.524520 systemd[1]: Set up automount boot.automount.
Feb 12 19:42:17.524536 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:42:17.524552 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:42:17.524568 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:42:17.524586 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:42:17.524602 systemd[1]: Reached target integritysetup.target.
Feb 12 19:42:17.524618 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:42:17.524634 systemd[1]: Reached target remote-fs.target.
Feb 12 19:42:17.524650 systemd[1]: Reached target slices.target.
Feb 12 19:42:17.524666 systemd[1]: Reached target swap.target.
Feb 12 19:42:17.524682 systemd[1]: Reached target torcx.target.
Feb 12 19:42:17.524698 systemd[1]: Reached target veritysetup.target.
Feb 12 19:42:17.524716 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:42:17.524734 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:42:17.524751 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:42:17.524767 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:42:17.524785 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:42:17.524802 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:42:17.524818 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:42:17.524834 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:42:17.524850 systemd[1]: Mounting media.mount...
Feb 12 19:42:17.524866 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:42:17.524883 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:42:17.524899 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:42:17.524915 systemd[1]: Mounting tmp.mount...
Feb 12 19:42:17.524933 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:42:17.524946 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:42:17.524968 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:42:17.524985 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:42:17.525002 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:42:17.525018 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:42:17.525035 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:42:17.525051 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:42:17.525068 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:42:17.525088 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:42:17.525107 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:42:17.525145 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:42:17.525164 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:42:17.525179 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:42:17.525195 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:42:17.525211 systemd[1]: Starting systemd-journald.service...
Feb 12 19:42:17.525227 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:42:17.525243 kernel: loop: module loaded
Feb 12 19:42:17.525262 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:42:17.525275 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:42:17.525289 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:42:17.525305 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:42:17.525322 systemd[1]: Stopped verity-setup.service.
Feb 12 19:42:17.525337 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:42:17.525349 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:42:17.525362 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:42:17.525375 systemd[1]: Mounted media.mount.
Feb 12 19:42:17.525390 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:42:17.525401 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:42:17.525412 systemd[1]: Mounted tmp.mount.
Feb 12 19:42:17.525425 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:42:17.525438 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:42:17.525455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:42:17.525470 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:42:17.525480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:42:17.525490 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:42:17.525501 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:42:17.525513 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:42:17.525525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:42:17.525537 kernel: fuse: init (API version 7.34)
Feb 12 19:42:17.525554 systemd-journald[1124]: Journal started
Feb 12 19:42:17.525603 systemd-journald[1124]: Runtime Journal (/run/log/journal/515fdd59f0a64d6f8732fe1988f9137a) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:41:58.167000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:41:59.639000 audit[1]: AVC avc:  denied  { integrity } for  pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 19:41:59.659000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:41:59.659000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:41:59.672000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:41:59.672000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:41:59.689000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:41:59.689000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:42:03.030000 audit[1035]: AVC avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:42:03.030000 audit[1035]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058cc a1=c00002ae58 a2=c000029b00 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:42:03.030000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:42:17.531898 systemd[1]: Finished modprobe@efi_pstore.service.
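The audit PROCTITLE records in this backlog carry the generator's command line hex-encoded, with NUL bytes separating the argv entries; the kernel also truncates long command lines, which is why the decode below ends mid-path:

    # Decode the PROCTITLE payload from the torcx-generator records above.
    hexdata = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72"
        "732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F6765"
        "6E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561"
        "726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"
    )
    argv = [a.decode() for a in bytes.fromhex(hexdata).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   <- last argument truncated by the kernel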
Feb 12 19:42:03.039000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:42:03.039000 audit[1035]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a5 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:03.039000 audit: CWD cwd="/" Feb 12 19:42:03.039000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:03.039000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:03.039000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:42:16.942000 audit: BPF prog-id=12 op=LOAD Feb 12 19:42:16.942000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:42:16.947000 audit: BPF prog-id=13 op=LOAD Feb 12 19:42:16.952000 audit: BPF prog-id=14 op=LOAD Feb 12 19:42:16.952000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:42:16.952000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:42:16.957000 audit: BPF prog-id=15 op=LOAD Feb 12 19:42:16.957000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:42:16.978000 audit: BPF prog-id=16 op=LOAD Feb 12 19:42:16.982000 audit: BPF prog-id=17 op=LOAD Feb 12 19:42:16.982000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:42:16.982000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:42:16.987000 audit: BPF prog-id=18 op=LOAD Feb 12 19:42:16.987000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:42:16.996000 audit: BPF prog-id=19 op=LOAD Feb 12 19:42:16.996000 audit: BPF prog-id=20 op=LOAD Feb 12 19:42:16.996000 audit: BPF prog-id=16 op=UNLOAD Feb 12 19:42:16.996000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:42:16.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.008000 audit: BPF prog-id=18 op=UNLOAD Feb 12 19:42:17.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:17.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.345000 audit: BPF prog-id=21 op=LOAD Feb 12 19:42:17.345000 audit: BPF prog-id=22 op=LOAD Feb 12 19:42:17.345000 audit: BPF prog-id=23 op=LOAD Feb 12 19:42:17.345000 audit: BPF prog-id=19 op=UNLOAD Feb 12 19:42:17.345000 audit: BPF prog-id=20 op=UNLOAD Feb 12 19:42:17.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:17.519000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:42:17.519000 audit[1124]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffbdfc9830 a2=4000 a3=7fffbdfc98cc items=0 ppid=1 pid=1124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:17.519000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:42:16.942376 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:42:03.013364 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:42:16.998150 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:42:03.014209 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:42:03.014232 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:42:03.014271 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:42:03.014283 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:42:03.014329 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:42:03.014344 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:42:03.014549 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:42:03.014603 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:42:03.014619 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:42:03.015336 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:42:03.015375 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:42:03.015396 /usr/lib/systemd/system-generators/torcx-generator[1035]: 
time="2024-02-12T19:42:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:42:03.015412 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:42:03.015431 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:42:03.015447 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:42:15.381626 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:42:15.381854 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:42:15.381982 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:42:15.382157 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:42:15.382203 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:42:15.382254 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-12T19:42:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:42:17.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.539345 systemd[1]: Started systemd-journald.service. 
Feb 12 19:42:17.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.541701 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:42:17.541843 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:42:17.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.544350 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:42:17.544482 systemd[1]: Finished modprobe@loop.service. Feb 12 19:42:17.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.546757 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:42:17.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.549305 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:42:17.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.551923 systemd[1]: Reached target network-pre.target. Feb 12 19:42:17.555116 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:42:17.558732 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:42:17.563053 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:42:17.565035 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:42:17.568137 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:42:17.570423 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:42:17.571446 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:42:17.576520 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:42:17.577584 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:42:17.582157 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:42:17.584893 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:42:17.629422 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:42:17.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:17.633338 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:42:17.686290 systemd-journald[1124]: Time spent on flushing to /var/log/journal/515fdd59f0a64d6f8732fe1988f9137a is 21.411ms for 1203 entries. Feb 12 19:42:17.686290 systemd-journald[1124]: System Journal (/var/log/journal/515fdd59f0a64d6f8732fe1988f9137a) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:42:18.170054 systemd-journald[1124]: Received client request to flush runtime journal. Feb 12 19:42:17.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:17.711847 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:42:18.170825 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:42:17.715131 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:42:17.766058 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:42:17.770096 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:42:17.941643 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:42:18.171442 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:42:18.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:18.787055 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:42:18.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:19.679612 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:42:19.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:19.682000 audit: BPF prog-id=24 op=LOAD Feb 12 19:42:19.682000 audit: BPF prog-id=25 op=LOAD Feb 12 19:42:19.682000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:42:19.682000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:42:19.683673 systemd[1]: Starting systemd-udevd.service... Feb 12 19:42:19.701810 systemd-udevd[1161]: Using default interface naming scheme 'v252'. Feb 12 19:42:20.024897 systemd[1]: Started systemd-udevd.service. Feb 12 19:42:20.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:20.028000 audit: BPF prog-id=26 op=LOAD Feb 12 19:42:20.030430 systemd[1]: Starting systemd-networkd.service... 
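The "Received client request to flush runtime journal" entry is systemd-journal-flush.service asking journald to migrate the /run-backed journal into persistent /var/log/journal once the root filesystem is writable. The same request can be issued by hand; a sketch using the machine-id directory from the log above:

    # Ask journald to flush /run/log/journal into /var/log/journal
    journalctl --flush
    # Verify the persistent journal files now exist
    ls /var/log/journal/515fdd59f0a64d6f8732fe1988f9137a/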
Feb 12 19:42:20.069005 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:42:20.131000 audit: BPF prog-id=27 op=LOAD Feb 12 19:42:20.132000 audit: BPF prog-id=28 op=LOAD Feb 12 19:42:20.132000 audit: BPF prog-id=29 op=LOAD Feb 12 19:42:20.133881 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:42:20.186997 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:42:20.194840 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:42:20.194931 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:42:20.204577 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:42:20.204646 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:42:20.204681 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:42:20.178000 audit[1174]: AVC avc: denied { confidentiality } for pid=1174 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:42:20.204988 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:42:20.557539 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:42:20.178000 audit[1174]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564165df3680 a1=f884 a2=7fc2e2ed8bc5 a3=5 items=12 ppid=1161 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:20.178000 audit: CWD cwd="/" Feb 12 19:42:20.178000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=1 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=2 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=3 name=(null) inode=15453 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=4 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=5 name=(null) inode=15454 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=6 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=7 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=8 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=9 name=(null) inode=15456 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=10 name=(null) inode=15452 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PATH item=11 name=(null) inode=15457 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:42:20.178000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:42:20.588510 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:42:20.598097 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:42:20.598171 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:42:20.602380 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:42:20.602041 systemd[1]: Started systemd-userdbd.service. Feb 12 19:42:20.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:20.605466 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:42:20.914468 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 12 19:42:20.964466 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1171) Feb 12 19:42:20.994610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:42:21.100826 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:42:21.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:21.104642 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:42:21.244800 systemd-networkd[1167]: lo: Link UP Feb 12 19:42:21.244811 systemd-networkd[1167]: lo: Gained carrier Feb 12 19:42:21.245384 systemd-networkd[1167]: Enumeration completed Feb 12 19:42:21.245532 systemd[1]: Started systemd-networkd.service. Feb 12 19:42:21.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:21.249332 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:42:21.330804 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:42:21.386461 kernel: mlx5_core c8ee:00:02.0 enP51438s1: Link up Feb 12 19:42:21.424468 kernel: hv_netvsc 000d3a66-1520-000d-3a66-1520000d3a66 eth0: Data path switched to VF: enP51438s1 Feb 12 19:42:21.427305 lvm[1237]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:42:21.427632 systemd-networkd[1167]: enP51438s1: Link UP Feb 12 19:42:21.427912 systemd-networkd[1167]: eth0: Link UP Feb 12 19:42:21.428009 systemd-networkd[1167]: eth0: Gained carrier Feb 12 19:42:21.434268 systemd-networkd[1167]: enP51438s1: Gained carrier Feb 12 19:42:21.453532 systemd[1]: Finished lvm2-activation-early.service. 
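The hv_netvsc line records Azure accelerated networking switching the data path to the Mellanox virtual function (enP51438s1) while eth0 remains the interface networkd configures. networkctl can confirm how networkd sees both links; a sketch:

    # List links with their setup and operational state
    networkctl list
    # Detailed state for the synthetic NIC, including carrier and DHCP lease
    networkctl status eth0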
Feb 12 19:42:21.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:21.456767 systemd[1]: Reached target cryptsetup.target. Feb 12 19:42:21.460403 systemd[1]: Starting lvm2-activation.service... Feb 12 19:42:21.464348 lvm[1239]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:42:21.469599 systemd-networkd[1167]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:42:21.484232 systemd[1]: Finished lvm2-activation.service. Feb 12 19:42:21.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:21.486816 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:42:21.489125 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:42:21.489159 systemd[1]: Reached target local-fs.target. Feb 12 19:42:21.491453 systemd[1]: Reached target machines.target. Feb 12 19:42:21.494614 systemd[1]: Starting ldconfig.service... Feb 12 19:42:21.510021 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:42:21.510138 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:21.511606 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:42:21.514957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:42:21.519312 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:42:21.522125 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:42:21.522228 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:42:21.523524 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:42:21.568389 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1241 (bootctl) Feb 12 19:42:21.569426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:42:21.624587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:42:21.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:23.163687 systemd-networkd[1167]: eth0: Gained IPv6LL Feb 12 19:42:23.169395 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:42:23.186703 kernel: kauditd_printk_skb: 81 callbacks suppressed Feb 12 19:42:23.186735 kernel: audit: type=1130 audit(1707766943.170:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:23.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:42:23.215499 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:42:23.917224 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:42:24.524935 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:42:26.214495 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:42:26.215298 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:42:26.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.232463 kernel: audit: type=1130 audit(1707766946.218:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.376158 systemd-fsck[1249]: fsck.fat 4.2 (2021-01-31) Feb 12 19:42:26.376158 systemd-fsck[1249]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 19:42:26.378658 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:42:26.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.383522 systemd[1]: Mounting boot.mount... Feb 12 19:42:26.396652 kernel: audit: type=1130 audit(1707766946.381:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.417929 systemd[1]: Mounted boot.mount. Feb 12 19:42:26.431121 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:42:26.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.446476 kernel: audit: type=1130 audit(1707766946.433:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.915630 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:42:26.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.919398 systemd[1]: Starting audit-rules.service... Feb 12 19:42:26.931602 kernel: audit: type=1130 audit(1707766946.917:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.932833 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:42:26.936786 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:42:26.941386 systemd[1]: Starting systemd-resolved.service... 
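The "Duplicate line" warnings above mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first definition and ignores the later one. A sketch for finding which fragment shadows which (the sample line is illustrative, not this image's exact config):

    # Print the merged tmpfiles.d configuration with per-file headers
    systemd-tmpfiles --cat-config | grep -B2 '/run/lock'
    # A typical tmpfiles.d entry for that path would look like:
    #   d /run/lock 0755 root root -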
Feb 12 19:42:26.955718 kernel: audit: type=1334 audit(1707766946.939:169): prog-id=30 op=LOAD Feb 12 19:42:26.955785 kernel: audit: type=1334 audit(1707766946.945:170): prog-id=31 op=LOAD Feb 12 19:42:26.939000 audit: BPF prog-id=30 op=LOAD Feb 12 19:42:26.945000 audit: BPF prog-id=31 op=LOAD Feb 12 19:42:26.950661 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:42:26.957484 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:42:26.979000 audit[1262]: SYSTEM_BOOT pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.990788 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:42:26.995469 kernel: audit: type=1127 audit(1707766946.979:171): pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:42:26.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.015481 kernel: audit: type=1130 audit(1707766946.996:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.102336 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:42:27.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.105044 systemd[1]: Reached target time-set.target. Feb 12 19:42:27.120285 kernel: audit: type=1130 audit(1707766947.104:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.130226 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:42:27.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.133226 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:42:27.174872 systemd-resolved[1260]: Positive Trust Anchors: Feb 12 19:42:27.174889 systemd-resolved[1260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:42:27.174936 systemd-resolved[1260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:42:27.232224 systemd[1]: Finished systemd-journal-catalog-update.service. 
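systemd-resolved logs its positive DNSSEC trust anchor (the root KSK DS record) and its built-in negative trust anchors for private and reverse zones. Both can be extended with drop-in files; a sketch, assuming the standard dnssec-trust-anchors.d layout described in dnssec-trust-anchors.d(5):

    # Inspect resolved's current DNS servers and DNSSEC state
    resolvectl status
    # Add an extra negative trust anchor (one domain per line, .negative suffix)
    echo 'example.internal' > /etc/dnssec-trust-anchors.d/local.negative
    systemctl restart systemd-resolved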
Feb 12 19:42:27.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.352780 systemd-timesyncd[1261]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org). Feb 12 19:42:27.352959 systemd-timesyncd[1261]: Initial clock synchronization to Mon 2024-02-12 19:42:27.352357 UTC. Feb 12 19:42:27.443359 systemd-resolved[1260]: Using system hostname 'ci-3510.3.2-a-48475fc0ad'. Feb 12 19:42:27.445239 systemd[1]: Started systemd-resolved.service. Feb 12 19:42:27.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:42:27.448017 systemd[1]: Reached target network.target. Feb 12 19:42:27.450101 systemd[1]: Reached target network-online.target. Feb 12 19:42:27.452355 systemd[1]: Reached target nss-lookup.target. Feb 12 19:42:27.535000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:42:27.535000 audit[1277]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf53990e0 a2=420 a3=0 items=0 ppid=1256 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:42:27.535000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:42:27.536692 augenrules[1277]: No rules Feb 12 19:42:27.537179 systemd[1]: Finished audit-rules.service. Feb 12 19:42:33.031619 ldconfig[1240]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:42:33.039048 systemd[1]: Finished ldconfig.service. Feb 12 19:42:33.042727 systemd[1]: Starting systemd-update-done.service... Feb 12 19:42:33.083722 systemd[1]: Finished systemd-update-done.service. Feb 12 19:42:33.086211 systemd[1]: Reached target sysinit.target. Feb 12 19:42:33.088429 systemd[1]: Started motdgen.path. Feb 12 19:42:33.090573 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:42:33.094117 systemd[1]: Started logrotate.timer. Feb 12 19:42:33.096265 systemd[1]: Started mdadm.timer. Feb 12 19:42:33.098224 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:42:33.100572 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:42:33.100610 systemd[1]: Reached target paths.target. Feb 12 19:42:33.102805 systemd[1]: Reached target timers.target. Feb 12 19:42:33.105339 systemd[1]: Listening on dbus.socket. Feb 12 19:42:33.108308 systemd[1]: Starting docker.socket... Feb 12 19:42:33.126304 systemd[1]: Listening on sshd.socket. Feb 12 19:42:33.129027 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:33.129605 systemd[1]: Listening on docker.socket. Feb 12 19:42:33.132321 systemd[1]: Reached target sockets.target. Feb 12 19:42:33.134865 systemd[1]: Reached target basic.target. 
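The augenrules "No rules" entry above means /etc/audit/rules.d compiled down to an empty rule set before auditctl -R loaded it (the load itself is visible in the SYSCALL record). A sketch of adding a persistent watch rule through the same mechanism:

    # Drop a rule fragment, recompile, and load it
    echo '-w /etc/passwd -p wa -k passwd-changes' > /etc/audit/rules.d/passwd.rules
    augenrules --load
    # Confirm the kernel now carries the rule
    auditctl -l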
Feb 12 19:42:33.137119 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:42:33.137153 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:42:33.138107 systemd[1]: Starting containerd.service... Feb 12 19:42:33.141205 systemd[1]: Starting dbus.service... Feb 12 19:42:33.143890 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:42:33.147006 systemd[1]: Starting extend-filesystems.service... Feb 12 19:42:33.149285 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:42:33.151089 systemd[1]: Starting motdgen.service... Feb 12 19:42:33.154392 systemd[1]: Started nvidia.service. Feb 12 19:42:33.157830 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:42:33.161106 systemd[1]: Starting prepare-critools.service... Feb 12 19:42:33.164558 systemd[1]: Starting prepare-helm.service... Feb 12 19:42:33.167664 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:42:33.171160 systemd[1]: Starting sshd-keygen.service... Feb 12 19:42:33.176162 systemd[1]: Starting systemd-logind.service... Feb 12 19:42:33.180532 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:42:33.180614 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:42:33.181111 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:42:33.181951 systemd[1]: Starting update-engine.service... Feb 12 19:42:33.185595 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:42:33.196409 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:42:33.196674 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:42:33.279740 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:42:33.279963 systemd[1]: Finished motdgen.service. Feb 12 19:42:33.337111 extend-filesystems[1288]: Found sda Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda1 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda2 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda3 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found usr Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda4 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda6 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda7 Feb 12 19:42:33.339408 extend-filesystems[1288]: Found sda9 Feb 12 19:42:33.339408 extend-filesystems[1288]: Checking size of /dev/sda9 Feb 12 19:42:33.447067 env[1312]: time="2024-02-12T19:42:33.447014332Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:42:33.462317 env[1312]: time="2024-02-12T19:42:33.462280479Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:42:33.462529 env[1312]: time="2024-02-12T19:42:33.462514677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.463644 env[1312]: time="2024-02-12T19:42:33.463620266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:33.463733 env[1312]: time="2024-02-12T19:42:33.463721765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.463932 env[1312]: time="2024-02-12T19:42:33.463917063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:33.463989 env[1312]: time="2024-02-12T19:42:33.463980362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.464035 env[1312]: time="2024-02-12T19:42:33.464025562Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:42:33.464073 env[1312]: time="2024-02-12T19:42:33.464065261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.464183 env[1312]: time="2024-02-12T19:42:33.464172860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.464401 env[1312]: time="2024-02-12T19:42:33.464386758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:42:33.464599 env[1312]: time="2024-02-12T19:42:33.464582856Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:42:33.464680 env[1312]: time="2024-02-12T19:42:33.464669755Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:42:33.464758 env[1312]: time="2024-02-12T19:42:33.464747154Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:42:33.464809 env[1312]: time="2024-02-12T19:42:33.464800654Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:42:33.620602 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:42:33.621251 systemd-logind[1301]: New seat seat0. Feb 12 19:42:33.632211 jq[1287]: false Feb 12 19:42:33.633162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:42:33.633361 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:42:33.633808 jq[1305]: true Feb 12 19:42:33.641408 jq[1340]: true Feb 12 19:42:33.694266 tar[1308]: ./ Feb 12 19:42:33.694266 tar[1308]: ./macvlan Feb 12 19:42:33.710757 tar[1310]: linux-amd64/helm Feb 12 19:42:33.710997 tar[1309]: crictl Feb 12 19:42:33.830489 tar[1308]: ./static Feb 12 19:42:33.916770 extend-filesystems[1288]: Old size kept for /dev/sda9 Feb 12 19:42:33.916770 extend-filesystems[1288]: Found sr0 Feb 12 19:42:33.879188 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:42:33.879367 systemd[1]: Finished extend-filesystems.service. 
Feb 12 19:42:33.970451 tar[1308]: ./vlan Feb 12 19:42:34.006134 env[1312]: time="2024-02-12T19:42:34.006061631Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:42:34.006233 env[1312]: time="2024-02-12T19:42:34.006145130Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:42:34.006233 env[1312]: time="2024-02-12T19:42:34.006164330Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:42:34.006317 env[1312]: time="2024-02-12T19:42:34.006228829Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006317 env[1312]: time="2024-02-12T19:42:34.006304728Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006396 env[1312]: time="2024-02-12T19:42:34.006344028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006396 env[1312]: time="2024-02-12T19:42:34.006371728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006499 env[1312]: time="2024-02-12T19:42:34.006392827Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006499 env[1312]: time="2024-02-12T19:42:34.006423527Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006499 env[1312]: time="2024-02-12T19:42:34.006464127Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006499 env[1312]: time="2024-02-12T19:42:34.006485127Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.006639 env[1312]: time="2024-02-12T19:42:34.006503126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:42:34.006699 env[1312]: time="2024-02-12T19:42:34.006665425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:42:34.006827 env[1312]: time="2024-02-12T19:42:34.006808024Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:42:34.007241 env[1312]: time="2024-02-12T19:42:34.007220120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:42:34.007298 env[1312]: time="2024-02-12T19:42:34.007258519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007298 env[1312]: time="2024-02-12T19:42:34.007292319Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:42:34.007467 env[1312]: time="2024-02-12T19:42:34.007428518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007527 env[1312]: time="2024-02-12T19:42:34.007477417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007527 env[1312]: time="2024-02-12T19:42:34.007495717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 12 19:42:34.007597 env[1312]: time="2024-02-12T19:42:34.007517217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007597 env[1312]: time="2024-02-12T19:42:34.007549717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007597 env[1312]: time="2024-02-12T19:42:34.007567916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007597 env[1312]: time="2024-02-12T19:42:34.007585516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007742 env[1312]: time="2024-02-12T19:42:34.007616516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007742 env[1312]: time="2024-02-12T19:42:34.007637716Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:42:34.007839 env[1312]: time="2024-02-12T19:42:34.007816614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007885 env[1312]: time="2024-02-12T19:42:34.007862714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007929 env[1312]: time="2024-02-12T19:42:34.007882413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:42:34.007929 env[1312]: time="2024-02-12T19:42:34.007899413Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:42:34.008001 env[1312]: time="2024-02-12T19:42:34.007934613Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:42:34.008001 env[1312]: time="2024-02-12T19:42:34.007951913Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:42:34.008001 env[1312]: time="2024-02-12T19:42:34.007975613Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:42:34.008107 env[1312]: time="2024-02-12T19:42:34.008032612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:42:34.008414 env[1312]: time="2024-02-12T19:42:34.008338009Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.008429508Z" level=info msg="Connect containerd service" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.008484108Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.009731896Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.009906394Z" level=info msg="Start subscribing containerd event" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.009970194Z" level=info msg="Start recovering state" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010043293Z" level=info msg="Start event monitor" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010067293Z" level=info msg="Start snapshots syncer" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010077893Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010087393Z" level=info msg="Start streaming server" Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010476289Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.010586888Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:42:34.087153 env[1312]: time="2024-02-12T19:42:34.017325425Z" level=info msg="containerd successfully booted in 0.570993s" Feb 12 19:42:34.016108 systemd[1]: Started containerd.service. Feb 12 19:42:34.098153 bash[1355]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:42:34.094315 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:42:34.127472 tar[1308]: ./portmap Feb 12 19:42:34.138988 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:42:34.208738 dbus-daemon[1286]: [system] SELinux support is enabled Feb 12 19:42:34.208940 systemd[1]: Started dbus.service. Feb 12 19:42:34.213943 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:42:34.213976 systemd[1]: Reached target system-config.target. Feb 12 19:42:34.216759 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:42:34.216782 systemd[1]: Reached target user-config.target. Feb 12 19:42:34.219623 systemd[1]: Started systemd-logind.service. Feb 12 19:42:34.222153 dbus-daemon[1286]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:42:34.237643 tar[1308]: ./host-local Feb 12 19:42:34.306528 tar[1308]: ./vrf Feb 12 19:42:34.378105 tar[1308]: ./bridge Feb 12 19:42:34.465055 tar[1308]: ./tuning Feb 12 19:42:34.538932 tar[1308]: ./firewall Feb 12 19:42:34.628030 tar[1308]: ./host-device Feb 12 19:42:34.671270 update_engine[1304]: I0212 19:42:34.670934 1304 main.cc:92] Flatcar Update Engine starting Feb 12 19:42:34.710774 tar[1308]: ./sbr Feb 12 19:42:34.719970 systemd[1]: Started update-engine.service. Feb 12 19:42:34.727716 update_engine[1304]: I0212 19:42:34.720000 1304 update_check_scheduler.cc:74] Next update check in 5m21s Feb 12 19:42:34.725122 systemd[1]: Started locksmithd.service. Feb 12 19:42:34.770698 systemd[1]: Finished prepare-critools.service. Feb 12 19:42:34.786053 tar[1308]: ./loopback Feb 12 19:42:34.820965 tar[1308]: ./dhcp Feb 12 19:42:34.910341 tar[1310]: linux-amd64/LICENSE Feb 12 19:42:34.910341 tar[1310]: linux-amd64/README.md Feb 12 19:42:34.915895 systemd[1]: Finished prepare-helm.service. Feb 12 19:42:34.931024 tar[1308]: ./ptp Feb 12 19:42:34.974102 tar[1308]: ./ipvlan Feb 12 19:42:35.015207 tar[1308]: ./bandwidth Feb 12 19:42:35.038492 sshd_keygen[1306]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:42:35.067841 systemd[1]: Finished sshd-keygen.service. Feb 12 19:42:35.072264 systemd[1]: Starting issuegen.service... Feb 12 19:42:35.076068 systemd[1]: Started waagent.service. Feb 12 19:42:35.080780 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:42:35.080948 systemd[1]: Finished issuegen.service. Feb 12 19:42:35.084795 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:42:35.102358 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:42:35.110773 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:42:35.114607 systemd[1]: Started getty@tty1.service. Feb 12 19:42:35.118402 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:42:35.121081 systemd[1]: Reached target getty.target. Feb 12 19:42:35.123857 systemd[1]: Reached target multi-user.target. 
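[Editor's note] The tar[1308] entries above (./portmap, ./host-local, ./bridge, ...) are the CNI plugin binaries being unpacked by prepare-cni-plugins.service into /opt/cni/bin, the NetworkPluginBinDir named in the CRI config earlier. A minimal Go sketch of that unpack step, assuming a local cni-plugins.tgz archive (file name and paths are assumptions, not taken from this log):

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"log"
	"os"
	"path/filepath"
)

// Unpack a CNI plugins tarball into /opt/cni/bin, roughly what the
// prepare-cni-plugins unit is doing above.
func main() {
	f, err := os.Open("cni-plugins.tgz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	gz, err := gzip.NewReader(f)
	if err != nil {
		log.Fatal(err)
	}
	tr := tar.NewReader(gz)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // the archive holds flat entries like ./bridge, ./portmap
		}
		dst := filepath.Join("/opt/cni/bin", filepath.Base(hdr.Name))
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(out, tr); err != nil {
			log.Fatal(err)
		}
		out.Close()
		log.Printf("unpacked %s", dst)
	}
}
```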
Feb 12 19:42:35.127276 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:42:35.136854 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:42:35.137014 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:42:35.139987 systemd[1]: Startup finished in 853ms (firmware) + 26.632s (loader) + 926ms (kernel) + 1min 13.508s (initrd) + 37.568s (userspace) = 2min 19.489s. Feb 12 19:42:35.538663 login[1408]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:42:35.539555 login[1407]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:42:35.608423 systemd[1]: Created slice user-500.slice. Feb 12 19:42:35.609945 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:42:35.613497 systemd-logind[1301]: New session 1 of user core. Feb 12 19:42:35.619735 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:42:35.621273 systemd[1]: Starting user@500.service... Feb 12 19:42:35.638711 (systemd)[1415]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:42:35.876637 systemd[1415]: Queued start job for default target default.target. Feb 12 19:42:35.877237 systemd[1415]: Reached target paths.target. Feb 12 19:42:35.877268 systemd[1415]: Reached target sockets.target. Feb 12 19:42:35.877286 systemd[1415]: Reached target timers.target. Feb 12 19:42:35.877302 systemd[1415]: Reached target basic.target. Feb 12 19:42:35.877425 systemd[1]: Started user@500.service. Feb 12 19:42:35.878690 systemd[1]: Started session-1.scope. Feb 12 19:42:35.879236 systemd[1415]: Reached target default.target. Feb 12 19:42:35.879431 systemd[1415]: Startup finished in 233ms. Feb 12 19:42:36.284769 locksmithd[1389]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:42:36.540770 login[1408]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:42:36.545610 systemd-logind[1301]: New session 2 of user core. Feb 12 19:42:36.546199 systemd[1]: Started session-2.scope. Feb 12 19:42:41.440014 waagent[1401]: 2024-02-12T19:42:41.439877Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:42:41.468314 waagent[1401]: 2024-02-12T19:42:41.455546Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:42:41.468314 waagent[1401]: 2024-02-12T19:42:41.456806Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:42:41.468314 waagent[1401]: 2024-02-12T19:42:41.458267Z INFO Daemon Daemon Run daemon Feb 12 19:42:41.468314 waagent[1401]: 2024-02-12T19:42:41.459610Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:42:41.474313 waagent[1401]: 2024-02-12T19:42:41.474178Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
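[Editor's note] The "Startup finished" record is the sum of its stage timings: 853ms (firmware) + 26.632s (loader) + 926ms (kernel) + 1min 13.508s (initrd) + 37.568s (userspace), which comes to 2min 19.487s against the logged 2min 19.489s; the 2 ms gap is per-stage rounding. A quick check in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Stage timings copied from the "Startup finished" log record.
	stages := []string{"853ms", "26.632s", "926ms", "1m13.508s", "37.568s"}
	var total time.Duration
	for _, s := range stages {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		total += d
	}
	fmt.Println(total) // 2m19.487s, vs. the logged total of 2m19.489s
}
```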
Feb 12 19:42:41.482918 waagent[1401]: 2024-02-12T19:42:41.482790Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:42:41.488138 waagent[1401]: 2024-02-12T19:42:41.488059Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.488342Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.489856Z INFO Daemon Daemon Activate resource disk Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.490690Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.498490Z INFO Daemon Daemon Found device: None Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.499054Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.499946Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.501722Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.502699Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.512116Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.514771Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.515747Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:42:41.529648 waagent[1401]: 2024-02-12T19:42:41.516601Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:42:41.590613 waagent[1401]: 2024-02-12T19:42:41.589975Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:42:41.684675 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:42:41.702842 waagent[1401]: 2024-02-12T19:42:41.702667Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:42:41.706239 waagent[1401]: 2024-02-12T19:42:41.706164Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:42:41.710029 waagent[1401]: 2024-02-12T19:42:41.709967Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:42:41.713960 waagent[1401]: 2024-02-12T19:42:41.713902Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:42:41.717043 waagent[1401]: 2024-02-12T19:42:41.716979Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:42:41.720112 waagent[1401]: 2024-02-12T19:42:41.720054Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:42:41.865083 waagent[1401]: 2024-02-12T19:42:41.864993Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:42:41.869335 waagent[1401]: 2024-02-12T19:42:41.869287Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:42:41.872758 waagent[1401]: 2024-02-12T19:42:41.872697Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:42:42.734968 waagent[1401]: 2024-02-12T19:42:42.734818Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:42:42.747167 waagent[1401]: 2024-02-12T19:42:42.747089Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:42:42.750584 waagent[1401]: 2024-02-12T19:42:42.750514Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:42:42.830573 waagent[1401]: 2024-02-12T19:42:42.830471Z INFO Daemon Daemon Found private key matching thumbprint 7B15B499A7D51388C87A7842FC5BB0BA49F41717 Feb 12 19:42:42.842787 waagent[1401]: 2024-02-12T19:42:42.830935Z INFO Daemon Daemon Certificate with thumbprint DE60E2F2C9EB54835BA618CEF82E719396E29136 has no matching private key. Feb 12 19:42:42.842787 waagent[1401]: 2024-02-12T19:42:42.832229Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:42:42.877725 waagent[1401]: 2024-02-12T19:42:42.877650Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 79911d13-47f1-4f65-baa1-40f20878d16c New eTag: 6619808969148464605] Feb 12 19:42:42.886662 waagent[1401]: 2024-02-12T19:42:42.878627Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:42:42.892527 waagent[1401]: 2024-02-12T19:42:42.892431Z INFO Daemon Daemon Starting provisioning Feb 12 19:42:42.907035 waagent[1401]: 2024-02-12T19:42:42.892740Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:42:42.907035 waagent[1401]: 2024-02-12T19:42:42.893741Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-48475fc0ad] Feb 12 19:42:42.907035 waagent[1401]: 2024-02-12T19:42:42.897817Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-48475fc0ad] Feb 12 19:42:42.907035 waagent[1401]: 2024-02-12T19:42:42.899055Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:42:42.907035 waagent[1401]: 2024-02-12T19:42:42.900071Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:42:42.912869 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:42:42.913113 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:42:42.913186 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:42:42.913578 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:42:42.919490 systemd-networkd[1167]: eth0: DHCPv6 lease lost Feb 12 19:42:42.920764 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:42:42.920960 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:42:42.923427 systemd[1]: Starting systemd-networkd.service... 
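[Editor's note] The thumbprints waagent logs above (7B15B499... matching a private key, DE60E2F2... without one) are SHA-1 digests of each certificate's DER encoding, upper-case hex; the agent pairs goal-state certificates with private keys by comparing these digests. A sketch of the computation (the input file name is a placeholder):

```go
package main

import (
	"crypto/sha1"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no certificate PEM block found")
	}
	// Thumbprint = upper-case hex SHA-1 over the raw DER bytes.
	fmt.Printf("%X\n", sha1.Sum(block.Bytes))
}
```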
Feb 12 19:42:42.953627 systemd-networkd[1460]: enP51438s1: Link UP Feb 12 19:42:42.953637 systemd-networkd[1460]: enP51438s1: Gained carrier Feb 12 19:42:42.954935 systemd-networkd[1460]: eth0: Link UP Feb 12 19:42:42.954944 systemd-networkd[1460]: eth0: Gained carrier Feb 12 19:42:42.955370 systemd-networkd[1460]: lo: Link UP Feb 12 19:42:42.955380 systemd-networkd[1460]: lo: Gained carrier Feb 12 19:42:42.955698 systemd-networkd[1460]: eth0: Gained IPv6LL Feb 12 19:42:42.956374 systemd-networkd[1460]: Enumeration completed Feb 12 19:42:42.956495 systemd[1]: Started systemd-networkd.service. Feb 12 19:42:42.958836 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:42:42.964829 waagent[1401]: 2024-02-12T19:42:42.961425Z INFO Daemon Daemon Create user account if not exists Feb 12 19:42:42.965229 waagent[1401]: 2024-02-12T19:42:42.965127Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:42:42.966308 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:42:42.968321 waagent[1401]: 2024-02-12T19:42:42.968194Z INFO Daemon Daemon Configure sudoer Feb 12 19:42:42.974362 waagent[1401]: 2024-02-12T19:42:42.969333Z INFO Daemon Daemon Configure sshd Feb 12 19:42:42.974362 waagent[1401]: 2024-02-12T19:42:42.970304Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:42:42.996560 systemd-networkd[1460]: eth0: DHCPv4 address 10.200.8.37/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:42:42.999506 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:42:44.214910 waagent[1401]: 2024-02-12T19:42:44.214791Z INFO Daemon Daemon Provisioning complete Feb 12 19:42:44.232703 waagent[1401]: 2024-02-12T19:42:44.232626Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:42:44.236348 waagent[1401]: 2024-02-12T19:42:44.236276Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:42:44.242155 waagent[1401]: 2024-02-12T19:42:44.242087Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:42:44.503372 waagent[1469]: 2024-02-12T19:42:44.503197Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:42:44.504104 waagent[1469]: 2024-02-12T19:42:44.504035Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:44.504247 waagent[1469]: 2024-02-12T19:42:44.504194Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:44.515457 waagent[1469]: 2024-02-12T19:42:44.515363Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:42:44.515601 waagent[1469]: 2024-02-12T19:42:44.515546Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:42:44.574382 waagent[1469]: 2024-02-12T19:42:44.574264Z INFO ExtHandler ExtHandler Found private key matching thumbprint 7B15B499A7D51388C87A7842FC5BB0BA49F41717 Feb 12 19:42:44.574616 waagent[1469]: 2024-02-12T19:42:44.574552Z INFO ExtHandler ExtHandler Certificate with thumbprint DE60E2F2C9EB54835BA618CEF82E719396E29136 has no matching private key. 
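[Editor's note] "Fetching goal state [incarnation 1]" above is an HTTP GET against the WireServer at 168.63.129.16, carrying the protocol version negotiated earlier ("Wire protocol version:2012-11-30") in an x-ms-version header. A hedged sketch of that request; the URL path and header name are my reading of the wire protocol, not taken from this log:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET",
		"http://168.63.129.16/machine/?comp=goalstate", nil)
	if err != nil {
		panic(err)
	}
	// Version negotiated during protocol detection.
	req.Header.Set("x-ms-version", "2012-11-30")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("goal state XML: %d bytes\n", len(body))
}
```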
Feb 12 19:42:44.574843 waagent[1469]: 2024-02-12T19:42:44.574792Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:42:44.588079 waagent[1469]: 2024-02-12T19:42:44.588022Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 00620a8d-6f11-439c-86da-65f3be37cd12 New eTag: 6619808969148464605] Feb 12 19:42:44.588635 waagent[1469]: 2024-02-12T19:42:44.588578Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:42:44.629639 waagent[1469]: 2024-02-12T19:42:44.629537Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:42:44.638165 waagent[1469]: 2024-02-12T19:42:44.638099Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1469 Feb 12 19:42:44.641364 waagent[1469]: 2024-02-12T19:42:44.641299Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:42:44.642604 waagent[1469]: 2024-02-12T19:42:44.642548Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:42:44.714548 waagent[1469]: 2024-02-12T19:42:44.714460Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:42:44.715019 waagent[1469]: 2024-02-12T19:42:44.714944Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:42:44.722734 waagent[1469]: 2024-02-12T19:42:44.722678Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:42:44.723191 waagent[1469]: 2024-02-12T19:42:44.723132Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:42:44.724241 waagent[1469]: 2024-02-12T19:42:44.724175Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:42:44.725531 waagent[1469]: 2024-02-12T19:42:44.725472Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:42:44.725753 waagent[1469]: 2024-02-12T19:42:44.725698Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:44.726353 waagent[1469]: 2024-02-12T19:42:44.726296Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:42:44.726793 waagent[1469]: 2024-02-12T19:42:44.726734Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:42:44.726959 waagent[1469]: 2024-02-12T19:42:44.726891Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:44.727304 waagent[1469]: 2024-02-12T19:42:44.727251Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:44.728013 waagent[1469]: 2024-02-12T19:42:44.727961Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:42:44.728166 waagent[1469]: 2024-02-12T19:42:44.728117Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:44.728725 waagent[1469]: 2024-02-12T19:42:44.728672Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:42:44.729018 waagent[1469]: 2024-02-12T19:42:44.728969Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
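[Editor's note] The "[Errno 30] Read-only file system" error above is EROFS: on this image /lib/systemd/system sits under the dm-verity-protected /usr (mounted read-only per the kernel command line), so the agent's unit write fails. Locally added units belong under the writable /etc/systemd/system. A sketch of writing there instead; the unit body, in particular the ExecStart line, is an illustrative assumption:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	unit := `[Unit]
Description=waagent network setup (illustrative)
[Service]
Type=oneshot
ExecStart=/usr/bin/env python3 /var/lib/waagent/waagent-network-setup.py
[Install]
WantedBy=multi-user.target
`
	// /lib/systemd/system is read-only here; /etc/systemd/system is the
	// writable location for locally added units.
	dst := filepath.Join("/etc/systemd/system", "waagent-network-setup.service")
	if err := os.WriteFile(dst, []byte(unit), 0o644); err != nil {
		log.Fatalf("unit write failed: %v", err)
	}
	log.Printf("wrote %s; enable it with systemctl", dst)
}
```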
Feb 12 19:42:44.729216 waagent[1469]: 2024-02-12T19:42:44.729167Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:42:44.729470 waagent[1469]: 2024-02-12T19:42:44.729404Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:42:44.729659 waagent[1469]: 2024-02-12T19:42:44.729599Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:42:44.729659 waagent[1469]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:42:44.729659 waagent[1469]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:42:44.729659 waagent[1469]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:42:44.729659 waagent[1469]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:44.729659 waagent[1469]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:44.729659 waagent[1469]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:44.733397 waagent[1469]: 2024-02-12T19:42:44.733323Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:42:44.733803 waagent[1469]: 2024-02-12T19:42:44.733751Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:42:44.734239 waagent[1469]: 2024-02-12T19:42:44.734173Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:42:44.747174 waagent[1469]: 2024-02-12T19:42:44.747116Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:42:44.747813 waagent[1469]: 2024-02-12T19:42:44.747769Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:42:44.748707 waagent[1469]: 2024-02-12T19:42:44.748647Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:42:44.772288 waagent[1469]: 2024-02-12T19:42:44.772139Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1460' Feb 12 19:42:44.793625 waagent[1469]: 2024-02-12T19:42:44.793544Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
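[Editor's note] The /proc/net/route dump above encodes IPv4 addresses as little-endian hex: 0108C80A is 10.200.8.1 (the default gateway), 0008C80A is the 10.200.8.0/24 subnet, 10813FA8 is 168.63.129.16 (the WireServer route tested earlier), and FEA9FEA9 is 169.254.169.254. Decoding them in Go:

```go
package main

import (
	"fmt"
	"strconv"
)

// Convert a /proc/net/route hex field (little-endian) to dotted quad.
func hexToIP(h string) string {
	v, err := strconv.ParseUint(h, 16, 32)
	if err != nil {
		panic(err)
	}
	return fmt.Sprintf("%d.%d.%d.%d",
		byte(v), byte(v>>8), byte(v>>16), byte(v>>24))
}

func main() {
	for _, h := range []string{"0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"} {
		fmt.Printf("%s -> %s\n", h, hexToIP(h))
	}
	// 0108C80A -> 10.200.8.1      (default gateway)
	// 0008C80A -> 10.200.8.0      (local subnet)
	// 10813FA8 -> 168.63.129.16   (WireServer)
	// FEA9FEA9 -> 169.254.169.254 (instance metadata)
}
```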
Feb 12 19:42:44.862342 waagent[1469]: 2024-02-12T19:42:44.862227Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:42:44.862342 waagent[1469]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:42:44.862342 waagent[1469]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:42:44.862342 waagent[1469]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:15:20 brd ff:ff:ff:ff:ff:ff Feb 12 19:42:44.862342 waagent[1469]: 3: enP51438s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:15:20 brd ff:ff:ff:ff:ff:ff\ altname enP51438p0s2 Feb 12 19:42:44.862342 waagent[1469]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:42:44.862342 waagent[1469]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:42:44.862342 waagent[1469]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:42:44.862342 waagent[1469]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:42:44.862342 waagent[1469]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:42:44.862342 waagent[1469]: 2: eth0 inet6 fe80::20d:3aff:fe66:1520/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:42:45.065643 waagent[1469]: 2024-02-12T19:42:45.065390Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 12 19:42:45.068640 waagent[1469]: 2024-02-12T19:42:45.068536Z INFO EnvHandler ExtHandler Firewall rules: Feb 12 19:42:45.068640 waagent[1469]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:45.068640 waagent[1469]: pkts bytes target prot opt in out source destination Feb 12 19:42:45.068640 waagent[1469]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:45.068640 waagent[1469]: pkts bytes target prot opt in out source destination Feb 12 19:42:45.068640 waagent[1469]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:45.068640 waagent[1469]: pkts bytes target prot opt in out source destination Feb 12 19:42:45.068640 waagent[1469]: 4 208 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:42:45.068640 waagent[1469]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:42:45.070012 waagent[1469]: 2024-02-12T19:42:45.069954Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:42:45.109035 waagent[1469]: 2024-02-12T19:42:45.108957Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:42:45.245761 waagent[1401]: 2024-02-12T19:42:45.245599Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:42:45.251990 waagent[1401]: 2024-02-12T19:42:45.251917Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:42:46.225195 waagent[1508]: 2024-02-12T19:42:46.225085Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:42:46.225888 waagent[1508]: 2024-02-12T19:42:46.225820Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:42:46.226029 waagent[1508]: 2024-02-12T19:42:46.225976Z INFO ExtHandler ExtHandler 
Python: 3.9.16 Feb 12 19:42:46.235522 waagent[1508]: 2024-02-12T19:42:46.235407Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:42:46.235891 waagent[1508]: 2024-02-12T19:42:46.235833Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:46.236051 waagent[1508]: 2024-02-12T19:42:46.236002Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:46.247768 waagent[1508]: 2024-02-12T19:42:46.247695Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:42:46.256154 waagent[1508]: 2024-02-12T19:42:46.256094Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:42:46.257036 waagent[1508]: 2024-02-12T19:42:46.256975Z INFO ExtHandler Feb 12 19:42:46.257193 waagent[1508]: 2024-02-12T19:42:46.257143Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c372b10b-e280-4187-8865-3cc9460ef652 eTag: 6619808969148464605 source: Fabric] Feb 12 19:42:46.257867 waagent[1508]: 2024-02-12T19:42:46.257813Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:42:46.258863 waagent[1508]: 2024-02-12T19:42:46.258806Z INFO ExtHandler Feb 12 19:42:46.258989 waagent[1508]: 2024-02-12T19:42:46.258938Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:42:46.266433 waagent[1508]: 2024-02-12T19:42:46.266381Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:42:46.266849 waagent[1508]: 2024-02-12T19:42:46.266801Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:42:46.287714 waagent[1508]: 2024-02-12T19:42:46.287651Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 12 19:42:46.349765 waagent[1508]: 2024-02-12T19:42:46.349645Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DE60E2F2C9EB54835BA618CEF82E719396E29136', 'hasPrivateKey': False} Feb 12 19:42:46.350707 waagent[1508]: 2024-02-12T19:42:46.350642Z INFO ExtHandler Downloaded certificate {'thumbprint': '7B15B499A7D51388C87A7842FC5BB0BA49F41717', 'hasPrivateKey': True} Feb 12 19:42:46.351654 waagent[1508]: 2024-02-12T19:42:46.351582Z INFO ExtHandler Fetch goal state completed Feb 12 19:42:46.373254 waagent[1508]: 2024-02-12T19:42:46.373180Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1508 Feb 12 19:42:46.376427 waagent[1508]: 2024-02-12T19:42:46.376363Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:42:46.377867 waagent[1508]: 2024-02-12T19:42:46.377808Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:42:46.382626 waagent[1508]: 2024-02-12T19:42:46.382577Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:42:46.382942 waagent[1508]: 2024-02-12T19:42:46.382889Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:42:46.390613 waagent[1508]: 2024-02-12T19:42:46.390561Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:42:46.391047 waagent[1508]: 2024-02-12T19:42:46.390991Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:42:46.416040 waagent[1508]: 2024-02-12T19:42:46.415927Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 12 19:42:46.419181 waagent[1508]: 2024-02-12T19:42:46.419066Z INFO ExtHandler ExtHandler Successfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 12 19:42:46.424762 waagent[1508]: 2024-02-12T19:42:46.424691Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:42:46.426631 waagent[1508]: 2024-02-12T19:42:46.426571Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:42:46.426958 waagent[1508]: 2024-02-12T19:42:46.426899Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:46.427416 waagent[1508]: 2024-02-12T19:42:46.427362Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:46.427947 waagent[1508]: 2024-02-12T19:42:46.427889Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:42:46.428216 waagent[1508]: 2024-02-12T19:42:46.428159Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:42:46.428216 waagent[1508]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:42:46.428216 waagent[1508]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:42:46.428216 waagent[1508]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:42:46.428216 waagent[1508]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:46.428216 waagent[1508]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:46.428216 waagent[1508]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:42:46.430869 waagent[1508]: 2024-02-12T19:42:46.430776Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:42:46.432100 waagent[1508]: 2024-02-12T19:42:46.432041Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:42:46.432528 waagent[1508]: 2024-02-12T19:42:46.432469Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:42:46.434754 waagent[1508]: 2024-02-12T19:42:46.434627Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:42:46.436094 waagent[1508]: 2024-02-12T19:42:46.435993Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:42:46.436218 waagent[1508]: 2024-02-12T19:42:46.436159Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:42:46.437057 waagent[1508]: 2024-02-12T19:42:46.436999Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:42:46.437435 waagent[1508]: 2024-02-12T19:42:46.437381Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
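[Editor's note] The DNS rule added above shows up in the dump just below as "ACCEPT tcp ... 168.63.129.16 tcp dpt:53"; together with the owner-match ACCEPT and the INVALID,NEW DROP it restricts WireServer TCP access to root plus DNS. The equivalent iptables invocations, issued here via os/exec; the exact ordering and flags are my reconstruction from the dumped counters, not from waagent source:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	rules := [][]string{
		// Allow any user to reach the WireServer DNS endpoint over TCP.
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
			"--dport", "53", "-j", "ACCEPT"},
		// Allow root (UID 0) to reach the WireServer at all.
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
			"-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"},
		// Drop new connections from everyone else.
		{"-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
			"-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"},
	}
	for _, args := range rules {
		if out, err := exec.Command("iptables", args...).CombinedOutput(); err != nil {
			log.Fatalf("iptables %v: %v: %s", args, err, out)
		}
	}
}
```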
Feb 12 19:42:46.437616 waagent[1508]: 2024-02-12T19:42:46.437548Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:42:46.441206 waagent[1508]: 2024-02-12T19:42:46.441148Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:42:46.441206 waagent[1508]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:42:46.441206 waagent[1508]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:42:46.441206 waagent[1508]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:15:20 brd ff:ff:ff:ff:ff:ff Feb 12 19:42:46.441206 waagent[1508]: 3: enP51438s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:66:15:20 brd ff:ff:ff:ff:ff:ff\ altname enP51438p0s2 Feb 12 19:42:46.441206 waagent[1508]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:42:46.441206 waagent[1508]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:42:46.441206 waagent[1508]: 2: eth0 inet 10.200.8.37/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:42:46.441206 waagent[1508]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:42:46.441206 waagent[1508]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:42:46.441206 waagent[1508]: 2: eth0 inet6 fe80::20d:3aff:fe66:1520/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:42:46.441662 waagent[1508]: 2024-02-12T19:42:46.441437Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:42:46.442328 waagent[1508]: 2024-02-12T19:42:46.442274Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:42:46.457491 waagent[1508]: 2024-02-12T19:42:46.457409Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:42:46.459012 waagent[1508]: 2024-02-12T19:42:46.458952Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:42:46.529602 waagent[1508]: 2024-02-12T19:42:46.529486Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:42:46.529602 waagent[1508]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:46.529602 waagent[1508]: pkts bytes target prot opt in out source destination Feb 12 19:42:46.529602 waagent[1508]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:46.529602 waagent[1508]: pkts bytes target prot opt in out source destination Feb 12 19:42:46.529602 waagent[1508]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:42:46.529602 waagent[1508]: pkts bytes target prot opt in out source destination Feb 12 19:42:46.529602 waagent[1508]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:42:46.529602 waagent[1508]: 118 13849 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:42:46.529602 waagent[1508]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:42:46.533394 waagent[1508]: 2024-02-12T19:42:46.533338Z INFO ExtHandler ExtHandler Feb 12 19:42:46.533563 waagent[1508]: 2024-02-12T19:42:46.533510Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 464dade1-5e26-42ba-9ce1-c26d1c2d25c2 correlation d8595458-f8c5-4693-b9f5-1991ca33deea created: 2024-02-12T19:40:06.077667Z] Feb 12 
19:42:46.534350 waagent[1508]: 2024-02-12T19:42:46.534291Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 12 19:42:46.536045 waagent[1508]: 2024-02-12T19:42:46.535989Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 12 19:42:46.556591 waagent[1508]: 2024-02-12T19:42:46.556525Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:42:46.566684 waagent[1508]: 2024-02-12T19:42:46.566610Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EE665F46-F0D2-41EE-9944-1E98CE14D8B2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:43:08.661228 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 12 19:43:14.418700 systemd[1]: Created slice system-sshd.slice. Feb 12 19:43:14.420851 systemd[1]: Started sshd@0-10.200.8.37:22-10.200.12.6:54692.service. Feb 12 19:43:15.268634 sshd[1547]: Accepted publickey for core from 10.200.12.6 port 54692 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:15.270241 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:15.274362 systemd-logind[1301]: New session 3 of user core. Feb 12 19:43:15.275183 systemd[1]: Started session-3.scope. Feb 12 19:43:15.849563 systemd[1]: Started sshd@1-10.200.8.37:22-10.200.12.6:54696.service. Feb 12 19:43:16.482505 sshd[1552]: Accepted publickey for core from 10.200.12.6 port 54696 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:16.484015 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:16.489169 systemd[1]: Started session-4.scope. Feb 12 19:43:16.489623 systemd-logind[1301]: New session 4 of user core. Feb 12 19:43:16.927628 sshd[1552]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:16.930466 systemd[1]: sshd@1-10.200.8.37:22-10.200.12.6:54696.service: Deactivated successfully. Feb 12 19:43:16.931515 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:43:16.931599 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:43:16.932648 systemd-logind[1301]: Removed session 4. Feb 12 19:43:17.031953 systemd[1]: Started sshd@2-10.200.8.37:22-10.200.12.6:48466.service. Feb 12 19:43:17.650686 sshd[1558]: Accepted publickey for core from 10.200.12.6 port 48466 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:17.652146 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:17.656815 systemd[1]: Started session-5.scope. Feb 12 19:43:17.657251 systemd-logind[1301]: New session 5 of user core. Feb 12 19:43:18.085880 sshd[1558]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:18.089414 systemd[1]: sshd@2-10.200.8.37:22-10.200.12.6:48466.service: Deactivated successfully. Feb 12 19:43:18.090389 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:43:18.091124 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:43:18.091853 systemd-logind[1301]: Removed session 5. Feb 12 19:43:18.190997 systemd[1]: Started sshd@3-10.200.8.37:22-10.200.12.6:48482.service. 
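[Editor's note] The "SHA256:s7YymQosdnJ6..." string in the Accepted-publickey records is OpenSSH's key fingerprint: the unpadded base64 of a SHA-256 digest over the wire-format public key. With golang.org/x/crypto/ssh it can be reproduced from the authorized_keys entry installed earlier:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// The key written to /home/core/.ssh/authorized_keys above.
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		panic(err)
	}
	key, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		panic(err)
	}
	// Prints "SHA256:..." in the same form sshd logs it.
	fmt.Println(ssh.FingerprintSHA256(key))
}
```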
Feb 12 19:43:18.824965 sshd[1564]: Accepted publickey for core from 10.200.12.6 port 48482 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:18.826435 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:18.831065 systemd[1]: Started session-6.scope. Feb 12 19:43:18.831679 systemd-logind[1301]: New session 6 of user core. Feb 12 19:43:19.267962 sshd[1564]: pam_unix(sshd:session): session closed for user core Feb 12 19:43:19.271154 systemd[1]: sshd@3-10.200.8.37:22-10.200.12.6:48482.service: Deactivated successfully. Feb 12 19:43:19.272141 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:43:19.272924 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:43:19.273732 systemd-logind[1301]: Removed session 6. Feb 12 19:43:19.390893 systemd[1]: Started sshd@4-10.200.8.37:22-10.200.12.6:48492.service. Feb 12 19:43:20.010341 sshd[1570]: Accepted publickey for core from 10.200.12.6 port 48492 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:43:20.011993 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:43:20.017575 systemd[1]: Started session-7.scope. Feb 12 19:43:20.018124 systemd-logind[1301]: New session 7 of user core. Feb 12 19:43:20.137293 update_engine[1304]: I0212 19:43:20.137219 1304 update_attempter.cc:509] Updating boot flags... Feb 12 19:43:20.591903 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:43:20.592238 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:43:21.477909 systemd[1]: Starting docker.service... Feb 12 19:43:21.528095 env[1654]: time="2024-02-12T19:43:21.528033266Z" level=info msg="Starting up" Feb 12 19:43:21.529670 env[1654]: time="2024-02-12T19:43:21.529545465Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:43:21.529670 env[1654]: time="2024-02-12T19:43:21.529568165Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:43:21.529670 env[1654]: time="2024-02-12T19:43:21.529588465Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 12 19:43:21.529670 env[1654]: time="2024-02-12T19:43:21.529601865Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:43:21.531461 env[1654]: time="2024-02-12T19:43:21.531420764Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:43:21.531461 env[1654]: time="2024-02-12T19:43:21.531436164Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:43:21.531620 env[1654]: time="2024-02-12T19:43:21.531470964Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Feb 12 19:43:21.531620 env[1654]: time="2024-02-12T19:43:21.531481764Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:43:21.543013 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2893674373-merged.mount: Deactivated successfully. Feb 12 19:43:21.643229 env[1654]: time="2024-02-12T19:43:21.643191213Z" level=info msg="Loading containers: start." 
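[Editor's note] dockerd's "parsed scheme: unix ... ClientConn switching balancer to pick_first" lines above are its embedded gRPC client dialing its containerd instance over a Unix socket. The same kind of connection can be made directly; a minimal sketch using google.golang.org/grpc, with the socket path taken from the log:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// gRPC understands the unix:// scheme natively; no TLS on a local socket.
	conn, err := grpc.Dial(
		"unix:///var/run/docker/libcontainerd/docker-containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Printf("connection state: %s", conn.GetState())
}
```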
Feb 12 19:43:21.792465 kernel: Initializing XFRM netlink socket Feb 12 19:43:21.828019 env[1654]: time="2024-02-12T19:43:21.827972230Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 19:43:21.954736 systemd-networkd[1460]: docker0: Link UP Feb 12 19:43:21.973194 env[1654]: time="2024-02-12T19:43:21.973154664Z" level=info msg="Loading containers: done." Feb 12 19:43:21.990630 env[1654]: time="2024-02-12T19:43:21.990594656Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:43:21.990816 env[1654]: time="2024-02-12T19:43:21.990783056Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:43:21.990946 env[1654]: time="2024-02-12T19:43:21.990902956Z" level=info msg="Daemon has completed initialization" Feb 12 19:43:22.017827 systemd[1]: Started docker.service. Feb 12 19:43:22.024646 env[1654]: time="2024-02-12T19:43:22.024606241Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:43:22.042270 systemd[1]: Reloading. Feb 12 19:43:22.127947 /usr/lib/systemd/system-generators/torcx-generator[1782]: time="2024-02-12T19:43:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:22.129513 /usr/lib/systemd/system-generators/torcx-generator[1782]: time="2024-02-12T19:43:22Z" level=info msg="torcx already run" Feb 12 19:43:22.209820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:22.209839 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:22.225694 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:22.312989 systemd[1]: Started kubelet.service. Feb 12 19:43:22.381705 kubelet[1844]: E0212 19:43:22.381355 1844 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:22.383321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:22.383473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:26.212411 env[1312]: time="2024-02-12T19:43:26.212368037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 19:43:26.942006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213482236.mount: Deactivated successfully. 
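[Editor's note] The kubelet exits at run.go:74 above because --container-runtime-endpoint was left empty; kubelet 1.26 requires an explicit CRI socket (here that would be containerd's /run/containerd/containerd.sock). The failing check amounts to the following sketch, which mirrors the logged error rather than reproducing kubelet's actual code:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	endpoint := flag.String("container-runtime-endpoint", "",
		"CRI socket, e.g. unix:///run/containerd/containerd.sock")
	flag.Parse()

	// An empty endpoint is a fatal flag-validation error.
	if *endpoint == "" {
		fmt.Fprintln(os.Stderr, "failed to validate kubelet flags: "+
			"the container runtime endpoint address was not specified or empty, "+
			"use --container-runtime-endpoint to set")
		os.Exit(1)
	}
	fmt.Println("would start kubelet against", *endpoint)
}
```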
Feb 12 19:43:28.893248 env[1312]: time="2024-02-12T19:43:28.893192714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:28.898500 env[1312]: time="2024-02-12T19:43:28.898462113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:28.901390 env[1312]: time="2024-02-12T19:43:28.901357812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:28.906050 env[1312]: time="2024-02-12T19:43:28.905992911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:28.906654 env[1312]: time="2024-02-12T19:43:28.906624410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 19:43:28.916230 env[1312]: time="2024-02-12T19:43:28.916203008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 19:43:30.941392 env[1312]: time="2024-02-12T19:43:30.941330075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.946342 env[1312]: time="2024-02-12T19:43:30.946303674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.950098 env[1312]: time="2024-02-12T19:43:30.950066273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.953854 env[1312]: time="2024-02-12T19:43:30.953825072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:30.954420 env[1312]: time="2024-02-12T19:43:30.954389472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 19:43:30.965014 env[1312]: time="2024-02-12T19:43:30.964988869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 19:43:32.170702 env[1312]: time="2024-02-12T19:43:32.170648669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.177213 env[1312]: time="2024-02-12T19:43:32.177113607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.182129 env[1312]: 
time="2024-02-12T19:43:32.182044888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.186595 env[1312]: time="2024-02-12T19:43:32.186512353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:32.187361 env[1312]: time="2024-02-12T19:43:32.187321182Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 19:43:32.197189 env[1312]: time="2024-02-12T19:43:32.197159144Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:43:32.606044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:43:32.606360 systemd[1]: Stopped kubelet.service. Feb 12 19:43:32.608218 systemd[1]: Started kubelet.service. Feb 12 19:43:32.653719 kubelet[1879]: E0212 19:43:32.653661 1879 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:32.656952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:32.657112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:33.329114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027967233.mount: Deactivated successfully. Feb 12 19:43:33.789689 env[1312]: time="2024-02-12T19:43:33.789636681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.793691 env[1312]: time="2024-02-12T19:43:33.793653525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.797130 env[1312]: time="2024-02-12T19:43:33.797050446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.799776 env[1312]: time="2024-02-12T19:43:33.799747543Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:33.800127 env[1312]: time="2024-02-12T19:43:33.800098055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:43:33.809557 env[1312]: time="2024-02-12T19:43:33.809525992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:43:34.276964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839435583.mount: Deactivated successfully. 
Feb 12 19:43:34.297400 env[1312]: time="2024-02-12T19:43:34.297361237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.304273 env[1312]: time="2024-02-12T19:43:34.304242776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.308259 env[1312]: time="2024-02-12T19:43:34.308184013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.313068 env[1312]: time="2024-02-12T19:43:34.313034882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:34.313469 env[1312]: time="2024-02-12T19:43:34.313425795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:43:34.322877 env[1312]: time="2024-02-12T19:43:34.322849923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 19:43:35.113410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206854555.mount: Deactivated successfully. Feb 12 19:43:39.189318 env[1312]: time="2024-02-12T19:43:39.189261708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:39.196730 env[1312]: time="2024-02-12T19:43:39.196686333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:39.200868 env[1312]: time="2024-02-12T19:43:39.200832558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:39.209228 env[1312]: time="2024-02-12T19:43:39.209149410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:39.209907 env[1312]: time="2024-02-12T19:43:39.209876232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 19:43:39.219829 env[1312]: time="2024-02-12T19:43:39.219803032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 19:43:39.754472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923148343.mount: Deactivated successfully. 
Feb 12 19:43:40.368386 env[1312]: time="2024-02-12T19:43:40.368299973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.377034 env[1312]: time="2024-02-12T19:43:40.376990329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.381560 env[1312]: time="2024-02-12T19:43:40.381398959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.386283 env[1312]: time="2024-02-12T19:43:40.386246501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:40.386749 env[1312]: time="2024-02-12T19:43:40.386713515Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 19:43:42.855917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 19:43:42.856175 systemd[1]: Stopped kubelet.service. Feb 12 19:43:42.861560 systemd[1]: Started kubelet.service. Feb 12 19:43:42.941723 kubelet[1954]: E0212 19:43:42.941660 1954 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:43:42.944007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:43:42.944169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:43:43.045949 systemd[1]: Stopped kubelet.service. Feb 12 19:43:43.060733 systemd[1]: Reloading. Feb 12 19:43:43.139274 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-02-12T19:43:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:43:43.139313 /usr/lib/systemd/system-generators/torcx-generator[1983]: time="2024-02-12T19:43:43Z" level=info msg="torcx already run" Feb 12 19:43:43.224946 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:43:43.224965 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:43:43.240666 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:43:43.337018 systemd[1]: Started kubelet.service. Feb 12 19:43:43.386892 kubelet[2046]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 19:43:43.386892 kubelet[2046]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:43.387324 kubelet[2046]: I0212 19:43:43.386948 2046 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:43:43.388250 kubelet[2046]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:43:43.388250 kubelet[2046]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:43:43.738358 kubelet[2046]: I0212 19:43:43.738317 2046 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:43:43.738358 kubelet[2046]: I0212 19:43:43.738343 2046 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:43:43.738650 kubelet[2046]: I0212 19:43:43.738629 2046 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:43:43.741664 kubelet[2046]: E0212 19:43:43.741639 2046 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.741855 kubelet[2046]: I0212 19:43:43.741840 2046 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:43:43.744487 kubelet[2046]: I0212 19:43:43.744465 2046 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:43:43.744706 kubelet[2046]: I0212 19:43:43.744688 2046 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:43:43.744799 kubelet[2046]: I0212 19:43:43.744778 2046 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:43:43.744938 kubelet[2046]: I0212 19:43:43.744809 2046 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:43:43.744938 kubelet[2046]: I0212 19:43:43.744824 2046 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:43:43.744938 kubelet[2046]: I0212 19:43:43.744935 2046 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:43.747625 kubelet[2046]: I0212 19:43:43.747605 2046 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:43:43.747625 kubelet[2046]: I0212 19:43:43.747626 2046 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:43:43.747779 kubelet[2046]: I0212 19:43:43.747671 2046 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:43:43.747779 kubelet[2046]: I0212 19:43:43.747689 2046 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:43:43.748454 kubelet[2046]: W0212 19:43:43.748268 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.748454 kubelet[2046]: E0212 19:43:43.748318 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.748454 kubelet[2046]: W0212 19:43:43.748385 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.748626 
kubelet[2046]: E0212 19:43:43.748435 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.748626 kubelet[2046]: I0212 19:43:43.748540 2046 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:43:43.748814 kubelet[2046]: W0212 19:43:43.748794 2046 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:43:43.749232 kubelet[2046]: I0212 19:43:43.749210 2046 server.go:1186] "Started kubelet" Feb 12 19:43:43.753637 kubelet[2046]: E0212 19:43:43.753624 2046 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:43:43.753729 kubelet[2046]: E0212 19:43:43.753722 2046 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:43:43.753994 kubelet[2046]: E0212 19:43:43.753926 2046 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-48475fc0ad.17b3350f59b243cb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-48475fc0ad", UID:"ci-3510.3.2-a-48475fc0ad", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-48475fc0ad"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.37:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:43:43.755069 kubelet[2046]: I0212 19:43:43.755056 2046 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:43:43.755663 kubelet[2046]: I0212 19:43:43.755649 2046 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:43:43.756111 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
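Every reflector, lease, and event call above fails the same way: "dial tcp 10.200.8.37:6443: connect: connection refused". On a control-plane node this is the expected bootstrap order — nothing listens on the apiserver port until the kubelet itself launches the kube-apiserver static pod from /etc/kubernetes/manifests. A trivial Go probe, with the address copied from the log, reproduces exactly this dial:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address and port taken from the log lines above; 6443 is the apiserver's secure port.
	conn, err := net.DialTimeout("tcp", "10.200.8.37:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // at this point in the log: "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}

The "Watching apiserver" line near the end of the log marks the point where these dials finally start succeeding.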
Feb 12 19:43:43.756282 kubelet[2046]: I0212 19:43:43.756248 2046 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:43:43.758642 kubelet[2046]: I0212 19:43:43.758627 2046 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:43:43.758831 kubelet[2046]: I0212 19:43:43.758814 2046 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:43:43.759095 kubelet[2046]: W0212 19:43:43.759062 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.759095 kubelet[2046]: E0212 19:43:43.759096 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.759667 kubelet[2046]: E0212 19:43:43.759586 2046 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.818053 kubelet[2046]: I0212 19:43:43.818015 2046 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:43:43.818053 kubelet[2046]: I0212 19:43:43.818036 2046 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:43:43.818053 kubelet[2046]: I0212 19:43:43.818055 2046 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:43:43.824196 kubelet[2046]: I0212 19:43:43.824170 2046 policy_none.go:49] "None policy: Start" Feb 12 19:43:43.826841 kubelet[2046]: I0212 19:43:43.826818 2046 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:43:43.826841 kubelet[2046]: I0212 19:43:43.826844 2046 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:43:43.835383 systemd[1]: Created slice kubepods.slice. Feb 12 19:43:43.839286 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:43:43.842148 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 19:43:43.848282 kubelet[2046]: I0212 19:43:43.848244 2046 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:43:43.849729 kubelet[2046]: I0212 19:43:43.849676 2046 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:43:43.850526 kubelet[2046]: E0212 19:43:43.850508 2046 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-48475fc0ad\" not found" Feb 12 19:43:43.859920 kubelet[2046]: I0212 19:43:43.859901 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:43.860256 kubelet[2046]: E0212 19:43:43.860231 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:43.867405 kubelet[2046]: I0212 19:43:43.867391 2046 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:43:43.903252 kubelet[2046]: I0212 19:43:43.903235 2046 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:43:43.903330 kubelet[2046]: I0212 19:43:43.903278 2046 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:43:43.903330 kubelet[2046]: I0212 19:43:43.903298 2046 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:43:43.903422 kubelet[2046]: E0212 19:43:43.903344 2046 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:43:43.903932 kubelet[2046]: W0212 19:43:43.903890 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.904005 kubelet[2046]: E0212 19:43:43.903942 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:43.960519 kubelet[2046]: E0212 19:43:43.960467 2046 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:44.004104 kubelet[2046]: I0212 19:43:44.003869 2046 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:44.006741 kubelet[2046]: I0212 19:43:44.006715 2046 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:44.009203 kubelet[2046]: I0212 19:43:44.009015 2046 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:43:44.009203 kubelet[2046]: I0212 19:43:44.009165 2046 status_manager.go:698] "Failed to get status for pod" podUID=becad30702caa05e9ab401e4836c5705 pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:44.014015 systemd[1]: Created slice kubepods-burstable-podbecad30702caa05e9ab401e4836c5705.slice. Feb 12 19:43:44.018881 kubelet[2046]: I0212 19:43:44.018865 2046 status_manager.go:698] "Failed to get status for pod" podUID=8c28aa4c00fa0ced62e4bf6991838f95 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:44.019383 kubelet[2046]: I0212 19:43:44.019346 2046 status_manager.go:698] "Failed to get status for pod" podUID=f20864326a2406242f4fa16c46ddcfae pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:44.027302 systemd[1]: Created slice kubepods-burstable-pod8c28aa4c00fa0ced62e4bf6991838f95.slice. Feb 12 19:43:44.036199 systemd[1]: Created slice kubepods-burstable-podf20864326a2406242f4fa16c46ddcfae.slice. 
Feb 12 19:43:44.061162 kubelet[2046]: I0212 19:43:44.061134 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.061352 kubelet[2046]: I0212 19:43:44.061335 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.061516 kubelet[2046]: I0212 19:43:44.061500 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.061682 kubelet[2046]: I0212 19:43:44.061667 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062156 kubelet[2046]: I0212 19:43:44.062138 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062405 kubelet[2046]: I0212 19:43:44.062380 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062528 kubelet[2046]: I0212 19:43:44.062464 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062528 kubelet[2046]: I0212 19:43:44.062509 2046 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062640 kubelet[2046]: I0212 19:43:44.062548 2046 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f20864326a2406242f4fa16c46ddcfae-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-48475fc0ad\" (UID: \"f20864326a2406242f4fa16c46ddcfae\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.062880 kubelet[2046]: I0212 19:43:44.062852 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.063243 kubelet[2046]: E0212 19:43:44.063219 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.325671 env[1312]: time="2024-02-12T19:43:44.325537111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-48475fc0ad,Uid:becad30702caa05e9ab401e4836c5705,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:44.330508 env[1312]: time="2024-02-12T19:43:44.330224535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-48475fc0ad,Uid:8c28aa4c00fa0ced62e4bf6991838f95,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:44.339629 env[1312]: time="2024-02-12T19:43:44.339592282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-48475fc0ad,Uid:f20864326a2406242f4fa16c46ddcfae,Namespace:kube-system,Attempt:0,}" Feb 12 19:43:44.362177 kubelet[2046]: E0212 19:43:44.362073 2046 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:44.464958 kubelet[2046]: I0212 19:43:44.464929 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.465393 kubelet[2046]: E0212 19:43:44.465268 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.689981 env[1312]: time="2024-02-12T19:43:44.689888033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-48475fc0ad,Uid:becad30702caa05e9ab401e4836c5705,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Feb 12 19:43:44.690984 kubelet[2046]: E0212 19:43:44.690583 2046 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" Feb 12 19:43:44.690984 kubelet[2046]: E0212 19:43:44.690723 2046 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.691223 kubelet[2046]: E0212 19:43:44.690768 2046 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.6\": failed to pull image \"registry.k8s.io/pause:3.6\": failed to pull and unpack image \"registry.k8s.io/pause:3.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host" pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:44.691223 kubelet[2046]: E0212 19:43:44.690915 2046 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-ci-3510.3.2-a-48475fc0ad_kube-system(becad30702caa05e9ab401e4836c5705)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-ci-3510.3.2-a-48475fc0ad_kube-system(becad30702caa05e9ab401e4836c5705)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.6\\\": failed to pull image \\\"registry.k8s.io/pause:3.6\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.6\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee\\\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host\"" pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" podUID=becad30702caa05e9ab401e4836c5705 Feb 12 19:43:44.862245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012633534.mount: Deactivated successfully. 
Feb 12 19:43:44.889301 env[1312]: time="2024-02-12T19:43:44.889236598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.892483 env[1312]: time="2024-02-12T19:43:44.892430082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.903120 env[1312]: time="2024-02-12T19:43:44.903083764Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.908983 env[1312]: time="2024-02-12T19:43:44.908944018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.910940 env[1312]: time="2024-02-12T19:43:44.910911070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.917960 env[1312]: time="2024-02-12T19:43:44.917924156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.920132 env[1312]: time="2024-02-12T19:43:44.920098713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.925308 env[1312]: time="2024-02-12T19:43:44.925274150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:43:44.941900 kubelet[2046]: W0212 19:43:44.941637 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:44.941900 kubelet[2046]: E0212 19:43:44.941773 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:44.981705 env[1312]: time="2024-02-12T19:43:44.981029222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:44.981705 env[1312]: time="2024-02-12T19:43:44.981067023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:44.981705 env[1312]: time="2024-02-12T19:43:44.981097424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:44.981705 env[1312]: time="2024-02-12T19:43:44.981263428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d9fda010c2684ea592122064052ed39b63c47b4478c68862c7246dfb3f42320 pid=2122 runtime=io.containerd.runc.v2 Feb 12 19:43:44.988500 env[1312]: time="2024-02-12T19:43:44.988423217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:43:44.988707 env[1312]: time="2024-02-12T19:43:44.988669624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:43:44.988830 env[1312]: time="2024-02-12T19:43:44.988805128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:43:44.989653 env[1312]: time="2024-02-12T19:43:44.989599649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d644beae8460fb1dfbd0e44943c33d9d4cff7a3c2f9726dba3e37d456a47f2b8 pid=2139 runtime=io.containerd.runc.v2 Feb 12 19:43:45.002093 systemd[1]: Started cri-containerd-2d9fda010c2684ea592122064052ed39b63c47b4478c68862c7246dfb3f42320.scope. Feb 12 19:43:45.011021 systemd[1]: Started cri-containerd-d644beae8460fb1dfbd0e44943c33d9d4cff7a3c2f9726dba3e37d456a47f2b8.scope. Feb 12 19:43:45.031096 kubelet[2046]: W0212 19:43:45.031025 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.031238 kubelet[2046]: E0212 19:43:45.031104 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.074090 env[1312]: time="2024-02-12T19:43:45.072863597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-48475fc0ad,Uid:8c28aa4c00fa0ced62e4bf6991838f95,Namespace:kube-system,Attempt:0,} returns sandbox id \"d644beae8460fb1dfbd0e44943c33d9d4cff7a3c2f9726dba3e37d456a47f2b8\"" Feb 12 19:43:45.074090 env[1312]: time="2024-02-12T19:43:45.072997100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-48475fc0ad,Uid:f20864326a2406242f4fa16c46ddcfae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d9fda010c2684ea592122064052ed39b63c47b4478c68862c7246dfb3f42320\"" Feb 12 19:43:45.077263 env[1312]: time="2024-02-12T19:43:45.077228909Z" level=info msg="CreateContainer within sandbox \"2d9fda010c2684ea592122064052ed39b63c47b4478c68862c7246dfb3f42320\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:43:45.077491 env[1312]: time="2024-02-12T19:43:45.077293111Z" level=info msg="CreateContainer within sandbox \"d644beae8460fb1dfbd0e44943c33d9d4cff7a3c2f9726dba3e37d456a47f2b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:43:45.115246 env[1312]: time="2024-02-12T19:43:45.115196685Z" level=info msg="CreateContainer within sandbox \"d644beae8460fb1dfbd0e44943c33d9d4cff7a3c2f9726dba3e37d456a47f2b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"7061bd818a20bdc71d4852e2827647a30673fac47cdab2fdf81d892e311355ed\"" Feb 12 19:43:45.115882 env[1312]: time="2024-02-12T19:43:45.115849802Z" level=info msg="StartContainer for \"7061bd818a20bdc71d4852e2827647a30673fac47cdab2fdf81d892e311355ed\"" Feb 12 19:43:45.125451 kubelet[2046]: W0212 19:43:45.125396 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.125574 kubelet[2046]: E0212 19:43:45.125470 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.125632 env[1312]: time="2024-02-12T19:43:45.125573152Z" level=info msg="CreateContainer within sandbox \"2d9fda010c2684ea592122064052ed39b63c47b4478c68862c7246dfb3f42320\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5511b385e4b5932782a1718e936f6306f8cf69bc95cf13b75f8f20a8d6ea3c9b\"" Feb 12 19:43:45.126047 env[1312]: time="2024-02-12T19:43:45.126015564Z" level=info msg="StartContainer for \"5511b385e4b5932782a1718e936f6306f8cf69bc95cf13b75f8f20a8d6ea3c9b\"" Feb 12 19:43:45.137194 systemd[1]: Started cri-containerd-7061bd818a20bdc71d4852e2827647a30673fac47cdab2fdf81d892e311355ed.scope. Feb 12 19:43:45.159170 systemd[1]: Started cri-containerd-5511b385e4b5932782a1718e936f6306f8cf69bc95cf13b75f8f20a8d6ea3c9b.scope. Feb 12 19:43:45.162827 kubelet[2046]: E0212 19:43:45.162778 2046 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.211464 kubelet[2046]: W0212 19:43:45.211289 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.211464 kubelet[2046]: E0212 19:43:45.211345 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:45.212983 env[1312]: time="2024-02-12T19:43:45.212937999Z" level=info msg="StartContainer for \"7061bd818a20bdc71d4852e2827647a30673fac47cdab2fdf81d892e311355ed\" returns successfully" Feb 12 19:43:45.229843 env[1312]: time="2024-02-12T19:43:45.229801332Z" level=info msg="StartContainer for \"5511b385e4b5932782a1718e936f6306f8cf69bc95cf13b75f8f20a8d6ea3c9b\" returns successfully" Feb 12 19:43:45.267431 kubelet[2046]: I0212 19:43:45.267400 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:45.267813 kubelet[2046]: E0212 19:43:45.267783 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:45.911511 kubelet[2046]: 
I0212 19:43:45.911485 2046 status_manager.go:698] "Failed to get status for pod" podUID=8c28aa4c00fa0ced62e4bf6991838f95 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:45.913585 kubelet[2046]: I0212 19:43:45.913559 2046 status_manager.go:698] "Failed to get status for pod" podUID=f20864326a2406242f4fa16c46ddcfae pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:45.940584 kubelet[2046]: E0212 19:43:45.940540 2046 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:46.763316 kubelet[2046]: E0212 19:43:46.763268 2046 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:46.869598 kubelet[2046]: I0212 19:43:46.869560 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:46.870504 kubelet[2046]: E0212 19:43:46.870462 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:46.892832 kubelet[2046]: W0212 19:43:46.892782 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:46.892832 kubelet[2046]: E0212 19:43:46.892837 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:47.696675 kubelet[2046]: W0212 19:43:47.696609 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:47.696675 kubelet[2046]: E0212 19:43:47.696676 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:47.724530 kubelet[2046]: W0212 19:43:47.724428 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:47.724530 kubelet[2046]: E0212 19:43:47.724534 
2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:48.117286 kubelet[2046]: W0212 19:43:48.117232 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:48.117434 kubelet[2046]: E0212 19:43:48.117296 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:49.964787 kubelet[2046]: E0212 19:43:49.964740 2046 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:50.073229 kubelet[2046]: I0212 19:43:50.072862 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:50.073229 kubelet[2046]: E0212 19:43:50.073191 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:50.320898 kubelet[2046]: E0212 19:43:50.320790 2046 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:50.397657 kubelet[2046]: E0212 19:43:50.397554 2046 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-48475fc0ad.17b3350f59b243cb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-48475fc0ad", UID:"ci-3510.3.2-a-48475fc0ad", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-48475fc0ad"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 
10.200.8.37:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:43:52.932373 kubelet[2046]: W0212 19:43:52.932333 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:52.932373 kubelet[2046]: E0212 19:43:52.932372 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.014252 kubelet[2046]: W0212 19:43:53.014182 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.014252 kubelet[2046]: E0212 19:43:53.014255 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.095070 kubelet[2046]: W0212 19:43:53.095027 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.095070 kubelet[2046]: E0212 19:43:53.095074 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-48475fc0ad&limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.106357 kubelet[2046]: W0212 19:43:53.106332 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.106463 kubelet[2046]: E0212 19:43:53.106364 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:53.851064 kubelet[2046]: E0212 19:43:53.851026 2046 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-48475fc0ad\" not found" Feb 12 19:43:56.366536 kubelet[2046]: E0212 19:43:56.366412 2046 controller.go:146] failed to ensure lease exists, will retry in 7s, error: Get "https://10.200.8.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-48475fc0ad?timeout=10s": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:56.475601 kubelet[2046]: I0212 19:43:56.475562 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:56.475920 kubelet[2046]: E0212 19:43:56.475895 2046 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://10.200.8.37:6443/api/v1/nodes\": dial tcp 10.200.8.37:6443: connect: connection refused" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:43:56.681939 kubelet[2046]: I0212 19:43:56.681836 2046 status_manager.go:698] "Failed to get status for pod" podUID=8c28aa4c00fa0ced62e4bf6991838f95 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:56.683530 kubelet[2046]: I0212 19:43:56.683503 2046 status_manager.go:698] "Failed to get status for pod" podUID=8c28aa4c00fa0ced62e4bf6991838f95 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:58.342557 kubelet[2046]: E0212 19:43:58.342519 2046 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:43:58.501781 kubelet[2046]: I0212 19:43:58.501583 2046 status_manager.go:698] "Failed to get status for pod" podUID=f20864326a2406242f4fa16c46ddcfae pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:58.503215 kubelet[2046]: I0212 19:43:58.503192 2046 status_manager.go:698] "Failed to get status for pod" podUID=f20864326a2406242f4fa16c46ddcfae pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" err="Get \"https://10.200.8.37:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-48475fc0ad\": dial tcp 10.200.8.37:6443: connect: connection refused" Feb 12 19:43:59.905957 env[1312]: time="2024-02-12T19:43:59.905908326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-48475fc0ad,Uid:becad30702caa05e9ab401e4836c5705,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:00.119756 env[1312]: time="2024-02-12T19:44:00.119688913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:00.119756 env[1312]: time="2024-02-12T19:44:00.119723913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:00.120049 env[1312]: time="2024-02-12T19:44:00.119737013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:00.120153 env[1312]: time="2024-02-12T19:44:00.120090020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/581ba4d3f3038d03598e7fffb709a1fc6a2add6b12cbfdf8ed65a298a100d370 pid=2277 runtime=io.containerd.runc.v2 Feb 12 19:44:00.143103 systemd[1]: Started cri-containerd-581ba4d3f3038d03598e7fffb709a1fc6a2add6b12cbfdf8ed65a298a100d370.scope. 
Feb 12 19:44:00.179726 env[1312]: time="2024-02-12T19:44:00.179388958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-48475fc0ad,Uid:becad30702caa05e9ab401e4836c5705,Namespace:kube-system,Attempt:0,} returns sandbox id \"581ba4d3f3038d03598e7fffb709a1fc6a2add6b12cbfdf8ed65a298a100d370\"" Feb 12 19:44:00.182054 env[1312]: time="2024-02-12T19:44:00.182022305Z" level=info msg="CreateContainer within sandbox \"581ba4d3f3038d03598e7fffb709a1fc6a2add6b12cbfdf8ed65a298a100d370\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:44:00.303837 kubelet[2046]: W0212 19:44:00.303800 2046 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:44:00.303837 kubelet[2046]: E0212 19:44:00.303838 2046 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.37:6443: connect: connection refused Feb 12 19:44:00.399138 kubelet[2046]: E0212 19:44:00.399024 2046 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-48475fc0ad.17b3350f59b243cb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-48475fc0ad", UID:"ci-3510.3.2-a-48475fc0ad", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-48475fc0ad"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 43, 43, 749186507, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.37:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:44:00.411781 env[1312]: time="2024-02-12T19:44:00.411732729Z" level=info msg="CreateContainer within sandbox \"581ba4d3f3038d03598e7fffb709a1fc6a2add6b12cbfdf8ed65a298a100d370\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1ad51d1e74bd5d105a15e8fa8bf4a6dfba5705e15ddb238d9e47d31b91e45c81\"" Feb 12 19:44:00.412299 env[1312]: time="2024-02-12T19:44:00.412269338Z" level=info msg="StartContainer for \"1ad51d1e74bd5d105a15e8fa8bf4a6dfba5705e15ddb238d9e47d31b91e45c81\"" Feb 12 19:44:00.428310 systemd[1]: Started cri-containerd-1ad51d1e74bd5d105a15e8fa8bf4a6dfba5705e15ddb238d9e47d31b91e45c81.scope. 
Feb 12 19:44:00.474208 env[1312]: time="2024-02-12T19:44:00.474100521Z" level=info msg="StartContainer for \"1ad51d1e74bd5d105a15e8fa8bf4a6dfba5705e15ddb238d9e47d31b91e45c81\" returns successfully" Feb 12 19:44:02.346323 kubelet[2046]: E0212 19:44:02.346282 2046 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-48475fc0ad" not found Feb 12 19:44:02.761640 kubelet[2046]: I0212 19:44:02.761534 2046 apiserver.go:52] "Watching apiserver" Feb 12 19:44:03.059174 kubelet[2046]: I0212 19:44:03.059058 2046 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:44:03.078211 kubelet[2046]: I0212 19:44:03.078175 2046 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:44:03.370943 kubelet[2046]: E0212 19:44:03.370906 2046 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-48475fc0ad\" not found" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:03.478648 kubelet[2046]: I0212 19:44:03.478605 2046 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:03.575631 kubelet[2046]: E0212 19:44:03.575587 2046 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510.3.2-a-48475fc0ad" not found Feb 12 19:44:03.778280 kubelet[2046]: I0212 19:44:03.778093 2046 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:04.436319 systemd[1]: Reloading. Feb 12 19:44:04.521607 /usr/lib/systemd/system-generators/torcx-generator[2374]: time="2024-02-12T19:44:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:44:04.522068 /usr/lib/systemd/system-generators/torcx-generator[2374]: time="2024-02-12T19:44:04Z" level=info msg="torcx already run" Feb 12 19:44:04.612343 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:44:04.612360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:44:04.628651 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:44:04.736311 kubelet[2046]: I0212 19:44:04.736216 2046 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:44:04.736250 systemd[1]: Stopping kubelet.service... Feb 12 19:44:04.752872 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:44:04.753096 systemd[1]: Stopped kubelet.service. Feb 12 19:44:04.757413 systemd[1]: Started kubelet.service. Feb 12 19:44:04.836452 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:44:04.836766 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:44:04.836917 kubelet[2436]: I0212 19:44:04.836890 2436 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:44:04.838143 kubelet[2436]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:44:04.838228 kubelet[2436]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:44:04.841479 kubelet[2436]: I0212 19:44:04.841437 2436 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:44:04.841479 kubelet[2436]: I0212 19:44:04.841474 2436 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:44:04.841679 kubelet[2436]: I0212 19:44:04.841660 2436 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:44:04.842777 kubelet[2436]: I0212 19:44:04.842751 2436 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:44:04.843636 kubelet[2436]: I0212 19:44:04.843618 2436 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:44:04.846827 kubelet[2436]: I0212 19:44:04.846799 2436 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:44:04.847021 kubelet[2436]: I0212 19:44:04.847006 2436 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:44:04.847088 kubelet[2436]: I0212 19:44:04.847082 2436 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:44:04.847221 kubelet[2436]: I0212 19:44:04.847106 2436 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:44:04.847221 kubelet[2436]: I0212 19:44:04.847121 2436 container_manager_linux.go:308] 
"Creating device plugin manager" Feb 12 19:44:04.847221 kubelet[2436]: I0212 19:44:04.847161 2436 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:44:04.851893 kubelet[2436]: I0212 19:44:04.851869 2436 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:44:04.851997 kubelet[2436]: I0212 19:44:04.851898 2436 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:44:04.852370 kubelet[2436]: I0212 19:44:04.852348 2436 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:44:04.852426 kubelet[2436]: I0212 19:44:04.852389 2436 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:44:04.868845 kubelet[2436]: I0212 19:44:04.865290 2436 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:44:04.868845 kubelet[2436]: I0212 19:44:04.865869 2436 server.go:1186] "Started kubelet" Feb 12 19:44:04.868845 kubelet[2436]: I0212 19:44:04.867747 2436 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:44:04.874505 kubelet[2436]: I0212 19:44:04.871479 2436 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:44:04.874505 kubelet[2436]: I0212 19:44:04.874329 2436 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:44:04.880919 sudo[2451]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:44:04.881222 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:44:04.888046 kubelet[2436]: I0212 19:44:04.886490 2436 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:44:04.888046 kubelet[2436]: I0212 19:44:04.887989 2436 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:44:04.892174 kubelet[2436]: E0212 19:44:04.892157 2436 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:44:04.892289 kubelet[2436]: E0212 19:44:04.892280 2436 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:44:04.962702 kubelet[2436]: I0212 19:44:04.962408 2436 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:44:04.991853 kubelet[2436]: I0212 19:44:04.991771 2436 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:44:04.992021 kubelet[2436]: I0212 19:44:04.992011 2436 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:44:04.992087 kubelet[2436]: I0212 19:44:04.992080 2436 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:44:04.993673 kubelet[2436]: I0212 19:44:04.993655 2436 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:44:04.993805 kubelet[2436]: I0212 19:44:04.993796 2436 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:44:04.993872 kubelet[2436]: I0212 19:44:04.993865 2436 policy_none.go:49] "None policy: Start" Feb 12 19:44:04.994583 kubelet[2436]: I0212 19:44:04.994569 2436 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:44:04.994696 kubelet[2436]: I0212 19:44:04.994686 2436 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:44:04.994902 kubelet[2436]: I0212 19:44:04.994890 2436 state_mem.go:75] "Updated machine memory state" Feb 12 19:44:04.999903 kubelet[2436]: I0212 19:44:04.999888 2436 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:44:05.000203 kubelet[2436]: I0212 19:44:05.000190 2436 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:44:05.013467 kubelet[2436]: I0212 19:44:05.013433 2436 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.033259 kubelet[2436]: I0212 19:44:05.033163 2436 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.033961 kubelet[2436]: I0212 19:44:05.033943 2436 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.050748 kubelet[2436]: I0212 19:44:05.050727 2436 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:44:05.050910 kubelet[2436]: I0212 19:44:05.050899 2436 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:44:05.050990 kubelet[2436]: I0212 19:44:05.050983 2436 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:44:05.051104 kubelet[2436]: E0212 19:44:05.051091 2436 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:44:05.151636 kubelet[2436]: I0212 19:44:05.151590 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:05.151829 kubelet[2436]: I0212 19:44:05.151713 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:05.151829 kubelet[2436]: I0212 19:44:05.151750 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:05.191830 kubelet[2436]: I0212 19:44:05.191794 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.191993 kubelet[2436]: I0212 19:44:05.191854 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.191993 kubelet[2436]: I0212 19:44:05.191883 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.191993 kubelet[2436]: I0212 19:44:05.191918 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/becad30702caa05e9ab401e4836c5705-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-48475fc0ad\" (UID: \"becad30702caa05e9ab401e4836c5705\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.191993 kubelet[2436]: I0212 19:44:05.191946 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.191993 kubelet[2436]: I0212 19:44:05.191971 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.192197 kubelet[2436]: I0212 19:44:05.192007 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.192197 kubelet[2436]: I0212 19:44:05.192037 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c28aa4c00fa0ced62e4bf6991838f95-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" (UID: \"8c28aa4c00fa0ced62e4bf6991838f95\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.192197 kubelet[2436]: I0212 19:44:05.192080 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f20864326a2406242f4fa16c46ddcfae-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-48475fc0ad\" (UID: \"f20864326a2406242f4fa16c46ddcfae\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:05.465521 sudo[2451]: pam_unix(sudo:session): session closed for user root Feb 12 19:44:05.864145 kubelet[2436]: I0212 19:44:05.864113 2436 apiserver.go:52] "Watching apiserver" Feb 12 19:44:05.888212 kubelet[2436]: I0212 19:44:05.888191 2436 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:44:05.896660 kubelet[2436]: I0212 19:44:05.896643 2436 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:44:06.264963 kubelet[2436]: I0212 19:44:06.264832 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-48475fc0ad" podStartSLOduration=1.26477741 pod.CreationTimestamp="2024-02-12 19:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:06.264571107 +0000 UTC m=+1.500068915" watchObservedRunningTime="2024-02-12 19:44:06.26477741 +0000 UTC m=+1.500275118" Feb 12 19:44:06.465192 kubelet[2436]: E0212 19:44:06.465146 2436 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-48475fc0ad\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:06.661790 kubelet[2436]: E0212 19:44:06.661751 2436 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-48475fc0ad\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" Feb 12 19:44:06.695431 sudo[1639]: pam_unix(sudo:session): session closed for user root Feb 12 19:44:06.794163 sshd[1570]: pam_unix(sshd:session): session closed for user core Feb 12 19:44:06.797535 systemd[1]: sshd@4-10.200.8.37:22-10.200.12.6:48492.service: Deactivated successfully. Feb 12 19:44:06.798414 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:44:06.798655 systemd[1]: session-7.scope: Consumed 3.839s CPU time. Feb 12 19:44:06.799212 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:44:06.800051 systemd-logind[1301]: Removed session 7. 
Feb 12 19:44:07.858165 kubelet[2436]: I0212 19:44:07.858129 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-48475fc0ad" podStartSLOduration=2.858076211 pod.CreationTimestamp="2024-02-12 19:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:07.459484496 +0000 UTC m=+2.694982304" watchObservedRunningTime="2024-02-12 19:44:07.858076211 +0000 UTC m=+3.093573919" Feb 12 19:44:08.262419 kubelet[2436]: I0212 19:44:08.262308 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-48475fc0ad" podStartSLOduration=3.262267722 pod.CreationTimestamp="2024-02-12 19:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:07.858493018 +0000 UTC m=+3.093990726" watchObservedRunningTime="2024-02-12 19:44:08.262267722 +0000 UTC m=+3.497765430" Feb 12 19:44:18.527148 kubelet[2436]: I0212 19:44:18.527112 2436 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:44:18.527944 env[1312]: time="2024-02-12T19:44:18.527913436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:44:18.528517 kubelet[2436]: I0212 19:44:18.528493 2436 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:44:27.112470 kubelet[2436]: I0212 19:44:27.112423 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:27.118050 systemd[1]: Created slice kubepods-besteffort-pod26103147_0f61_4d10_a5cd_c8596482a964.slice. 
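The runtime-config entries above push PodCIDR 192.168.0.0/24 to the container runtime and update the kubelet's pod CIDR. A quick Go sketch of parsing and inspecting that CIDR with the standard library (illustrative, not the kubelet's code path):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The PodCIDR from the kubelet_network entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network=%s addresses=%d\n", ipnet, 1<<(bits-ones)) // /24 -> 256
	fmt.Println(ipnet.Contains(net.ParseIP("192.168.0.42")))       // true
}
```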
Feb 12 19:44:27.119211 kubelet[2436]: W0212 19:44:27.119116 2436 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.119211 kubelet[2436]: E0212 19:44:27.119149 2436 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.121130 kubelet[2436]: W0212 19:44:27.121079 2436 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.121130 kubelet[2436]: E0212 19:44:27.121114 2436 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.121770 kubelet[2436]: I0212 19:44:27.121745 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:27.129834 systemd[1]: Created slice kubepods-burstable-podef7c71bc_bf38_4deb_b1d0_6347f99929b0.slice. 
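The `forbidden ... no relationship found between node ... and this object` warnings come from the node authorizer: a kubelet may only read a ConfigMap or Secret once a pod referencing it is bound to that node, so these listings fail until the cilium pods above are fully scheduled. A hedged client-go sketch of the equivalent read; the kubeconfig path is an assumption, and the namespace/name are taken from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubelet credentials path; adjust for the actual node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(
		context.TODO(), "cilium-config", metav1.GetOptions{})
	if err != nil {
		// Before a referencing pod is bound here, this fails with the same
		// "forbidden ... no relationship found" error as in the log.
		fmt.Println("get failed:", err)
		return
	}
	fmt.Println("got cilium-config with", len(cm.Data), "keys")
}
```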
Feb 12 19:44:27.134653 kubelet[2436]: W0212 19:44:27.134630 2436 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.134757 kubelet[2436]: E0212 19:44:27.134661 2436 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.134818 kubelet[2436]: W0212 19:44:27.134800 2436 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.134818 kubelet[2436]: E0212 19:44:27.134816 2436 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-48475fc0ad" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-48475fc0ad' and this object Feb 12 19:44:27.135574 kubelet[2436]: I0212 19:44:27.135544 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-lib-modules\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135684 kubelet[2436]: I0212 19:44:27.135586 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135684 kubelet[2436]: I0212 19:44:27.135616 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-cgroup\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135684 kubelet[2436]: I0212 19:44:27.135646 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-qhdb4\" (UID: \"26103147-0f61-4d10-a5cd-c8596482a964\") " pod="kube-system/cilium-operator-f59cbd8c6-qhdb4" Feb 12 19:44:27.135684 kubelet[2436]: I0212 19:44:27.135675 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pntf\" (UniqueName: \"kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf\") pod \"cilium-operator-f59cbd8c6-qhdb4\" (UID: \"26103147-0f61-4d10-a5cd-c8596482a964\") " 
pod="kube-system/cilium-operator-f59cbd8c6-qhdb4" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135701 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-run\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135732 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-xtables-lock\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135763 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrxz\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-kube-api-access-pqrxz\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135792 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hostproc\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135828 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cni-path\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.135859 kubelet[2436]: I0212 19:44:27.135860 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-etc-cni-netd\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136119 kubelet[2436]: I0212 19:44:27.135889 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136119 kubelet[2436]: I0212 19:44:27.135919 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136119 kubelet[2436]: I0212 19:44:27.135948 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-bpf-maps\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136119 kubelet[2436]: I0212 19:44:27.135979 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-net\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136119 kubelet[2436]: I0212 19:44:27.136009 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-kernel\") pod \"cilium-8m926\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " pod="kube-system/cilium-8m926" Feb 12 19:44:27.136351 kubelet[2436]: I0212 19:44:27.136328 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:44:27.141196 systemd[1]: Created slice kubepods-besteffort-pod12d043d1_c7dd_4c75_b967_86119baa997e.slice. Feb 12 19:44:27.237236 kubelet[2436]: I0212 19:44:27.237163 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d043d1-c7dd-4c75-b967-86119baa997e-lib-modules\") pod \"kube-proxy-scgtl\" (UID: \"12d043d1-c7dd-4c75-b967-86119baa997e\") " pod="kube-system/kube-proxy-scgtl" Feb 12 19:44:27.237656 kubelet[2436]: I0212 19:44:27.237331 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2vp\" (UniqueName: \"kubernetes.io/projected/12d043d1-c7dd-4c75-b967-86119baa997e-kube-api-access-vx2vp\") pod \"kube-proxy-scgtl\" (UID: \"12d043d1-c7dd-4c75-b967-86119baa997e\") " pod="kube-system/kube-proxy-scgtl" Feb 12 19:44:27.237846 kubelet[2436]: I0212 19:44:27.237823 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12d043d1-c7dd-4c75-b967-86119baa997e-kube-proxy\") pod \"kube-proxy-scgtl\" (UID: \"12d043d1-c7dd-4c75-b967-86119baa997e\") " pod="kube-system/kube-proxy-scgtl" Feb 12 19:44:27.237919 kubelet[2436]: I0212 19:44:27.237876 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d043d1-c7dd-4c75-b967-86119baa997e-xtables-lock\") pod \"kube-proxy-scgtl\" (UID: \"12d043d1-c7dd-4c75-b967-86119baa997e\") " pod="kube-system/kube-proxy-scgtl" Feb 12 19:44:28.238570 kubelet[2436]: E0212 19:44:28.238493 2436 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.239408 kubelet[2436]: E0212 19:44:28.238700 2436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path podName:ef7c71bc-bf38-4deb-b1d0-6347f99929b0 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:28.738646751 +0000 UTC m=+23.974144559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path") pod "cilium-8m926" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.239408 kubelet[2436]: E0212 19:44:28.238808 2436 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 12 19:44:28.239408 kubelet[2436]: E0212 19:44:28.238858 2436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets podName:ef7c71bc-bf38-4deb-b1d0-6347f99929b0 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:28.738843053 +0000 UTC m=+23.974340861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets") pod "cilium-8m926" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:44:28.239408 kubelet[2436]: E0212 19:44:28.238493 2436 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 12 19:44:28.239408 kubelet[2436]: E0212 19:44:28.238873 2436 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8m926: failed to sync secret cache: timed out waiting for the condition Feb 12 19:44:28.242255 kubelet[2436]: E0212 19:44:28.238917 2436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls podName:ef7c71bc-bf38-4deb-b1d0-6347f99929b0 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:28.738905253 +0000 UTC m=+23.974402961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls") pod "cilium-8m926" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:44:28.242255 kubelet[2436]: E0212 19:44:28.238538 2436 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.242255 kubelet[2436]: E0212 19:44:28.239283 2436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path podName:26103147-0f61-4d10-a5cd-c8596482a964 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:28.739260957 +0000 UTC m=+23.974758765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path") pod "cilium-operator-f59cbd8c6-qhdb4" (UID: "26103147-0f61-4d10-a5cd-c8596482a964") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.520648 kubelet[2436]: E0212 19:44:28.520539 2436 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.520648 kubelet[2436]: E0212 19:44:28.520565 2436 projected.go:198] Error preparing data for projected volume kube-api-access-8pntf for pod kube-system/cilium-operator-f59cbd8c6-qhdb4: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.520648 kubelet[2436]: E0212 19:44:28.520626 2436 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf podName:26103147-0f61-4d10-a5cd-c8596482a964 nodeName:}" failed. No retries permitted until 2024-02-12 19:44:29.020606659 +0000 UTC m=+24.256104367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8pntf" (UniqueName: "kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf") pod "cilium-operator-f59cbd8c6-qhdb4" (UID: "26103147-0f61-4d10-a5cd-c8596482a964") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:44:28.934021 env[1312]: time="2024-02-12T19:44:28.933970531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8m926,Uid:ef7c71bc-bf38-4deb-b1d0-6347f99929b0,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:28.946250 env[1312]: time="2024-02-12T19:44:28.946000346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scgtl,Uid:12d043d1-c7dd-4c75-b967-86119baa997e,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:28.983710 env[1312]: time="2024-02-12T19:44:28.983634208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:28.983975 env[1312]: time="2024-02-12T19:44:28.983667908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:28.983975 env[1312]: time="2024-02-12T19:44:28.983682108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:28.983975 env[1312]: time="2024-02-12T19:44:28.983809909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322 pid=2538 runtime=io.containerd.runc.v2 Feb 12 19:44:29.004122 systemd[1]: Started cri-containerd-ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322.scope. Feb 12 19:44:29.018532 env[1312]: time="2024-02-12T19:44:29.018084036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:29.018532 env[1312]: time="2024-02-12T19:44:29.018146936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:29.018532 env[1312]: time="2024-02-12T19:44:29.018162737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:29.018764 env[1312]: time="2024-02-12T19:44:29.018695942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ef0681bc3e9fc6f92e848a33dedbcc929dc0e9cd547e11985668da9a18323fa pid=2567 runtime=io.containerd.runc.v2 Feb 12 19:44:29.042667 env[1312]: time="2024-02-12T19:44:29.042624567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8m926,Uid:ef7c71bc-bf38-4deb-b1d0-6347f99929b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\"" Feb 12 19:44:29.046379 systemd[1]: Started cri-containerd-1ef0681bc3e9fc6f92e848a33dedbcc929dc0e9cd547e11985668da9a18323fa.scope. Feb 12 19:44:29.055855 env[1312]: time="2024-02-12T19:44:29.055818492Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:44:29.078903 env[1312]: time="2024-02-12T19:44:29.078425705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scgtl,Uid:12d043d1-c7dd-4c75-b967-86119baa997e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ef0681bc3e9fc6f92e848a33dedbcc929dc0e9cd547e11985668da9a18323fa\"" Feb 12 19:44:29.082002 env[1312]: time="2024-02-12T19:44:29.081201631Z" level=info msg="CreateContainer within sandbox \"1ef0681bc3e9fc6f92e848a33dedbcc929dc0e9cd547e11985668da9a18323fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:44:29.111555 env[1312]: time="2024-02-12T19:44:29.111513117Z" level=info msg="CreateContainer within sandbox \"1ef0681bc3e9fc6f92e848a33dedbcc929dc0e9cd547e11985668da9a18323fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcbfa589f06219ebad04c0c58285a60428f4e597542d5b509d55a487c2bd2670\"" Feb 12 19:44:29.112408 env[1312]: time="2024-02-12T19:44:29.112361425Z" level=info msg="StartContainer for \"bcbfa589f06219ebad04c0c58285a60428f4e597542d5b509d55a487c2bd2670\"" Feb 12 19:44:29.130954 systemd[1]: Started cri-containerd-bcbfa589f06219ebad04c0c58285a60428f4e597542d5b509d55a487c2bd2670.scope. Feb 12 19:44:29.170759 env[1312]: time="2024-02-12T19:44:29.170708376Z" level=info msg="StartContainer for \"bcbfa589f06219ebad04c0c58285a60428f4e597542d5b509d55a487c2bd2670\" returns successfully" Feb 12 19:44:29.225914 env[1312]: time="2024-02-12T19:44:29.225821496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-qhdb4,Uid:26103147-0f61-4d10-a5cd-c8596482a964,Namespace:kube-system,Attempt:0,}" Feb 12 19:44:29.257987 env[1312]: time="2024-02-12T19:44:29.257918198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:44:29.258201 env[1312]: time="2024-02-12T19:44:29.257957899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:44:29.258201 env[1312]: time="2024-02-12T19:44:29.257971499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:44:29.258201 env[1312]: time="2024-02-12T19:44:29.258090000Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a pid=2686 runtime=io.containerd.runc.v2 Feb 12 19:44:29.271168 systemd[1]: Started cri-containerd-f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a.scope. Feb 12 19:44:29.311378 env[1312]: time="2024-02-12T19:44:29.311343302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-qhdb4,Uid:26103147-0f61-4d10-a5cd-c8596482a964,Namespace:kube-system,Attempt:0,} returns sandbox id \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\"" Feb 12 19:44:34.502407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004785785.mount: Deactivated successfully. Feb 12 19:44:38.416685 env[1312]: time="2024-02-12T19:44:38.416579153Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:38.507107 env[1312]: time="2024-02-12T19:44:38.507046486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:38.554759 env[1312]: time="2024-02-12T19:44:38.554707372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:38.555563 env[1312]: time="2024-02-12T19:44:38.555520279Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:44:38.558244 env[1312]: time="2024-02-12T19:44:38.558210301Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:44:38.559334 env[1312]: time="2024-02-12T19:44:38.559227409Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:44:38.771203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865044672.mount: Deactivated successfully. Feb 12 19:44:38.859825 env[1312]: time="2024-02-12T19:44:38.859698444Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\"" Feb 12 19:44:38.860566 env[1312]: time="2024-02-12T19:44:38.860512551Z" level=info msg="StartContainer for \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\"" Feb 12 19:44:38.888588 systemd[1]: Started cri-containerd-c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9.scope. 
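The earlier `No retries permitted until ... (durationBeforeRetry 500ms)` entries show the kubelet's nested-pending-operations backoff: a failed MountVolume.SetUp blocks the operation for a delay before the next attempt, and the mount succeeds once the configmap/secret caches sync. A simplified Go sketch of that pattern; the doubling delay is an assumption for illustration, with the real implementation in nestedpendingoperations.go:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithRetry retries an operation with a growing delay, echoing the
// "no retries permitted until <now+delay>" behavior in the log above.
func mountWithRetry(attempt func() error, maxTries int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < maxTries; i++ {
		if err = attempt(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2 // assumed backoff growth for this sketch
	}
	return err
}

func main() {
	tries := 0
	err := mountWithRetry(func() error {
		tries++
		if tries < 3 {
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil // cache synced once the watch is finally permitted
	}, 5)
	fmt.Println("final:", err)
}
```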
Feb 12 19:44:38.920872 env[1312]: time="2024-02-12T19:44:38.920829740Z" level=info msg="StartContainer for \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\" returns successfully" Feb 12 19:44:38.960707 systemd[1]: cri-containerd-c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9.scope: Deactivated successfully. Feb 12 19:44:39.148245 kubelet[2436]: I0212 19:44:39.148199 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-scgtl" podStartSLOduration=20.148168764 pod.CreationTimestamp="2024-02-12 19:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:30.12309054 +0000 UTC m=+25.358588348" watchObservedRunningTime="2024-02-12 19:44:39.148168764 +0000 UTC m=+34.383666472" Feb 12 19:44:39.767463 systemd[1]: run-containerd-runc-k8s.io-c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9-runc.lTmXRZ.mount: Deactivated successfully. Feb 12 19:44:39.767621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9-rootfs.mount: Deactivated successfully. Feb 12 19:44:48.964803 env[1312]: time="2024-02-12T19:44:48.964695583Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9,ID:c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9,Pid:2815,ExitStatus:0,ExitedAt:2024-02-12 19:44:38.963224183 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Feb 12 19:44:50.010216 env[1312]: time="2024-02-12T19:44:50.010167920Z" level=info msg="TaskExit event &TaskExit{ContainerID:c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9,ID:c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9,Pid:2815,ExitStatus:0,ExitedAt:2024-02-12 19:44:38.963224183 +0000 UTC,XXX_unrecognized:[],}" Feb 12 19:44:52.011172 env[1312]: time="2024-02-12T19:44:52.011092401Z" level=error msg="get state for c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9" error="context deadline exceeded: unknown" Feb 12 19:44:52.011172 env[1312]: time="2024-02-12T19:44:52.011148102Z" level=warning msg="unknown status" status=0 Feb 12 19:44:53.159595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036261776.mount: Deactivated successfully. Feb 12 19:44:53.168527 env[1312]: time="2024-02-12T19:44:53.166830088Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:44:53.228749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986589668.mount: Deactivated successfully. Feb 12 19:44:53.258969 env[1312]: time="2024-02-12T19:44:53.258915494Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\"" Feb 12 19:44:53.260969 env[1312]: time="2024-02-12T19:44:53.260209502Z" level=info msg="StartContainer for \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\"" Feb 12 19:44:53.289799 systemd[1]: Started cri-containerd-5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e.scope. 
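The `failed to handle container TaskExit event ... context deadline exceeded` error above is containerd timing out an internal shim call; the same TaskExit event is re-delivered successfully about a minute later. A minimal Go reproduction of that error shape with the context package:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	// Stand-in for a shim RPC that responds slower than the caller's deadline.
	slowCall := func(ctx context.Context) error {
		select {
		case <-time.After(1 * time.Second):
			return nil
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded
		}
	}
	fmt.Println(slowCall(ctx)) // prints "context deadline exceeded"
}
```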
Feb 12 19:44:53.329217 env[1312]: time="2024-02-12T19:44:53.329175756Z" level=info msg="StartContainer for \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\" returns successfully" Feb 12 19:44:53.340318 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:44:53.340644 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:44:53.340847 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:44:53.343860 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:44:53.345171 systemd[1]: cri-containerd-5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e.scope: Deactivated successfully. Feb 12 19:44:53.361730 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:44:53.412083 env[1312]: time="2024-02-12T19:44:53.411496498Z" level=info msg="shim disconnected" id=5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e Feb 12 19:44:53.412083 env[1312]: time="2024-02-12T19:44:53.411549498Z" level=warning msg="cleaning up after shim disconnected" id=5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e namespace=k8s.io Feb 12 19:44:53.412083 env[1312]: time="2024-02-12T19:44:53.411561699Z" level=info msg="cleaning up dead shim" Feb 12 19:44:53.421158 env[1312]: time="2024-02-12T19:44:53.421120061Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2899 runtime=io.containerd.runc.v2\n" Feb 12 19:44:53.994916 env[1312]: time="2024-02-12T19:44:53.994861538Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:54.002187 env[1312]: time="2024-02-12T19:44:54.002144185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:54.007364 env[1312]: time="2024-02-12T19:44:54.007278319Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:44:54.008197 env[1312]: time="2024-02-12T19:44:54.008164025Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 19:44:54.011707 env[1312]: time="2024-02-12T19:44:54.011670747Z" level=info msg="CreateContainer within sandbox \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:44:54.043317 env[1312]: time="2024-02-12T19:44:54.043277353Z" level=info msg="CreateContainer within sandbox \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\"" Feb 12 19:44:54.045355 env[1312]: time="2024-02-12T19:44:54.043848957Z" level=info msg="StartContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\"" Feb 12 19:44:54.061140 systemd[1]: Started cri-containerd-af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415.scope. 
Feb 12 19:44:54.091672 env[1312]: time="2024-02-12T19:44:54.091611167Z" level=info msg="StartContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" returns successfully" Feb 12 19:44:54.166922 env[1312]: time="2024-02-12T19:44:54.166867857Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:44:54.410063 env[1312]: time="2024-02-12T19:44:54.410014238Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\"" Feb 12 19:44:54.411107 env[1312]: time="2024-02-12T19:44:54.411076145Z" level=info msg="StartContainer for \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\"" Feb 12 19:44:54.465554 systemd[1]: Started cri-containerd-2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f.scope. Feb 12 19:44:54.559535 env[1312]: time="2024-02-12T19:44:54.559432210Z" level=info msg="StartContainer for \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\" returns successfully" Feb 12 19:44:54.573041 systemd[1]: cri-containerd-2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f.scope: Deactivated successfully. Feb 12 19:44:55.155431 systemd[1]: run-containerd-runc-k8s.io-2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f-runc.DQ5k9I.mount: Deactivated successfully. Feb 12 19:44:55.155547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f-rootfs.mount: Deactivated successfully. Feb 12 19:44:57.312334 kubelet[2436]: I0212 19:44:55.197927 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-qhdb4" podStartSLOduration=-9.223372000656883e+09 pod.CreationTimestamp="2024-02-12 19:44:19 +0000 UTC" firstStartedPulling="2024-02-12 19:44:29.312506713 +0000 UTC m=+24.548004421" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:44:54.229736566 +0000 UTC m=+49.465234274" watchObservedRunningTime="2024-02-12 19:44:55.197891847 +0000 UTC m=+50.433389655"
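The podStartSLOduration=-9.223372000656883e+09 above is not a real measurement: lastFinishedPulling is the zero time.Time ("0001-01-01 00:00:00 +0000 UTC", never set), and subtracting a 2024 timestamp from it overflows, so time.Time.Sub clamps the result to the minimum time.Duration (math.MinInt64 nanoseconds, roughly -9.22e9 seconds), which then dominates the sum. A sketch of the arithmetic, not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	firstStartedPulling := time.Date(2024, time.February, 12, 19, 44, 29, 0, time.UTC)
	var lastFinishedPulling time.Time // zero value: year 1, as in the log

	// Per the time package docs, an overflowing result is clamped to the
	// minimum representable Duration.
	d := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(d.Seconds()) // ≈ -9.223372036854776e+09 seconds
}
```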
Feb 12 19:44:57.815169 env[1312]: time="2024-02-12T19:44:57.815117980Z" level=info msg="shim disconnected" id=2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f Feb 12 19:44:57.815662 env[1312]: time="2024-02-12T19:44:57.815163181Z" level=warning msg="cleaning up after shim disconnected" id=2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f namespace=k8s.io Feb 12 19:44:57.815662 env[1312]: time="2024-02-12T19:44:57.815194281Z" level=info msg="cleaning up dead shim" Feb 12 19:44:57.823061 env[1312]: time="2024-02-12T19:44:57.823021930Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2999 runtime=io.containerd.runc.v2\n" Feb 12 19:44:58.187572 env[1312]: time="2024-02-12T19:44:58.187526308Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:44:58.464608 env[1312]: time="2024-02-12T19:44:58.464479029Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\"" Feb 12 19:44:58.465244 env[1312]: time="2024-02-12T19:44:58.465163333Z" level=info msg="StartContainer for \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\"" Feb 12 19:44:58.487815 systemd[1]: run-containerd-runc-k8s.io-e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164-runc.2Jskne.mount: Deactivated successfully. Feb 12 19:44:58.491324 systemd[1]: Started cri-containerd-e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164.scope. Feb 12 19:44:58.515356 systemd[1]: cri-containerd-e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164.scope: Deactivated successfully. Feb 12 19:44:58.516838 env[1312]: time="2024-02-12T19:44:58.516746454Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef7c71bc_bf38_4deb_b1d0_6347f99929b0.slice/cri-containerd-e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164.scope/memory.events\": no such file or directory" Feb 12 19:44:58.522354 env[1312]: time="2024-02-12T19:44:58.522316388Z" level=info msg="StartContainer for \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\" returns successfully" Feb 12 19:44:59.312627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164-rootfs.mount: Deactivated successfully. 
Feb 12 19:44:59.458665 env[1312]: time="2024-02-12T19:44:59.458600477Z" level=info msg="shim disconnected" id=e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164 Feb 12 19:44:59.458665 env[1312]: time="2024-02-12T19:44:59.458662578Z" level=warning msg="cleaning up after shim disconnected" id=e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164 namespace=k8s.io Feb 12 19:44:59.458665 env[1312]: time="2024-02-12T19:44:59.458678078Z" level=info msg="cleaning up dead shim" Feb 12 19:44:59.466191 env[1312]: time="2024-02-12T19:44:59.466141924Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:44:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3056 runtime=io.containerd.runc.v2\n" Feb 12 19:45:00.196668 env[1312]: time="2024-02-12T19:45:00.196615903Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:45:00.520192 env[1312]: time="2024-02-12T19:45:00.519946470Z" level=info msg="CreateContainer within sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\"" Feb 12 19:45:00.521128 env[1312]: time="2024-02-12T19:45:00.521084977Z" level=info msg="StartContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\"" Feb 12 19:45:00.544565 systemd[1]: Started cri-containerd-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847.scope. Feb 12 19:45:00.575107 env[1312]: time="2024-02-12T19:45:00.575049505Z" level=info msg="StartContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" returns successfully" Feb 12 19:45:00.693027 kubelet[2436]: I0212 19:45:00.692543 2436 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:45:00.717631 kubelet[2436]: I0212 19:45:00.717595 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:00.723794 systemd[1]: Created slice kubepods-burstable-pod872fe606_c634_4c80_910b_fe6e1e0ef852.slice. Feb 12 19:45:00.726717 kubelet[2436]: I0212 19:45:00.726683 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:45:00.732068 systemd[1]: Created slice kubepods-burstable-poddc7d67ba_8a5f_41a4_8229_4662a7c8775a.slice. 
Feb 12 19:45:00.864635 kubelet[2436]: I0212 19:45:00.864596 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbhr\" (UniqueName: \"kubernetes.io/projected/872fe606-c634-4c80-910b-fe6e1e0ef852-kube-api-access-pbbhr\") pod \"coredns-787d4945fb-7rpfv\" (UID: \"872fe606-c634-4c80-910b-fe6e1e0ef852\") " pod="kube-system/coredns-787d4945fb-7rpfv" Feb 12 19:45:00.864635 kubelet[2436]: I0212 19:45:00.864642 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7d67ba-8a5f-41a4-8229-4662a7c8775a-config-volume\") pod \"coredns-787d4945fb-4pgzj\" (UID: \"dc7d67ba-8a5f-41a4-8229-4662a7c8775a\") " pod="kube-system/coredns-787d4945fb-4pgzj" Feb 12 19:45:00.864909 kubelet[2436]: I0212 19:45:00.864671 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/872fe606-c634-4c80-910b-fe6e1e0ef852-config-volume\") pod \"coredns-787d4945fb-7rpfv\" (UID: \"872fe606-c634-4c80-910b-fe6e1e0ef852\") " pod="kube-system/coredns-787d4945fb-7rpfv" Feb 12 19:45:00.864909 kubelet[2436]: I0212 19:45:00.864700 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwrh\" (UniqueName: \"kubernetes.io/projected/dc7d67ba-8a5f-41a4-8229-4662a7c8775a-kube-api-access-chwrh\") pod \"coredns-787d4945fb-4pgzj\" (UID: \"dc7d67ba-8a5f-41a4-8229-4662a7c8775a\") " pod="kube-system/coredns-787d4945fb-4pgzj" Feb 12 19:45:01.029062 env[1312]: time="2024-02-12T19:45:01.029007265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7rpfv,Uid:872fe606-c634-4c80-910b-fe6e1e0ef852,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:01.037415 env[1312]: time="2024-02-12T19:45:01.037375216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4pgzj,Uid:dc7d67ba-8a5f-41a4-8229-4662a7c8775a,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:01.413591 systemd[1]: run-containerd-runc-k8s.io-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847-runc.hJ3my0.mount: Deactivated successfully. 
Feb 12 19:45:02.747554 systemd-networkd[1460]: cilium_host: Link UP Feb 12 19:45:02.757459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:45:02.757565 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:45:02.757712 systemd-networkd[1460]: cilium_net: Link UP Feb 12 19:45:02.758424 systemd-networkd[1460]: cilium_net: Gained carrier Feb 12 19:45:02.758659 systemd-networkd[1460]: cilium_host: Gained carrier Feb 12 19:45:02.954689 systemd-networkd[1460]: cilium_vxlan: Link UP Feb 12 19:45:02.954699 systemd-networkd[1460]: cilium_vxlan: Gained carrier Feb 12 19:45:03.035627 systemd-networkd[1460]: cilium_net: Gained IPv6LL Feb 12 19:45:03.186466 kernel: NET: Registered PF_ALG protocol family Feb 12 19:45:03.187549 systemd-networkd[1460]: cilium_host: Gained IPv6LL Feb 12 19:45:03.864123 systemd-networkd[1460]: lxc_health: Link UP Feb 12 19:45:03.877114 systemd-networkd[1460]: lxc_health: Gained carrier Feb 12 19:45:03.877474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:45:04.128973 systemd-networkd[1460]: lxce2997101f663: Link UP Feb 12 19:45:04.136845 kernel: eth0: renamed from tmp9ba2a Feb 12 19:45:04.146540 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce2997101f663: link becomes ready Feb 12 19:45:04.145899 systemd-networkd[1460]: lxce2997101f663: Gained carrier Feb 12 19:45:04.242948 systemd-networkd[1460]: lxc0a867deea258: Link UP Feb 12 19:45:04.260547 kernel: eth0: renamed from tmp61be1 Feb 12 19:45:04.268498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0a867deea258: link becomes ready Feb 12 19:45:04.268547 systemd-networkd[1460]: lxc0a867deea258: Gained carrier Feb 12 19:45:04.956065 kubelet[2436]: I0212 19:45:04.956029 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8m926" podStartSLOduration=-9.223371990898785e+09 pod.CreationTimestamp="2024-02-12 19:44:19 +0000 UTC" firstStartedPulling="2024-02-12 19:44:29.054975184 +0000 UTC m=+24.290472892" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:01.21078156 +0000 UTC m=+56.446279368" watchObservedRunningTime="2024-02-12 19:45:04.955990964 +0000 UTC m=+60.191488672" Feb 12 19:45:05.020258 systemd-networkd[1460]: cilium_vxlan: Gained IPv6LL Feb 12 19:45:05.084747 systemd-networkd[1460]: lxc_health: Gained IPv6LL Feb 12 19:45:05.467707 systemd-networkd[1460]: lxc0a867deea258: Gained IPv6LL Feb 12 19:45:05.723691 systemd-networkd[1460]: lxce2997101f663: Gained IPv6LL Feb 12 19:45:08.055057 env[1312]: time="2024-02-12T19:45:08.054978645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:08.055057 env[1312]: time="2024-02-12T19:45:08.055014245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:08.055057 env[1312]: time="2024-02-12T19:45:08.055028145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:08.055778 env[1312]: time="2024-02-12T19:45:08.055725049Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ba2a9ee867f0cbdeb260339fdbef280217cefdc7ad0da7bcc465c658ca09d28 pid=3606 runtime=io.containerd.runc.v2 Feb 12 19:45:08.079387 systemd[1]: Started cri-containerd-9ba2a9ee867f0cbdeb260339fdbef280217cefdc7ad0da7bcc465c658ca09d28.scope. Feb 12 19:45:08.123252 env[1312]: time="2024-02-12T19:45:08.123175929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:08.123407 env[1312]: time="2024-02-12T19:45:08.123266730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:08.123407 env[1312]: time="2024-02-12T19:45:08.123294130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:08.124972 env[1312]: time="2024-02-12T19:45:08.123574132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695 pid=3640 runtime=io.containerd.runc.v2 Feb 12 19:45:08.159810 systemd[1]: run-containerd-runc-k8s.io-61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695-runc.Vbj7yR.mount: Deactivated successfully. Feb 12 19:45:08.168034 systemd[1]: Started cri-containerd-61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695.scope. Feb 12 19:45:08.193986 env[1312]: time="2024-02-12T19:45:08.193752227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7rpfv,Uid:872fe606-c634-4c80-910b-fe6e1e0ef852,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ba2a9ee867f0cbdeb260339fdbef280217cefdc7ad0da7bcc465c658ca09d28\"" Feb 12 19:45:08.207642 env[1312]: time="2024-02-12T19:45:08.207594305Z" level=info msg="CreateContainer within sandbox \"9ba2a9ee867f0cbdeb260339fdbef280217cefdc7ad0da7bcc465c658ca09d28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:45:08.247157 env[1312]: time="2024-02-12T19:45:08.247101728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4pgzj,Uid:dc7d67ba-8a5f-41a4-8229-4662a7c8775a,Namespace:kube-system,Attempt:0,} returns sandbox id \"61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695\"" Feb 12 19:45:08.252737 env[1312]: time="2024-02-12T19:45:08.252699059Z" level=info msg="CreateContainer within sandbox \"61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:45:08.757184 env[1312]: time="2024-02-12T19:45:08.757118402Z" level=info msg="CreateContainer within sandbox \"9ba2a9ee867f0cbdeb260339fdbef280217cefdc7ad0da7bcc465c658ca09d28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7fd22a3463ac26aefd1b9816d0a945a5fac98624aff012a1fbb6955df027900\"" Feb 12 19:45:08.757831 env[1312]: time="2024-02-12T19:45:08.757774306Z" level=info msg="StartContainer for \"f7fd22a3463ac26aefd1b9816d0a945a5fac98624aff012a1fbb6955df027900\"" Feb 12 19:45:08.779273 systemd[1]: Started cri-containerd-f7fd22a3463ac26aefd1b9816d0a945a5fac98624aff012a1fbb6955df027900.scope. 
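
The systemd-networkd entries above (cilium_host/cilium_net/cilium_vxlan gaining carrier, per-pod lxc* veth devices, and the kernel's ADDRCONF link-ready messages) are ordinary netlink link events. A read-only sketch for inspecting those links from Go with the vishvananda/netlink package (must run on the node itself; device names copied from the log):

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err) // e.g. already gone after teardown
			continue
		}
		attrs := link.Attrs()
		// OperState reflects the carrier/ready transitions logged above.
		fmt.Printf("%-14s index=%d type=%s operstate=%s flags=%s\n",
			name, attrs.Index, link.Type(), attrs.OperState, attrs.Flags)
	}
}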
Feb 12 19:45:08.818016 env[1312]: time="2024-02-12T19:45:08.817963145Z" level=info msg="CreateContainer within sandbox \"61be16cec911ddadc258412de9b6e0553fd95a788473b335c244bc54dcf2e695\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ad45bdfb61fd961a696ef9ce51ae9bb73cd50770194ca03f367bb3606c26d19\"" Feb 12 19:45:08.820489 env[1312]: time="2024-02-12T19:45:08.818786249Z" level=info msg="StartContainer for \"7ad45bdfb61fd961a696ef9ce51ae9bb73cd50770194ca03f367bb3606c26d19\"" Feb 12 19:45:08.822837 env[1312]: time="2024-02-12T19:45:08.822796072Z" level=info msg="StartContainer for \"f7fd22a3463ac26aefd1b9816d0a945a5fac98624aff012a1fbb6955df027900\" returns successfully" Feb 12 19:45:08.841468 systemd[1]: Started cri-containerd-7ad45bdfb61fd961a696ef9ce51ae9bb73cd50770194ca03f367bb3606c26d19.scope. Feb 12 19:45:08.876528 env[1312]: time="2024-02-12T19:45:08.876476075Z" level=info msg="StartContainer for \"7ad45bdfb61fd961a696ef9ce51ae9bb73cd50770194ca03f367bb3606c26d19\" returns successfully" Feb 12 19:45:09.232336 kubelet[2436]: I0212 19:45:09.232290 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4pgzj" podStartSLOduration=50.232251668 pod.CreationTimestamp="2024-02-12 19:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:09.230863161 +0000 UTC m=+64.466360869" watchObservedRunningTime="2024-02-12 19:45:09.232251668 +0000 UTC m=+64.467749476" Feb 12 19:45:09.289394 kubelet[2436]: I0212 19:45:09.289348 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-7rpfv" podStartSLOduration=50.289320487 pod.CreationTimestamp="2024-02-12 19:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:09.268756672 +0000 UTC m=+64.504254380" watchObservedRunningTime="2024-02-12 19:45:09.289320487 +0000 UTC m=+64.524818195" Feb 12 19:47:13.734215 systemd[1]: Started sshd@5-10.200.8.37:22-10.200.12.6:42896.service. Feb 12 19:47:14.365733 sshd[3827]: Accepted publickey for core from 10.200.12.6 port 42896 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:14.367502 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:14.373619 systemd-logind[1301]: New session 8 of user core. Feb 12 19:47:14.374292 systemd[1]: Started session-8.scope. Feb 12 19:47:14.906890 sshd[3827]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:14.909838 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:47:14.910051 systemd[1]: sshd@5-10.200.8.37:22-10.200.12.6:42896.service: Deactivated successfully. Feb 12 19:47:14.911013 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:47:14.911850 systemd-logind[1301]: Removed session 8. Feb 12 19:47:20.013339 systemd[1]: Started sshd@6-10.200.8.37:22-10.200.12.6:33372.service. Feb 12 19:47:20.636291 sshd[3842]: Accepted publickey for core from 10.200.12.6 port 33372 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:20.637689 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:20.642593 systemd[1]: Started session-9.scope. Feb 12 19:47:20.643071 systemd-logind[1301]: New session 9 of user core. 
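
Several pod_startup_latency_tracker lines above carry firstStartedPulling/lastFinishedPulling values of 0001-01-01 00:00:00 +0000 UTC — Go's zero time — and the earlier cilium-8m926 entry printed podStartSLOduration=-9.223371990898785e+09. That magnitude is a saturated time.Duration, most plausibly from subtracting across a zero timestamp; a small demonstration (the parsed timestamp is the firstStartedPulling value from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2024-02-12T19:44:29.054975184Z")
	var lastFinished time.Time // zero value, prints as 0001-01-01 00:00:00 +0000 UTC

	// time.Time.Sub clamps results that overflow time.Duration (an int64 of
	// nanoseconds), so subtracting a 2024 timestamp from the zero time
	// yields roughly -9.22e18 ns — the -9.22e+09 s figure in the log.
	d := lastFinished.Sub(started)
	fmt.Printf("%.6e seconds\n", d.Seconds())
}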
Feb 12 19:47:21.130790 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:21.134138 systemd[1]: sshd@6-10.200.8.37:22-10.200.12.6:33372.service: Deactivated successfully. Feb 12 19:47:21.135312 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:47:21.136255 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:47:21.137140 systemd-logind[1301]: Removed session 9. Feb 12 19:47:26.237828 systemd[1]: Started sshd@7-10.200.8.37:22-10.200.12.6:33380.service. Feb 12 19:47:26.861535 sshd[3855]: Accepted publickey for core from 10.200.12.6 port 33380 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:26.862956 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:26.869304 systemd[1]: Started session-10.scope. Feb 12 19:47:26.871080 systemd-logind[1301]: New session 10 of user core. Feb 12 19:47:27.361611 sshd[3855]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:27.364946 systemd[1]: sshd@7-10.200.8.37:22-10.200.12.6:33380.service: Deactivated successfully. Feb 12 19:47:27.366085 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:47:27.367008 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:47:27.367991 systemd-logind[1301]: Removed session 10. Feb 12 19:47:32.468185 systemd[1]: Started sshd@8-10.200.8.37:22-10.200.12.6:43242.service. Feb 12 19:47:33.086474 sshd[3874]: Accepted publickey for core from 10.200.12.6 port 43242 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:33.088271 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:33.094685 systemd[1]: Started session-11.scope. Feb 12 19:47:33.095118 systemd-logind[1301]: New session 11 of user core. Feb 12 19:47:33.579345 sshd[3874]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:33.582464 systemd[1]: sshd@8-10.200.8.37:22-10.200.12.6:43242.service: Deactivated successfully. Feb 12 19:47:33.583437 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:47:33.584165 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:47:33.584927 systemd-logind[1301]: Removed session 11. Feb 12 19:47:38.684842 systemd[1]: Started sshd@9-10.200.8.37:22-10.200.12.6:46110.service. Feb 12 19:47:39.305800 sshd[3889]: Accepted publickey for core from 10.200.12.6 port 46110 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:39.307319 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:39.312547 systemd-logind[1301]: New session 12 of user core. Feb 12 19:47:39.313078 systemd[1]: Started session-12.scope. Feb 12 19:47:39.802511 sshd[3889]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:39.805509 systemd[1]: sshd@9-10.200.8.37:22-10.200.12.6:46110.service: Deactivated successfully. Feb 12 19:47:39.806292 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:47:39.807534 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:47:39.808298 systemd-logind[1301]: Removed session 12. Feb 12 19:47:44.909518 systemd[1]: Started sshd@10-10.200.8.37:22-10.200.12.6:46118.service. 
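
Each repeating block in this stretch — "Accepted publickey for core ... RSA SHA256:...", pam_unix opening the session, systemd-logind registering session N, then the reverse on disconnect — is one complete SSH login. A minimal Go client that would produce exactly this server-side pattern, assuming a hypothetical client key path and the address from the log:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Pinning the host key is preferable; this is for the sketch only.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	client, err := ssh.Dial("tcp", "10.200.8.37:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("uptime")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(out)
}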
Feb 12 19:47:45.534768 sshd[3901]: Accepted publickey for core from 10.200.12.6 port 46118 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:45.536464 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:45.542883 systemd-logind[1301]: New session 13 of user core. Feb 12 19:47:45.543647 systemd[1]: Started session-13.scope. Feb 12 19:47:46.034596 sshd[3901]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:46.037798 systemd[1]: sshd@10-10.200.8.37:22-10.200.12.6:46118.service: Deactivated successfully. Feb 12 19:47:46.038754 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:47:46.039414 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:47:46.040260 systemd-logind[1301]: Removed session 13. Feb 12 19:47:51.138708 systemd[1]: Started sshd@11-10.200.8.37:22-10.200.12.6:58842.service. Feb 12 19:47:51.756747 sshd[3918]: Accepted publickey for core from 10.200.12.6 port 58842 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:51.758542 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:51.763493 systemd-logind[1301]: New session 14 of user core. Feb 12 19:47:51.763991 systemd[1]: Started session-14.scope. Feb 12 19:47:52.255984 sshd[3918]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:52.258763 systemd[1]: sshd@11-10.200.8.37:22-10.200.12.6:58842.service: Deactivated successfully. Feb 12 19:47:52.259938 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:47:52.260033 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:47:52.261232 systemd-logind[1301]: Removed session 14. Feb 12 19:47:52.361964 systemd[1]: Started sshd@12-10.200.8.37:22-10.200.12.6:58852.service. Feb 12 19:47:52.990257 sshd[3931]: Accepted publickey for core from 10.200.12.6 port 58852 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:52.992063 sshd[3931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:52.998088 systemd[1]: Started session-15.scope. Feb 12 19:47:52.998706 systemd-logind[1301]: New session 15 of user core. Feb 12 19:47:54.247847 sshd[3931]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:54.251082 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:47:54.251322 systemd[1]: sshd@12-10.200.8.37:22-10.200.12.6:58852.service: Deactivated successfully. Feb 12 19:47:54.252316 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:47:54.253270 systemd-logind[1301]: Removed session 15. Feb 12 19:47:54.350825 systemd[1]: Started sshd@13-10.200.8.37:22-10.200.12.6:58856.service. Feb 12 19:47:54.964955 sshd[3942]: Accepted publickey for core from 10.200.12.6 port 58856 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:47:54.966672 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:54.972367 systemd-logind[1301]: New session 16 of user core. Feb 12 19:47:54.973188 systemd[1]: Started session-16.scope. Feb 12 19:47:55.452433 sshd[3942]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:55.455861 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit. Feb 12 19:47:55.456098 systemd[1]: sshd@13-10.200.8.37:22-10.200.12.6:58856.service: Deactivated successfully. 
Feb 12 19:47:55.457320 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:47:55.458364 systemd-logind[1301]: Removed session 16. Feb 12 19:47:56.137529 update_engine[1304]: I0212 19:47:56.137488 1304 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 12 19:47:56.137529 update_engine[1304]: I0212 19:47:56.137524 1304 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 12 19:47:56.137997 update_engine[1304]: I0212 19:47:56.137648 1304 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 12 19:47:56.138137 update_engine[1304]: I0212 19:47:56.138111 1304 omaha_request_params.cc:62] Current group set to lts Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138342 1304 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138356 1304 update_attempter.cc:643] Scheduling an action processor start. Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138374 1304 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138406 1304 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138497 1304 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138505 1304 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 12 19:47:56.139669 update_engine[1304]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 12 19:47:56.139669 update_engine[1304]: <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os> Feb 12 19:47:56.139669 update_engine[1304]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{e516f8d0-5643-4214-bd32-7438c82cee71}" oem="azure" oemversion="2.6.0.2-r1" alephversion="3510.3.2" machineid="515fdd59f0a64d6f8732fe1988f9137a" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Feb 12 19:47:56.139669 update_engine[1304]: <ping active="1"></ping> Feb 12 19:47:56.139669 update_engine[1304]: <updatecheck></updatecheck> Feb 12 19:47:56.139669 update_engine[1304]: <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event> Feb 12 19:47:56.139669 update_engine[1304]: </app> Feb 12 19:47:56.139669 update_engine[1304]: </request> Feb 12 19:47:56.139669 update_engine[1304]: I0212 19:47:56.138511 1304 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:47:56.140292 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 12 19:47:56.140455 update_engine[1304]: I0212 19:47:56.139918 1304 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:47:56.140455 update_engine[1304]: I0212 19:47:56.140139 1304 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:47:56.162304 update_engine[1304]: E0212 19:47:56.162256 1304 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:47:56.162433 update_engine[1304]: I0212 19:47:56.162379 1304 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 12 19:48:00.558279 systemd[1]: Started sshd@14-10.200.8.37:22-10.200.12.6:54800.service. 
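
The block above is update_engine's Omaha protocol check-in: an XML <request> carrying <os>, <app>, <ping> and <updatecheck> elements, POSTed to the configured update server. Here the server is literally the hostname "disabled" (hence "Could not resolve host: disabled"), which on Flatcar usually means updates were switched off via SERVER=disabled in update.conf — an inference, not visible in the log itself. A sketch that marshals a minimal subset of the same request with encoding/xml:

package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
	"net/http"
)

// Minimal subset of the Omaha request printed by update_engine above.
type omahaRequest struct {
	XMLName  xml.Name `xml:"request"`
	Protocol string   `xml:"protocol,attr"`
	OS       struct {
		Version  string `xml:"version,attr"`
		Platform string `xml:"platform,attr"`
	} `xml:"os"`
	App struct {
		AppID   string `xml:"appid,attr"`
		Version string `xml:"version,attr"`
		Track   string `xml:"track,attr"`
		Ping    struct {
			Active string `xml:"active,attr"`
		} `xml:"ping"`
		UpdateCheck struct{} `xml:"updatecheck"`
	} `xml:"app"`
}

func main() {
	req := omahaRequest{Protocol: "3.0"}
	req.OS.Version = "Chateau"
	req.OS.Platform = "CoreOS"
	req.App.AppID = "{e96281a6-d1af-4bde-9a0a-97b76e56dc57}"
	req.App.Version = "3510.3.2"
	req.App.Track = "lts"
	req.App.Ping.Active = "1"

	body, _ := xml.MarshalIndent(req, "", "  ")
	fmt.Println(xml.Header + string(body))

	// Posting to the literal host "disabled" fails DNS resolution, which
	// reproduces the "Could not resolve host: disabled" error above.
	_, err := http.Post("http://disabled/v1/update/", "text/xml", bytes.NewReader(body))
	fmt.Println(err)
}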
Feb 12 19:48:01.181153 sshd[3955]: Accepted publickey for core from 10.200.12.6 port 54800 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:01.186201 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:01.191356 systemd-logind[1301]: New session 17 of user core. Feb 12 19:48:01.191608 systemd[1]: Started session-17.scope. Feb 12 19:48:01.677556 sshd[3955]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:01.680809 systemd[1]: sshd@14-10.200.8.37:22-10.200.12.6:54800.service: Deactivated successfully. Feb 12 19:48:01.681924 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:48:01.682775 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:48:01.683840 systemd-logind[1301]: Removed session 17. Feb 12 19:48:06.140475 update_engine[1304]: I0212 19:48:06.140417 1304 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:06.140916 update_engine[1304]: I0212 19:48:06.140676 1304 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:06.140916 update_engine[1304]: I0212 19:48:06.140892 1304 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:48:06.169590 update_engine[1304]: E0212 19:48:06.169549 1304 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:06.169738 update_engine[1304]: I0212 19:48:06.169673 1304 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 12 19:48:06.782250 systemd[1]: Started sshd@15-10.200.8.37:22-10.200.12.6:54806.service. Feb 12 19:48:07.405066 sshd[3969]: Accepted publickey for core from 10.200.12.6 port 54806 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:07.406684 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:07.412339 systemd[1]: Started session-18.scope. Feb 12 19:48:07.413395 systemd-logind[1301]: New session 18 of user core. Feb 12 19:48:07.918810 sshd[3969]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:07.922182 systemd[1]: sshd@15-10.200.8.37:22-10.200.12.6:54806.service: Deactivated successfully. Feb 12 19:48:07.923338 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:48:07.924186 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:48:07.925075 systemd-logind[1301]: Removed session 18. Feb 12 19:48:08.024913 systemd[1]: Started sshd@16-10.200.8.37:22-10.200.12.6:51002.service. Feb 12 19:48:08.647686 sshd[3980]: Accepted publickey for core from 10.200.12.6 port 51002 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:08.649227 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:08.654282 systemd-logind[1301]: New session 19 of user core. Feb 12 19:48:08.654794 systemd[1]: Started session-19.scope. Feb 12 19:48:09.202695 sshd[3980]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:09.206105 systemd[1]: sshd@16-10.200.8.37:22-10.200.12.6:51002.service: Deactivated successfully. Feb 12 19:48:09.207257 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:48:09.208128 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:48:09.208997 systemd-logind[1301]: Removed session 19. Feb 12 19:48:09.309083 systemd[1]: Started sshd@17-10.200.8.37:22-10.200.12.6:51010.service. 
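
Every "Started sshd@N-10.200.8.37:22-10.200.12.6:PORT.service" line is a per-connection instance of a socket-activated template unit: systemd owns the listening socket on port 22, accepts each TCP connection, and spawns one short-lived sshd instance named after the connection endpoints (the Accept=yes pattern — an inference about this host's sshd.socket, though it is the Flatcar default). A sketch of a service written for that pattern using the go-systemd bindings:

package main

import (
	"log"
	"net"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// With Accept=yes in the .socket unit, systemd accepts the connection
	// itself and passes this instance a single already-connected fd.
	files := activation.Files(true)
	if len(files) != 1 {
		log.Fatal("expected exactly one socket-activated fd")
	}
	conn, err := net.FileConn(files[0])
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.Write([]byte("hello from a per-connection service instance\n"))
}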
Feb 12 19:48:09.949816 sshd[3989]: Accepted publickey for core from 10.200.12.6 port 51010 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:09.951495 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:09.956768 systemd[1]: Started session-20.scope. Feb 12 19:48:09.957365 systemd-logind[1301]: New session 20 of user core. Feb 12 19:48:11.446753 sshd[3989]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:11.449601 systemd[1]: sshd@17-10.200.8.37:22-10.200.12.6:51010.service: Deactivated successfully. Feb 12 19:48:11.450489 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:48:11.451725 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:48:11.452582 systemd-logind[1301]: Removed session 20. Feb 12 19:48:11.550979 systemd[1]: Started sshd@18-10.200.8.37:22-10.200.12.6:51018.service. Feb 12 19:48:12.171789 sshd[4054]: Accepted publickey for core from 10.200.12.6 port 51018 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:12.173197 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:12.178122 systemd[1]: Started session-21.scope. Feb 12 19:48:12.178775 systemd-logind[1301]: New session 21 of user core. Feb 12 19:48:12.774316 sshd[4054]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:12.777815 systemd[1]: sshd@18-10.200.8.37:22-10.200.12.6:51018.service: Deactivated successfully. Feb 12 19:48:12.779207 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:48:12.779332 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:48:12.780818 systemd-logind[1301]: Removed session 21. Feb 12 19:48:12.886552 systemd[1]: Started sshd@19-10.200.8.37:22-10.200.12.6:51026.service. Feb 12 19:48:13.511313 sshd[4065]: Accepted publickey for core from 10.200.12.6 port 51026 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:13.512816 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:13.518045 systemd[1]: Started session-22.scope. Feb 12 19:48:13.518714 systemd-logind[1301]: New session 22 of user core. Feb 12 19:48:14.013730 sshd[4065]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:14.017707 systemd[1]: sshd@19-10.200.8.37:22-10.200.12.6:51026.service: Deactivated successfully. Feb 12 19:48:14.018625 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:48:14.019385 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:48:14.020171 systemd-logind[1301]: Removed session 22. Feb 12 19:48:16.138623 update_engine[1304]: I0212 19:48:16.138532 1304 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:16.139303 update_engine[1304]: I0212 19:48:16.139022 1304 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:16.139303 update_engine[1304]: I0212 19:48:16.139227 1304 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:48:16.162082 update_engine[1304]: E0212 19:48:16.161974 1304 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:16.162344 update_engine[1304]: I0212 19:48:16.162245 1304 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 12 19:48:19.118654 systemd[1]: Started sshd@20-10.200.8.37:22-10.200.12.6:32836.service. 
Feb 12 19:48:19.738543 sshd[4104]: Accepted publickey for core from 10.200.12.6 port 32836 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:19.740156 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:19.745259 systemd[1]: Started session-23.scope. Feb 12 19:48:19.746087 systemd-logind[1301]: New session 23 of user core. Feb 12 19:48:20.226827 sshd[4104]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:20.229881 systemd[1]: sshd@20-10.200.8.37:22-10.200.12.6:32836.service: Deactivated successfully. Feb 12 19:48:20.230882 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:48:20.231637 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:48:20.232391 systemd-logind[1301]: Removed session 23. Feb 12 19:48:25.332820 systemd[1]: Started sshd@21-10.200.8.37:22-10.200.12.6:32850.service. Feb 12 19:48:25.972132 sshd[4116]: Accepted publickey for core from 10.200.12.6 port 32850 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:25.973771 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:25.979930 systemd-logind[1301]: New session 24 of user core. Feb 12 19:48:25.980422 systemd[1]: Started session-24.scope. Feb 12 19:48:26.137531 update_engine[1304]: I0212 19:48:26.137485 1304 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:26.137948 update_engine[1304]: I0212 19:48:26.137733 1304 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:26.137948 update_engine[1304]: I0212 19:48:26.137931 1304 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:48:26.144574 update_engine[1304]: E0212 19:48:26.144542 1304 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:26.144709 update_engine[1304]: I0212 19:48:26.144639 1304 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:48:26.144709 update_engine[1304]: I0212 19:48:26.144649 1304 omaha_request_action.cc:621] Omaha request response: Feb 12 19:48:26.144794 update_engine[1304]: E0212 19:48:26.144727 1304 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144742 1304 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144747 1304 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144750 1304 update_attempter.cc:306] Processing Done. Feb 12 19:48:26.144794 update_engine[1304]: E0212 19:48:26.144764 1304 update_attempter.cc:619] Update failed. Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144770 1304 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144775 1304 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 12 19:48:26.144794 update_engine[1304]: I0212 19:48:26.144780 1304 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Feb 12 19:48:26.145083 update_engine[1304]: I0212 19:48:26.144862 1304 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 12 19:48:26.145083 update_engine[1304]: I0212 19:48:26.144884 1304 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 12 19:48:26.145083 update_engine[1304]: I0212 19:48:26.144889 1304 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?> Feb 12 19:48:26.145083 update_engine[1304]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1"> Feb 12 19:48:26.145083 update_engine[1304]: <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os> Feb 12 19:48:26.145083 update_engine[1304]: <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{e516f8d0-5643-4214-bd32-7438c82cee71}" oem="azure" oemversion="2.6.0.2-r1" alephversion="3510.3.2" machineid="515fdd59f0a64d6f8732fe1988f9137a" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" > Feb 12 19:48:26.145083 update_engine[1304]: <event eventtype="3" eventresult="0" errorcode="268437456"></event> Feb 12 19:48:26.145083 update_engine[1304]: </app> Feb 12 19:48:26.145083 update_engine[1304]: </request> Feb 12 19:48:26.145083 update_engine[1304]: I0212 19:48:26.144896 1304 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 12 19:48:26.145083 update_engine[1304]: I0212 19:48:26.145033 1304 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 12 19:48:26.145370 update_engine[1304]: I0212 19:48:26.145171 1304 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 12 19:48:26.145689 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 12 19:48:26.152087 update_engine[1304]: E0212 19:48:26.152058 1304 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152143 1304 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152151 1304 omaha_request_action.cc:621] Omaha request response: Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152158 1304 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152161 1304 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152165 1304 update_attempter.cc:306] Processing Done. Feb 12 19:48:26.152179 update_engine[1304]: I0212 19:48:26.152170 1304 update_attempter.cc:310] Error event sent. Feb 12 19:48:26.152402 update_engine[1304]: I0212 19:48:26.152179 1304 update_check_scheduler.cc:74] Next update check in 42m12s Feb 12 19:48:26.152560 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 12 19:48:26.468781 sshd[4116]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:26.472008 systemd[1]: sshd@21-10.200.8.37:22-10.200.12.6:32850.service: Deactivated successfully. Feb 12 19:48:26.473189 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:48:26.474058 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit. 
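
Taken together, the update_engine entries from 19:47:56 to 19:48:26 show a simple bounded fetch loop: one transfer attempt roughly every ten seconds, "No HTTP response, retry N" on each failure, then "Processing Done", an error event, and "Next update check in 42m12s". The control flow reduces to something like the following schematic (URL, attempt count, and delay are illustrative, not update_engine's actual constants):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetries mimics the pattern above: a bounded number of attempts,
// a log line per failure, and the final error handed back to the caller,
// which reports it and schedules the next periodic check.
func fetchWithRetries(url string, attempts int, delay time.Duration) error {
	var err error
	for i := 1; i <= attempts; i++ {
		var resp *http.Response
		if resp, err = http.Get(url); err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", i, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := fetchWithRetries("http://disabled/v1/update/", 3, 10*time.Second); err != nil {
		fmt.Println("Omaha request network transfer failed:", err)
		fmt.Println("Next update check in 42m12s")
	}
}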
Feb 12 19:48:26.475083 systemd-logind[1301]: Removed session 24. Feb 12 19:48:31.575227 systemd[1]: Started sshd@22-10.200.8.37:22-10.200.12.6:50448.service. Feb 12 19:48:32.197340 sshd[4130]: Accepted publickey for core from 10.200.12.6 port 50448 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:32.199031 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:32.203234 systemd-logind[1301]: New session 25 of user core. Feb 12 19:48:32.205219 systemd[1]: Started session-25.scope. Feb 12 19:48:32.686144 sshd[4130]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:32.689510 systemd[1]: sshd@22-10.200.8.37:22-10.200.12.6:50448.service: Deactivated successfully. Feb 12 19:48:32.690648 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:48:32.691492 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Feb 12 19:48:32.692277 systemd-logind[1301]: Removed session 25. Feb 12 19:48:32.791084 systemd[1]: Started sshd@23-10.200.8.37:22-10.200.12.6:50458.service. Feb 12 19:48:33.431470 sshd[4143]: Accepted publickey for core from 10.200.12.6 port 50458 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:33.433141 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:33.438005 systemd-logind[1301]: New session 26 of user core. Feb 12 19:48:33.438499 systemd[1]: Started session-26.scope. Feb 12 19:48:35.075437 env[1312]: time="2024-02-12T19:48:35.075383836Z" level=info msg="StopContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" with timeout 30 (s)" Feb 12 19:48:35.076118 env[1312]: time="2024-02-12T19:48:35.076083541Z" level=info msg="Stop container \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" with signal terminated" Feb 12 19:48:35.099992 systemd[1]: cri-containerd-af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415.scope: Deactivated successfully. Feb 12 19:48:35.106160 systemd[1]: run-containerd-runc-k8s.io-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847-runc.V05GLU.mount: Deactivated successfully. Feb 12 19:48:35.132976 env[1312]: time="2024-02-12T19:48:35.132852298Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:48:35.137618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415-rootfs.mount: Deactivated successfully. 
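
"StopContainer ... with timeout 30 (s)" above is the CRI stop RPC: the runtime signals the container (SIGTERM or the image's stop signal), waits out the grace period, then kills the task, after which the cri-containerd-*.scope deactivates and the rootfs mount is cleaned up, exactly as logged. A sketch of the same calls over the node's CRI socket (container and sandbox ids copied from the log; the sandbox stop mirrors the StopPodSandbox that follows):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Grace period of 30 seconds before the runtime escalates to SIGKILL.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415",
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}

	// Then the sandbox itself, tearing down its network namespace.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a",
	}); err != nil {
		log.Fatal(err)
	}
}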
Feb 12 19:48:35.143188 env[1312]: time="2024-02-12T19:48:35.143147581Z" level=info msg="StopContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" with timeout 1 (s)" Feb 12 19:48:35.143516 env[1312]: time="2024-02-12T19:48:35.143492684Z" level=info msg="Stop container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" with signal terminated" Feb 12 19:48:35.151520 systemd-networkd[1460]: lxc_health: Link DOWN Feb 12 19:48:35.151528 systemd-networkd[1460]: lxc_health: Lost carrier Feb 12 19:48:35.163740 env[1312]: time="2024-02-12T19:48:35.163691947Z" level=info msg="shim disconnected" id=af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415 Feb 12 19:48:35.163862 env[1312]: time="2024-02-12T19:48:35.163742847Z" level=warning msg="cleaning up after shim disconnected" id=af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415 namespace=k8s.io Feb 12 19:48:35.163862 env[1312]: time="2024-02-12T19:48:35.163756747Z" level=info msg="cleaning up dead shim" Feb 12 19:48:35.173118 env[1312]: time="2024-02-12T19:48:35.173082822Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4200 runtime=io.containerd.runc.v2\n" Feb 12 19:48:35.176951 systemd[1]: cri-containerd-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847.scope: Deactivated successfully. Feb 12 19:48:35.177241 systemd[1]: cri-containerd-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847.scope: Consumed 7.477s CPU time. Feb 12 19:48:35.178489 env[1312]: time="2024-02-12T19:48:35.178352465Z" level=info msg="StopContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" returns successfully" Feb 12 19:48:35.180238 env[1312]: time="2024-02-12T19:48:35.180198979Z" level=info msg="StopPodSandbox for \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\"" Feb 12 19:48:35.180327 env[1312]: time="2024-02-12T19:48:35.180276680Z" level=info msg="Container to stop \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.182482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a-shm.mount: Deactivated successfully. Feb 12 19:48:35.190623 systemd[1]: cri-containerd-f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a.scope: Deactivated successfully. 
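
The mount units deactivated during this teardown (the run-containerd-...-shm.mount above, and the var-lib-kubelet-pods-...\x2d...\x7eprojected-... units in the volume cleanup that follows) are systemd-escaped filesystem paths: "/" becomes "-", and any byte outside [a-zA-Z0-9:_.] is hex-escaped as \xNN, which is why every hyphen inside a pod UID appears as \x2d and the "~" in kubernetes.io~projected as \x7e. An approximate re-implementation of that escaping (real systemd also special-cases leading dots and empty paths):

package main

import "fmt"

// systemdEscapePath approximates systemd's path-to-unit-name escaping.
func systemdEscapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:] // systemd drops the leading slash
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c == '_' || c == '.' || c == ':' ||
			(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9'):
			out = append(out, c)
		default:
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	// Reproduces the kube-api-access mount unit name seen in the teardown below.
	fmt.Println(systemdEscapePath(
		"/var/lib/kubelet/pods/26103147-0f61-4d10-a5cd-c8596482a964/volumes/kubernetes.io~projected/kube-api-access-8pntf") + ".mount")
}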
Feb 12 19:48:35.714053 env[1312]: time="2024-02-12T19:48:35.713993076Z" level=info msg="shim disconnected" id=9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847 Feb 12 19:48:35.714522 env[1312]: time="2024-02-12T19:48:35.714475180Z" level=warning msg="cleaning up after shim disconnected" id=9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847 namespace=k8s.io Feb 12 19:48:35.714522 env[1312]: time="2024-02-12T19:48:35.714515681Z" level=info msg="cleaning up dead shim" Feb 12 19:48:35.714856 env[1312]: time="2024-02-12T19:48:35.714288979Z" level=info msg="shim disconnected" id=f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a Feb 12 19:48:35.714952 env[1312]: time="2024-02-12T19:48:35.714864183Z" level=warning msg="cleaning up after shim disconnected" id=f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a namespace=k8s.io Feb 12 19:48:35.714952 env[1312]: time="2024-02-12T19:48:35.714888184Z" level=info msg="cleaning up dead shim" Feb 12 19:48:35.725757 env[1312]: time="2024-02-12T19:48:35.725719571Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\n" Feb 12 19:48:35.726177 env[1312]: time="2024-02-12T19:48:35.726144674Z" level=info msg="TearDown network for sandbox \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" successfully" Feb 12 19:48:35.726266 env[1312]: time="2024-02-12T19:48:35.726175374Z" level=info msg="StopPodSandbox for \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" returns successfully" Feb 12 19:48:35.726924 env[1312]: time="2024-02-12T19:48:35.726739179Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4248 runtime=io.containerd.runc.v2\n" Feb 12 19:48:35.731981 env[1312]: time="2024-02-12T19:48:35.731830420Z" level=info msg="StopContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" returns successfully" Feb 12 19:48:35.732416 env[1312]: time="2024-02-12T19:48:35.732361824Z" level=info msg="StopPodSandbox for \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\"" Feb 12 19:48:35.732523 env[1312]: time="2024-02-12T19:48:35.732457325Z" level=info msg="Container to stop \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.732523 env[1312]: time="2024-02-12T19:48:35.732480825Z" level=info msg="Container to stop \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.732523 env[1312]: time="2024-02-12T19:48:35.732498425Z" level=info msg="Container to stop \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.732523 env[1312]: time="2024-02-12T19:48:35.732513625Z" level=info msg="Container to stop \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.732682 env[1312]: time="2024-02-12T19:48:35.732527626Z" level=info msg="Container to stop \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:35.738644 systemd[1]: 
cri-containerd-ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322.scope: Deactivated successfully. Feb 12 19:48:35.917463 kubelet[2436]: I0212 19:48:35.917408 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pntf\" (UniqueName: \"kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf\") pod \"26103147-0f61-4d10-a5cd-c8596482a964\" (UID: \"26103147-0f61-4d10-a5cd-c8596482a964\") " Feb 12 19:48:36.108077 kubelet[2436]: I0212 19:48:35.917504 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path\") pod \"26103147-0f61-4d10-a5cd-c8596482a964\" (UID: \"26103147-0f61-4d10-a5cd-c8596482a964\") " Feb 12 19:48:36.098536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847-rootfs.mount: Deactivated successfully. Feb 12 19:48:36.098680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a-rootfs.mount: Deactivated successfully. Feb 12 19:48:36.098787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322-rootfs.mount: Deactivated successfully. Feb 12 19:48:36.098890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322-shm.mount: Deactivated successfully. Feb 12 19:48:36.109286 kubelet[2436]: W0212 19:48:36.108871 2436 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/26103147-0f61-4d10-a5cd-c8596482a964/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:36.113584 kubelet[2436]: I0212 19:48:36.112182 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf" (OuterVolumeSpecName: "kube-api-access-8pntf") pod "26103147-0f61-4d10-a5cd-c8596482a964" (UID: "26103147-0f61-4d10-a5cd-c8596482a964"). InnerVolumeSpecName "kube-api-access-8pntf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:36.112774 systemd[1]: var-lib-kubelet-pods-26103147\x2d0f61\x2d4d10\x2da5cd\x2dc8596482a964-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pntf.mount: Deactivated successfully. Feb 12 19:48:36.114819 kubelet[2436]: I0212 19:48:36.114791 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26103147-0f61-4d10-a5cd-c8596482a964" (UID: "26103147-0f61-4d10-a5cd-c8596482a964"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:36.118346 kubelet[2436]: I0212 19:48:36.118323 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26103147-0f61-4d10-a5cd-c8596482a964-cilium-config-path\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.118432 kubelet[2436]: I0212 19:48:36.118353 2436 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8pntf\" (UniqueName: \"kubernetes.io/projected/26103147-0f61-4d10-a5cd-c8596482a964-kube-api-access-8pntf\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.361437 env[1312]: time="2024-02-12T19:48:36.361276479Z" level=info msg="shim disconnected" id=ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322 Feb 12 19:48:36.361437 env[1312]: time="2024-02-12T19:48:36.361340280Z" level=warning msg="cleaning up after shim disconnected" id=ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322 namespace=k8s.io Feb 12 19:48:36.361437 env[1312]: time="2024-02-12T19:48:36.361354680Z" level=info msg="cleaning up dead shim" Feb 12 19:48:36.363288 env[1312]: time="2024-02-12T19:48:36.363229595Z" level=info msg="shim disconnected" id=c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9 Feb 12 19:48:36.363527 env[1312]: time="2024-02-12T19:48:36.363289295Z" level=warning msg="cleaning up after shim disconnected" id=c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9 namespace=k8s.io Feb 12 19:48:36.363527 env[1312]: time="2024-02-12T19:48:36.363304196Z" level=info msg="cleaning up dead shim" Feb 12 19:48:36.373297 env[1312]: time="2024-02-12T19:48:36.373264476Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4297 runtime=io.containerd.runc.v2\n" Feb 12 19:48:36.373744 env[1312]: time="2024-02-12T19:48:36.373715379Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4296 runtime=io.containerd.runc.v2\n" Feb 12 19:48:36.373985 env[1312]: time="2024-02-12T19:48:36.373956081Z" level=info msg="TearDown network for sandbox \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" successfully" Feb 12 19:48:36.373985 env[1312]: time="2024-02-12T19:48:36.373980381Z" level=info msg="StopPodSandbox for \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" returns successfully" Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520599 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520719 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520759 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-lib-modules\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: 
\"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520791 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-xtables-lock\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520822 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hostproc\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.521928 kubelet[2436]: I0212 19:48:36.520855 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-net\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.520887 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-etc-cni-netd\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.520917 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-run\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.520945 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cni-path\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.520976 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-bpf-maps\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.521012 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-kernel\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522422 kubelet[2436]: I0212 19:48:36.521056 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-cgroup\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522785 kubelet[2436]: I0212 19:48:36.521089 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqrxz\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-kube-api-access-pqrxz\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522785 
kubelet[2436]: I0212 19:48:36.521122 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls\") pod \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\" (UID: \"ef7c71bc-bf38-4deb-b1d0-6347f99929b0\") " Feb 12 19:48:36.522785 kubelet[2436]: I0212 19:48:36.521502 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.522785 kubelet[2436]: W0212 19:48:36.521729 2436 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ef7c71bc-bf38-4deb-b1d0-6347f99929b0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:36.525462 kubelet[2436]: I0212 19:48:36.523063 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.525462 kubelet[2436]: I0212 19:48:36.523113 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.525462 kubelet[2436]: I0212 19:48:36.523139 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.525462 kubelet[2436]: I0212 19:48:36.523163 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.525462 kubelet[2436]: I0212 19:48:36.523189 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.536581 kubelet[2436]: I0212 19:48:36.526366 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.536581 kubelet[2436]: I0212 19:48:36.526411 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.536581 kubelet[2436]: I0212 19:48:36.526436 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.536581 kubelet[2436]: I0212 19:48:36.526495 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:36.536581 kubelet[2436]: I0212 19:48:36.531219 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:36.528523 systemd[1]: var-lib-kubelet-pods-ef7c71bc\x2dbf38\x2d4deb\x2db1d0\x2d6347f99929b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:48:36.536886 kubelet[2436]: I0212 19:48:36.531307 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:36.535576 systemd[1]: var-lib-kubelet-pods-ef7c71bc\x2dbf38\x2d4deb\x2db1d0\x2d6347f99929b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:48:36.537164 kubelet[2436]: I0212 19:48:36.537128 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:48:36.540533 kubelet[2436]: I0212 19:48:36.540475 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-kube-api-access-pqrxz" (OuterVolumeSpecName: "kube-api-access-pqrxz") pod "ef7c71bc-bf38-4deb-b1d0-6347f99929b0" (UID: "ef7c71bc-bf38-4deb-b1d0-6347f99929b0"). InnerVolumeSpecName "kube-api-access-pqrxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:36.541158 systemd[1]: var-lib-kubelet-pods-ef7c71bc\x2dbf38\x2d4deb\x2db1d0\x2d6347f99929b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpqrxz.mount: Deactivated successfully. Feb 12 19:48:36.622058 kubelet[2436]: I0212 19:48:36.621821 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-run\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.622058 kubelet[2436]: I0212 19:48:36.621900 2436 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cni-path\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.622058 kubelet[2436]: I0212 19:48:36.621944 2436 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-bpf-maps\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.622058 kubelet[2436]: I0212 19:48:36.621990 2436 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623676 kubelet[2436]: I0212 19:48:36.623638 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-cgroup\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623681 2436 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-pqrxz\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-kube-api-access-pqrxz\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623703 2436 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hubble-tls\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623720 2436 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-clustermesh-secrets\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623738 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-cilium-config-path\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623757 2436 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-hostproc\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623775 2436 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-host-proc-sys-net\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.623793 kubelet[2436]: I0212 19:48:36.623796 2436 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-lib-modules\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.624042 kubelet[2436]: I0212 19:48:36.623814 2436 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-xtables-lock\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.624042 kubelet[2436]: I0212 19:48:36.623833 2436 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef7c71bc-bf38-4deb-b1d0-6347f99929b0-etc-cni-netd\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:36.648997 kubelet[2436]: I0212 19:48:36.648970 2436 scope.go:115] "RemoveContainer" containerID="9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847" Feb 12 19:48:36.653118 systemd[1]: Removed slice kubepods-burstable-podef7c71bc_bf38_4deb_b1d0_6347f99929b0.slice. Feb 12 19:48:36.653260 systemd[1]: kubepods-burstable-podef7c71bc_bf38_4deb_b1d0_6347f99929b0.slice: Consumed 7.584s CPU time. Feb 12 19:48:36.662576 env[1312]: time="2024-02-12T19:48:36.662540698Z" level=info msg="RemoveContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\"" Feb 12 19:48:36.670251 systemd[1]: Removed slice kubepods-besteffort-pod26103147_0f61_4d10_a5cd_c8596482a964.slice. Feb 12 19:48:36.674547 env[1312]: time="2024-02-12T19:48:36.674384793Z" level=info msg="RemoveContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" returns successfully" Feb 12 19:48:36.674799 kubelet[2436]: I0212 19:48:36.674779 2436 scope.go:115] "RemoveContainer" containerID="e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164" Feb 12 19:48:36.676949 env[1312]: time="2024-02-12T19:48:36.676902113Z" level=info msg="RemoveContainer for \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\"" Feb 12 19:48:36.686786 env[1312]: time="2024-02-12T19:48:36.686192088Z" level=info msg="RemoveContainer for \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\" returns successfully" Feb 12 19:48:36.686891 kubelet[2436]: I0212 19:48:36.686372 2436 scope.go:115] "RemoveContainer" containerID="2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f" Feb 12 19:48:36.688252 env[1312]: time="2024-02-12T19:48:36.688227104Z" level=info msg="RemoveContainer for \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\"" Feb 12 19:48:36.696674 env[1312]: time="2024-02-12T19:48:36.696643672Z" level=info msg="RemoveContainer for \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\" returns successfully" Feb 12 19:48:36.696800 kubelet[2436]: I0212 19:48:36.696781 2436 scope.go:115] "RemoveContainer" containerID="5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e" Feb 12 19:48:36.697864 env[1312]: time="2024-02-12T19:48:36.697843982Z" level=info msg="RemoveContainer for \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\"" Feb 12 19:48:36.760244 env[1312]: time="2024-02-12T19:48:36.760194682Z" level=info msg="RemoveContainer for \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\" returns successfully" Feb 12 19:48:36.760549 kubelet[2436]: I0212 19:48:36.760511 2436 scope.go:115] "RemoveContainer" containerID="c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9" Feb 12 19:48:36.761956 env[1312]: time="2024-02-12T19:48:36.761876896Z" level=info msg="RemoveContainer for 
\"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\"" Feb 12 19:48:36.868878 env[1312]: time="2024-02-12T19:48:36.868830154Z" level=info msg="RemoveContainer for \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\" returns successfully" Feb 12 19:48:36.869133 kubelet[2436]: I0212 19:48:36.869108 2436 scope.go:115] "RemoveContainer" containerID="9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847" Feb 12 19:48:36.869514 env[1312]: time="2024-02-12T19:48:36.869419359Z" level=error msg="ContainerStatus for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": not found" Feb 12 19:48:36.869675 kubelet[2436]: E0212 19:48:36.869654 2436 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": not found" containerID="9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847" Feb 12 19:48:36.869756 kubelet[2436]: I0212 19:48:36.869703 2436 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847} err="failed to get container status \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": not found" Feb 12 19:48:36.869756 kubelet[2436]: I0212 19:48:36.869720 2436 scope.go:115] "RemoveContainer" containerID="e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164" Feb 12 19:48:36.869953 env[1312]: time="2024-02-12T19:48:36.869906263Z" level=error msg="ContainerStatus for \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\": not found" Feb 12 19:48:36.870084 kubelet[2436]: E0212 19:48:36.870064 2436 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\": not found" containerID="e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164" Feb 12 19:48:36.870161 kubelet[2436]: I0212 19:48:36.870099 2436 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164} err="failed to get container status \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8f25beb85b4d78a2aea4731c3de995a892c49586d7e749df545d5a5deb1b164\": not found" Feb 12 19:48:36.870161 kubelet[2436]: I0212 19:48:36.870113 2436 scope.go:115] "RemoveContainer" containerID="2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f" Feb 12 19:48:36.870346 env[1312]: time="2024-02-12T19:48:36.870290066Z" level=error msg="ContainerStatus for \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\": not found" Feb 12 19:48:36.870490 kubelet[2436]: E0212 19:48:36.870473 2436 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\": not found" containerID="2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f" Feb 12 19:48:36.870573 kubelet[2436]: I0212 19:48:36.870506 2436 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f} err="failed to get container status \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f7a0d1cd72efe70920971a8f5f2c474431f04db34f2892870a75e9bf7677a7f\": not found" Feb 12 19:48:36.870573 kubelet[2436]: I0212 19:48:36.870518 2436 scope.go:115] "RemoveContainer" containerID="5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e" Feb 12 19:48:36.870729 env[1312]: time="2024-02-12T19:48:36.870682569Z" level=error msg="ContainerStatus for \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\": not found" Feb 12 19:48:36.870858 kubelet[2436]: E0212 19:48:36.870839 2436 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\": not found" containerID="5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e" Feb 12 19:48:36.870927 kubelet[2436]: I0212 19:48:36.870881 2436 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e} err="failed to get container status \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ba9987b374070c93c2b8824fc3aad435b12c91d56411f41920fac913723562e\": not found" Feb 12 19:48:36.870927 kubelet[2436]: I0212 19:48:36.870897 2436 scope.go:115] "RemoveContainer" containerID="c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9" Feb 12 19:48:36.871116 env[1312]: time="2024-02-12T19:48:36.871067972Z" level=error msg="ContainerStatus for \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\": not found" Feb 12 19:48:36.871233 kubelet[2436]: E0212 19:48:36.871214 2436 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\": not found" containerID="c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9" Feb 12 19:48:36.871312 kubelet[2436]: I0212 19:48:36.871246 2436 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9} err="failed to get container status 
\"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2695b617f14cfb02ce6985a0eee088fc24575d0e68165689edbaefd5ce721a9\": not found" Feb 12 19:48:36.871312 kubelet[2436]: I0212 19:48:36.871259 2436 scope.go:115] "RemoveContainer" containerID="af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415" Feb 12 19:48:36.872318 env[1312]: time="2024-02-12T19:48:36.872232082Z" level=info msg="RemoveContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\"" Feb 12 19:48:36.965728 env[1312]: time="2024-02-12T19:48:36.965679132Z" level=info msg="RemoveContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" returns successfully" Feb 12 19:48:37.053389 env[1312]: time="2024-02-12T19:48:37.053331735Z" level=info msg="StopContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" with timeout 1 (s)" Feb 12 19:48:37.053605 env[1312]: time="2024-02-12T19:48:37.053401135Z" level=error msg="StopContainer for \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": not found" Feb 12 19:48:37.053750 env[1312]: time="2024-02-12T19:48:37.053716338Z" level=info msg="StopContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" with timeout 1 (s)" Feb 12 19:48:37.053840 env[1312]: time="2024-02-12T19:48:37.053766238Z" level=error msg="StopContainer for \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\": not found" Feb 12 19:48:37.054071 kubelet[2436]: E0212 19:48:37.054050 2436 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415\": not found" containerID="af930dffb94b15ffbb84507b82ec82b172554d8ae483f6c73c982941bc840415" Feb 12 19:48:37.054873 kubelet[2436]: E0212 19:48:37.054351 2436 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847\": not found" containerID="9a54365e035423f071c49754a8619006c7cb3f1d8e1ab0fe8fbd80e6dde0f847" Feb 12 19:48:37.055382 env[1312]: time="2024-02-12T19:48:37.055344051Z" level=info msg="StopPodSandbox for \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\"" Feb 12 19:48:37.055603 env[1312]: time="2024-02-12T19:48:37.055513952Z" level=info msg="TearDown network for sandbox \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" successfully" Feb 12 19:48:37.055689 env[1312]: time="2024-02-12T19:48:37.055595253Z" level=info msg="StopPodSandbox for \"f545c333e4402d76910ef0488453a0f6c3d623996b368227e5bf08aa95b5922a\" returns successfully" Feb 12 19:48:37.055787 env[1312]: time="2024-02-12T19:48:37.055759754Z" level=info msg="StopPodSandbox for \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\"" Feb 12 19:48:37.055939 env[1312]: time="2024-02-12T19:48:37.055869955Z" level=info msg="TearDown network for sandbox 
\"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" successfully" Feb 12 19:48:37.056020 env[1312]: time="2024-02-12T19:48:37.055940456Z" level=info msg="StopPodSandbox for \"ce9d75326fe5a176315a13d55b28f6b8c69aef95250cbb8216fd29300c00f322\" returns successfully" Feb 12 19:48:37.056309 kubelet[2436]: I0212 19:48:37.056291 2436 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=26103147-0f61-4d10-a5cd-c8596482a964 path="/var/lib/kubelet/pods/26103147-0f61-4d10-a5cd-c8596482a964/volumes" Feb 12 19:48:37.057648 kubelet[2436]: I0212 19:48:37.057631 2436 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ef7c71bc-bf38-4deb-b1d0-6347f99929b0 path="/var/lib/kubelet/pods/ef7c71bc-bf38-4deb-b1d0-6347f99929b0/volumes" Feb 12 19:48:37.106405 sshd[4143]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:37.109691 systemd[1]: sshd@23-10.200.8.37:22-10.200.12.6:50458.service: Deactivated successfully. Feb 12 19:48:37.111119 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 19:48:37.111168 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit. Feb 12 19:48:37.112544 systemd-logind[1301]: Removed session 26. Feb 12 19:48:37.211192 systemd[1]: Started sshd@24-10.200.8.37:22-10.200.12.6:57970.service. Feb 12 19:48:37.823823 sshd[4329]: Accepted publickey for core from 10.200.12.6 port 57970 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:37.825397 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:37.831363 systemd[1]: Started session-27.scope. Feb 12 19:48:37.831842 systemd-logind[1301]: New session 27 of user core. Feb 12 19:48:38.656847 kubelet[2436]: I0212 19:48:38.656804 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.656907 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="mount-cgroup" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.656934 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26103147-0f61-4d10-a5cd-c8596482a964" containerName="cilium-operator" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.656976 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="clean-cilium-state" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.656987 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="apply-sysctl-overwrites" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.656997 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="mount-bpf-fs" Feb 12 19:48:38.657285 kubelet[2436]: E0212 19:48:38.657022 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="cilium-agent" Feb 12 19:48:38.657285 kubelet[2436]: I0212 19:48:38.657059 2436 memory_manager.go:346] "RemoveStaleState removing state" podUID="26103147-0f61-4d10-a5cd-c8596482a964" containerName="cilium-operator" Feb 12 19:48:38.657285 kubelet[2436]: I0212 19:48:38.657068 2436 memory_manager.go:346] "RemoveStaleState removing state" podUID="ef7c71bc-bf38-4deb-b1d0-6347f99929b0" containerName="cilium-agent" Feb 12 19:48:38.664588 systemd[1]: Created slice kubepods-burstable-pod0e88b652_c899_430e_82c8_3f9d600d1b17.slice. 
Feb 12 19:48:38.737211 kubelet[2436]: I0212 19:48:38.737172 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-hostproc\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737211 kubelet[2436]: I0212 19:48:38.737226 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-clustermesh-secrets\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737532 kubelet[2436]: I0212 19:48:38.737264 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-config-path\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737532 kubelet[2436]: I0212 19:48:38.737294 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-ipsec-secrets\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737532 kubelet[2436]: I0212 19:48:38.737324 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-hubble-tls\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737532 kubelet[2436]: I0212 19:48:38.737358 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-run\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737532 kubelet[2436]: I0212 19:48:38.737394 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-kernel\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737427 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r77ks\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-kube-api-access-r77ks\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737481 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-etc-cni-netd\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737520 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-cgroup\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737551 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-xtables-lock\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737589 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-net\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.737824 kubelet[2436]: I0212 19:48:38.737625 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cni-path\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.738080 kubelet[2436]: I0212 19:48:38.737664 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-lib-modules\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.738080 kubelet[2436]: I0212 19:48:38.737702 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-bpf-maps\") pod \"cilium-8r29r\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " pod="kube-system/cilium-8r29r" Feb 12 19:48:38.754763 sshd[4329]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:38.758082 systemd[1]: sshd@24-10.200.8.37:22-10.200.12.6:57970.service: Deactivated successfully. Feb 12 19:48:38.759377 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 19:48:38.759405 systemd-logind[1301]: Session 27 logged out. Waiting for processes to exit. Feb 12 19:48:38.760607 systemd-logind[1301]: Removed session 27. Feb 12 19:48:38.865091 systemd[1]: Started sshd@25-10.200.8.37:22-10.200.12.6:57980.service. Feb 12 19:48:38.971162 env[1312]: time="2024-02-12T19:48:38.969864163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r29r,Uid:0e88b652-c899-430e-82c8-3f9d600d1b17,Namespace:kube-system,Attempt:0,}" Feb 12 19:48:39.007900 env[1312]: time="2024-02-12T19:48:39.007822866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:39.007900 env[1312]: time="2024-02-12T19:48:39.007861867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:39.007900 env[1312]: time="2024-02-12T19:48:39.007875767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:39.008282 env[1312]: time="2024-02-12T19:48:39.008236270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601 pid=4353 runtime=io.containerd.runc.v2 Feb 12 19:48:39.020301 systemd[1]: Started cri-containerd-61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601.scope. Feb 12 19:48:39.049240 env[1312]: time="2024-02-12T19:48:39.049202696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r29r,Uid:0e88b652-c899-430e-82c8-3f9d600d1b17,Namespace:kube-system,Attempt:0,} returns sandbox id \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\"" Feb 12 19:48:39.054610 env[1312]: time="2024-02-12T19:48:39.054562439Z" level=info msg="CreateContainer within sandbox \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:48:39.088078 env[1312]: time="2024-02-12T19:48:39.088039105Z" level=info msg="CreateContainer within sandbox \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\"" Feb 12 19:48:39.090178 env[1312]: time="2024-02-12T19:48:39.090150722Z" level=info msg="StartContainer for \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\"" Feb 12 19:48:39.107596 systemd[1]: Started cri-containerd-eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c.scope. Feb 12 19:48:39.119950 systemd[1]: cri-containerd-eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c.scope: Deactivated successfully. Feb 12 19:48:39.491175 sshd[4343]: Accepted publickey for core from 10.200.12.6 port 57980 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:39.761551 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:39.767613 systemd-logind[1301]: New session 28 of user core. Feb 12 19:48:39.768195 systemd[1]: Started session-28.scope. Feb 12 19:48:40.067628 kubelet[2436]: E0212 19:48:40.067502 2436 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:48:40.182386 sshd[4343]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:40.185687 systemd[1]: sshd@25-10.200.8.37:22-10.200.12.6:57980.service: Deactivated successfully. Feb 12 19:48:40.186853 systemd-logind[1301]: Session 28 logged out. Waiting for processes to exit. Feb 12 19:48:40.186950 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 19:48:40.188121 systemd-logind[1301]: Removed session 28. Feb 12 19:48:40.286932 systemd[1]: Started sshd@26-10.200.8.37:22-10.200.12.6:57994.service. 
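
Note the ordering above: the cri-containerd-eb291105…scope is reported "Deactivated successfully" almost immediately after it starts, meaning the mount-cgroup init container died on launch, and the kubelet's "cni plugin not initialized" error that follows is the downstream symptom, because the Cilium agent that would write the CNI configuration never comes up. A hedged way to inspect this state from the node, assuming crictl is installed and pointed at containerd's socket:

  # find the pod sandbox and any dead containers inside it
  crictl pods --name cilium-8r29r
  crictl ps -a --name mount-cgroup
  # the CNI config directory stays empty until an agent writes it
  ls /etc/cni/net.d/
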
Feb 12 19:48:40.660837 env[1312]: time="2024-02-12T19:48:40.660763224Z" level=info msg="shim disconnected" id=eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c
Feb 12 19:48:40.660837 env[1312]: time="2024-02-12T19:48:40.660833225Z" level=warning msg="cleaning up after shim disconnected" id=eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c namespace=k8s.io
Feb 12 19:48:40.661469 env[1312]: time="2024-02-12T19:48:40.660847825Z" level=info msg="cleaning up dead shim"
Feb 12 19:48:40.669815 env[1312]: time="2024-02-12T19:48:40.669774596Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4421 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:48:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 19:48:40.670238 env[1312]: time="2024-02-12T19:48:40.670033298Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed"
Feb 12 19:48:40.671550 env[1312]: time="2024-02-12T19:48:40.671505410Z" level=error msg="Failed to pipe stderr of container \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\"" error="reading from a closed fifo"
Feb 12 19:48:40.671660 env[1312]: time="2024-02-12T19:48:40.671503510Z" level=error msg="Failed to pipe stdout of container \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\"" error="reading from a closed fifo"
Feb 12 19:48:40.709649 env[1312]: time="2024-02-12T19:48:40.709589112Z" level=error msg="StartContainer for \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 19:48:40.710588 kubelet[2436]: E0212 19:48:40.710075 2436 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c"
Feb 12 19:48:40.710588 kubelet[2436]: E0212 19:48:40.710225 2436 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 19:48:40.710588 kubelet[2436]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 19:48:40.710588 kubelet[2436]: rm /hostbin/cilium-mount
Feb 12 19:48:40.710854 kubelet[2436]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r77ks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-8r29r_kube-system(0e88b652-c899-430e-82c8-3f9d600d1b17): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 19:48:40.710985 kubelet[2436]: E0212 19:48:40.710291 2436 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8r29r" podUID=0e88b652-c899-430e-82c8-3f9d600d1b17
Feb 12 19:48:40.915963 sshd[4419]: Accepted publickey for core from 10.200.12.6 port 57994 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:48:40.916930 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:48:40.921835 systemd[1]: Started session-29.scope.
Feb 12 19:48:40.922599 systemd-logind[1301]: New session 29 of user core.
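
The failing write names the root cause: the init container spec dumped above requests SELinuxOptions{Type:spc_t,Level:s0}, so at create time runc writes that context to /proc/self/attr/keycreate to label the container's session keyring, and the kernel rejects the write with EINVAL, which surfaces as "write /proc/self/attr/keycreate: invalid argument". That usually means the requested context is not valid under the host's loaded SELinux policy. A minimal sketch for checking the host side, assuming the standard SELinux userspace tools are present:

  # how the kernel treats keyring labeling depends on SELinux status and policy
  getenforce                     # Enforcing / Permissive / Disabled
  cat /sys/fs/selinux/enforce    # raw kernel view, if selinuxfs is mounted
  # this write likely reproduces the EINVAL outside of runc if the context is rejected
  echo 'system_u:system_r:spc_t:s0' > /proc/self/attr/keycreate
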
Feb 12 19:48:41.399681 kubelet[2436]: I0212 19:48:41.399645 2436 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-48475fc0ad" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:48:41.399588789 +0000 UTC m=+276.635086497 LastTransitionTime:2024-02-12 19:48:41.399588789 +0000 UTC m=+276.635086497 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:48:41.714422 env[1312]: time="2024-02-12T19:48:41.714286385Z" level=info msg="StopPodSandbox for \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\"" Feb 12 19:48:41.714422 env[1312]: time="2024-02-12T19:48:41.714376785Z" level=info msg="Container to stop \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:41.717589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601-shm.mount: Deactivated successfully. Feb 12 19:48:41.725974 systemd[1]: cri-containerd-61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601.scope: Deactivated successfully. Feb 12 19:48:41.753842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601-rootfs.mount: Deactivated successfully. Feb 12 19:48:42.019014 env[1312]: time="2024-02-12T19:48:42.018892200Z" level=info msg="shim disconnected" id=61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601 Feb 12 19:48:42.019235 env[1312]: time="2024-02-12T19:48:42.019072901Z" level=warning msg="cleaning up after shim disconnected" id=61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601 namespace=k8s.io Feb 12 19:48:42.019235 env[1312]: time="2024-02-12T19:48:42.019092101Z" level=info msg="cleaning up dead shim" Feb 12 19:48:42.027949 env[1312]: time="2024-02-12T19:48:42.027908271Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\n" Feb 12 19:48:42.028427 env[1312]: time="2024-02-12T19:48:42.028395575Z" level=info msg="TearDown network for sandbox \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\" successfully" Feb 12 19:48:42.028662 env[1312]: time="2024-02-12T19:48:42.028637877Z" level=info msg="StopPodSandbox for \"61dce8c507c3c3fe200e5d9964504b2475ae961c3e9783bf57b365eabdbf1601\" returns successfully" Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060812 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-xtables-lock\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060857 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060870 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-ipsec-secrets\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060900 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-clustermesh-secrets\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060929 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r77ks\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-kube-api-access-r77ks\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061465 kubelet[2436]: I0212 19:48:42.060958 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-config-path\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.060987 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-kernel\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.061012 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-run\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.061042 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-hubble-tls\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.061072 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-etc-cni-netd\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.061107 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-lib-modules\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.061864 kubelet[2436]: I0212 19:48:42.061131 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-cgroup\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: 
\"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061155 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-bpf-maps\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061178 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-hostproc\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061208 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-net\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061232 2436 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cni-path\") pod \"0e88b652-c899-430e-82c8-3f9d600d1b17\" (UID: \"0e88b652-c899-430e-82c8-3f9d600d1b17\") " Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061309 2436 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-xtables-lock\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.062127 kubelet[2436]: I0212 19:48:42.061335 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cni-path" (OuterVolumeSpecName: "cni-path") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.063076 kubelet[2436]: I0212 19:48:42.062498 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.063076 kubelet[2436]: W0212 19:48:42.062679 2436 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0e88b652-c899-430e-82c8-3f9d600d1b17/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:42.065383 kubelet[2436]: I0212 19:48:42.065355 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.065528 kubelet[2436]: I0212 19:48:42.065510 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.065829 kubelet[2436]: I0212 19:48:42.065804 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.065970 kubelet[2436]: I0212 19:48:42.065952 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.066086 kubelet[2436]: I0212 19:48:42.066067 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.066192 kubelet[2436]: I0212 19:48:42.066179 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-hostproc" (OuterVolumeSpecName: "hostproc") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.066303 kubelet[2436]: I0212 19:48:42.066286 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:42.069108 systemd[1]: var-lib-kubelet-pods-0e88b652\x2dc899\x2d430e\x2d82c8\x2d3f9d600d1b17-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:48:42.070950 kubelet[2436]: I0212 19:48:42.070926 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:42.071157 kubelet[2436]: I0212 19:48:42.071137 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:48:42.072895 systemd[1]: var-lib-kubelet-pods-0e88b652\x2dc899\x2d430e\x2d82c8\x2d3f9d600d1b17-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:48:42.073871 kubelet[2436]: I0212 19:48:42.073848 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:48:42.078302 systemd[1]: var-lib-kubelet-pods-0e88b652\x2dc899\x2d430e\x2d82c8\x2d3f9d600d1b17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr77ks.mount: Deactivated successfully. Feb 12 19:48:42.079561 kubelet[2436]: I0212 19:48:42.079526 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-kube-api-access-r77ks" (OuterVolumeSpecName: "kube-api-access-r77ks") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "kube-api-access-r77ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:42.079639 kubelet[2436]: I0212 19:48:42.079540 2436 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0e88b652-c899-430e-82c8-3f9d600d1b17" (UID: "0e88b652-c899-430e-82c8-3f9d600d1b17"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:42.162295 kubelet[2436]: I0212 19:48:42.162236 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-config-path\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162295 kubelet[2436]: I0212 19:48:42.162286 2436 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162295 kubelet[2436]: I0212 19:48:42.162304 2436 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-r77ks\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-kube-api-access-r77ks\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162326 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-run\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162344 2436 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-etc-cni-netd\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162361 2436 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-lib-modules\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162376 2436 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e88b652-c899-430e-82c8-3f9d600d1b17-hubble-tls\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162391 2436 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-bpf-maps\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162407 2436 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-cgroup\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162422 2436 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-host-proc-sys-net\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162644 kubelet[2436]: I0212 19:48:42.162474 2436 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-cni-path\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162906 kubelet[2436]: I0212 19:48:42.162493 2436 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e88b652-c899-430e-82c8-3f9d600d1b17-hostproc\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162906 kubelet[2436]: I0212 19:48:42.162512 2436 reconciler_common.go:295] 
"Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.162906 kubelet[2436]: I0212 19:48:42.162530 2436 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e88b652-c899-430e-82c8-3f9d600d1b17-clustermesh-secrets\") on node \"ci-3510.3.2-a-48475fc0ad\" DevicePath \"\"" Feb 12 19:48:42.718664 kubelet[2436]: I0212 19:48:42.718629 2436 scope.go:115] "RemoveContainer" containerID="eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c" Feb 12 19:48:42.719296 systemd[1]: var-lib-kubelet-pods-0e88b652\x2dc899\x2d430e\x2d82c8\x2d3f9d600d1b17-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:48:42.723474 env[1312]: time="2024-02-12T19:48:42.723350672Z" level=info msg="RemoveContainer for \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\"" Feb 12 19:48:42.725376 systemd[1]: Removed slice kubepods-burstable-pod0e88b652_c899_430e_82c8_3f9d600d1b17.slice. Feb 12 19:48:42.733718 env[1312]: time="2024-02-12T19:48:42.733617753Z" level=info msg="RemoveContainer for \"eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c\" returns successfully" Feb 12 19:48:42.752520 kubelet[2436]: I0212 19:48:42.752491 2436 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:42.752649 kubelet[2436]: E0212 19:48:42.752553 2436 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e88b652-c899-430e-82c8-3f9d600d1b17" containerName="mount-cgroup" Feb 12 19:48:42.752649 kubelet[2436]: I0212 19:48:42.752608 2436 memory_manager.go:346] "RemoveStaleState removing state" podUID="0e88b652-c899-430e-82c8-3f9d600d1b17" containerName="mount-cgroup" Feb 12 19:48:42.758806 systemd[1]: Created slice kubepods-burstable-pod1b42554c_e57d_4f83_8a0d_60be892e311a.slice. 
Feb 12 19:48:42.766282 kubelet[2436]: I0212 19:48:42.766262 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-cilium-cgroup\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766384 kubelet[2436]: I0212 19:48:42.766301 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-cni-path\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766384 kubelet[2436]: I0212 19:48:42.766330 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-host-proc-sys-kernel\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766384 kubelet[2436]: I0212 19:48:42.766359 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-etc-cni-netd\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766550 kubelet[2436]: I0212 19:48:42.766387 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b42554c-e57d-4f83-8a0d-60be892e311a-cilium-ipsec-secrets\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766550 kubelet[2436]: I0212 19:48:42.766427 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-host-proc-sys-net\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766550 kubelet[2436]: I0212 19:48:42.766496 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-xtables-lock\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766550 kubelet[2436]: I0212 19:48:42.766528 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2q9z\" (UniqueName: \"kubernetes.io/projected/1b42554c-e57d-4f83-8a0d-60be892e311a-kube-api-access-h2q9z\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766556 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b42554c-e57d-4f83-8a0d-60be892e311a-hubble-tls\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766600 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-cilium-run\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766636 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-bpf-maps\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766665 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b42554c-e57d-4f83-8a0d-60be892e311a-clustermesh-secrets\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766693 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-lib-modules\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766741 kubelet[2436]: I0212 19:48:42.766721 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b42554c-e57d-4f83-8a0d-60be892e311a-cilium-config-path\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:42.766979 kubelet[2436]: I0212 19:48:42.766748 2436 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b42554c-e57d-4f83-8a0d-60be892e311a-hostproc\") pod \"cilium-2r7nf\" (UID: \"1b42554c-e57d-4f83-8a0d-60be892e311a\") " pod="kube-system/cilium-2r7nf" Feb 12 19:48:43.061830 kubelet[2436]: I0212 19:48:43.060771 2436 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0e88b652-c899-430e-82c8-3f9d600d1b17 path="/var/lib/kubelet/pods/0e88b652-c899-430e-82c8-3f9d600d1b17/volumes" Feb 12 19:48:43.062810 env[1312]: time="2024-02-12T19:48:43.062763456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r7nf,Uid:1b42554c-e57d-4f83-8a0d-60be892e311a,Namespace:kube-system,Attempt:0,}" Feb 12 19:48:43.102542 env[1312]: time="2024-02-12T19:48:43.102426169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:43.102707 env[1312]: time="2024-02-12T19:48:43.102510169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:43.102707 env[1312]: time="2024-02-12T19:48:43.102524669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:43.102855 env[1312]: time="2024-02-12T19:48:43.102782272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7 pid=4490 runtime=io.containerd.runc.v2 Feb 12 19:48:43.114360 systemd[1]: Started cri-containerd-c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7.scope. Feb 12 19:48:43.138915 env[1312]: time="2024-02-12T19:48:43.138869756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2r7nf,Uid:1b42554c-e57d-4f83-8a0d-60be892e311a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\"" Feb 12 19:48:43.142870 env[1312]: time="2024-02-12T19:48:43.142837388Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:48:43.178668 env[1312]: time="2024-02-12T19:48:43.178627770Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d\"" Feb 12 19:48:43.179082 env[1312]: time="2024-02-12T19:48:43.179037273Z" level=info msg="StartContainer for \"00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d\"" Feb 12 19:48:43.195749 systemd[1]: Started cri-containerd-00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d.scope. Feb 12 19:48:43.225272 env[1312]: time="2024-02-12T19:48:43.225228538Z" level=info msg="StartContainer for \"00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d\" returns successfully" Feb 12 19:48:43.233824 systemd[1]: cri-containerd-00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d.scope: Deactivated successfully. 
Feb 12 19:48:43.767360 kubelet[2436]: W0212 19:48:43.767314 2436 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e88b652_c899_430e_82c8_3f9d600d1b17.slice/cri-containerd-eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c.scope WatchSource:0}: container "eb29110570ed60088cac76932cbbf64c7c7db6436af85f051a28499d24d1902c" in namespace "k8s.io": not found Feb 12 19:48:44.131634 env[1312]: time="2024-02-12T19:48:44.131568888Z" level=info msg="shim disconnected" id=00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d Feb 12 19:48:44.131634 env[1312]: time="2024-02-12T19:48:44.131633488Z" level=warning msg="cleaning up after shim disconnected" id=00d3fa64f6c5d2caf03f3265cda24d6530cd43308caa7ff1bcf1f1c06cf7bd6d namespace=k8s.io Feb 12 19:48:44.132136 env[1312]: time="2024-02-12T19:48:44.131645089Z" level=info msg="cleaning up dead shim" Feb 12 19:48:44.139516 env[1312]: time="2024-02-12T19:48:44.139477750Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4572 runtime=io.containerd.runc.v2\n" Feb 12 19:48:44.732567 env[1312]: time="2024-02-12T19:48:44.732515319Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:48:44.772805 env[1312]: time="2024-02-12T19:48:44.772753036Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa\"" Feb 12 19:48:44.773732 env[1312]: time="2024-02-12T19:48:44.773686543Z" level=info msg="StartContainer for \"b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa\"" Feb 12 19:48:44.797429 systemd[1]: Started cri-containerd-b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa.scope. Feb 12 19:48:44.832921 env[1312]: time="2024-02-12T19:48:44.832872409Z" level=info msg="StartContainer for \"b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa\" returns successfully" Feb 12 19:48:44.838309 systemd[1]: cri-containerd-b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa.scope: Deactivated successfully. Feb 12 19:48:44.855763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa-rootfs.mount: Deactivated successfully. 
Feb 12 19:48:44.871290 env[1312]: time="2024-02-12T19:48:44.871241111Z" level=info msg="shim disconnected" id=b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa Feb 12 19:48:44.871527 env[1312]: time="2024-02-12T19:48:44.871291712Z" level=warning msg="cleaning up after shim disconnected" id=b8ac7c21a391f59c2f7925470c16998af23b4e2b19ab3f364c36bba729dfcbfa namespace=k8s.io Feb 12 19:48:44.871527 env[1312]: time="2024-02-12T19:48:44.871303912Z" level=info msg="cleaning up dead shim" Feb 12 19:48:44.879124 env[1312]: time="2024-02-12T19:48:44.879087073Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4633 runtime=io.containerd.runc.v2\n" Feb 12 19:48:45.069862 kubelet[2436]: E0212 19:48:45.069755 2436 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:48:45.735625 env[1312]: time="2024-02-12T19:48:45.735575403Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:48:45.777075 env[1312]: time="2024-02-12T19:48:45.776986628Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46\"" Feb 12 19:48:45.777994 env[1312]: time="2024-02-12T19:48:45.777956736Z" level=info msg="StartContainer for \"0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46\"" Feb 12 19:48:45.804980 systemd[1]: Started cri-containerd-0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46.scope. Feb 12 19:48:45.836223 systemd[1]: cri-containerd-0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46.scope: Deactivated successfully. Feb 12 19:48:45.843274 env[1312]: time="2024-02-12T19:48:45.843235149Z" level=info msg="StartContainer for \"0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46\" returns successfully" Feb 12 19:48:45.875035 env[1312]: time="2024-02-12T19:48:45.874988498Z" level=info msg="shim disconnected" id=0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46 Feb 12 19:48:45.875251 env[1312]: time="2024-02-12T19:48:45.875044799Z" level=warning msg="cleaning up after shim disconnected" id=0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46 namespace=k8s.io Feb 12 19:48:45.875251 env[1312]: time="2024-02-12T19:48:45.875059299Z" level=info msg="cleaning up dead shim" Feb 12 19:48:45.882696 env[1312]: time="2024-02-12T19:48:45.882660859Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4692 runtime=io.containerd.runc.v2\n" Feb 12 19:48:46.738801 env[1312]: time="2024-02-12T19:48:46.738762170Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:48:46.760053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0537d8eee4eb4b2750b3ee4e8e8d8b5b6d6cde73381be6eb03bba1aa54be4a46-rootfs.mount: Deactivated successfully. 
Feb 12 19:48:46.768805 env[1312]: time="2024-02-12T19:48:46.768761205Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee\"" Feb 12 19:48:46.769340 env[1312]: time="2024-02-12T19:48:46.769308110Z" level=info msg="StartContainer for \"e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee\"" Feb 12 19:48:46.797901 systemd[1]: Started cri-containerd-e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee.scope. Feb 12 19:48:46.823356 systemd[1]: cri-containerd-e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee.scope: Deactivated successfully. Feb 12 19:48:46.827101 env[1312]: time="2024-02-12T19:48:46.827058562Z" level=info msg="StartContainer for \"e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee\" returns successfully" Feb 12 19:48:46.845264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee-rootfs.mount: Deactivated successfully. Feb 12 19:48:46.858504 env[1312]: time="2024-02-12T19:48:46.858412308Z" level=info msg="shim disconnected" id=e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee Feb 12 19:48:46.858786 env[1312]: time="2024-02-12T19:48:46.858760011Z" level=warning msg="cleaning up after shim disconnected" id=e7211ed2079087e3bb4e11096fd58a800b2e8803cf9e80f6227eaaa188485eee namespace=k8s.io Feb 12 19:48:46.858877 env[1312]: time="2024-02-12T19:48:46.858861911Z" level=info msg="cleaning up dead shim" Feb 12 19:48:46.867095 env[1312]: time="2024-02-12T19:48:46.867058476Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4750 runtime=io.containerd.runc.v2\n" Feb 12 19:48:47.744152 env[1312]: time="2024-02-12T19:48:47.744108836Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:48:47.773094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552934952.mount: Deactivated successfully. Feb 12 19:48:47.785901 env[1312]: time="2024-02-12T19:48:47.785796162Z" level=info msg="CreateContainer within sandbox \"c7aa0ac77c304a850a80a69c0c6bd1bd3a1711f6cdc72fd7894f34ae2a97e8d7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b\"" Feb 12 19:48:47.786800 env[1312]: time="2024-02-12T19:48:47.786761770Z" level=info msg="StartContainer for \"9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b\"" Feb 12 19:48:47.812249 systemd[1]: Started cri-containerd-9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b.scope. Feb 12 19:48:47.863139 env[1312]: time="2024-02-12T19:48:47.863095567Z" level=info msg="StartContainer for \"9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b\" returns successfully" Feb 12 19:48:48.272470 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 19:48:48.769819 systemd[1]: run-containerd-runc-k8s.io-9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b-runc.cmBkWM.mount: Deactivated successfully. 
Feb 12 19:48:49.606786 systemd[1]: run-containerd-runc-k8s.io-9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b-runc.efuEg5.mount: Deactivated successfully. Feb 12 19:48:50.823423 systemd-networkd[1460]: lxc_health: Link UP Feb 12 19:48:50.841464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:48:50.841625 systemd-networkd[1460]: lxc_health: Gained carrier Feb 12 19:48:51.088606 kubelet[2436]: I0212 19:48:51.088475 2436 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2r7nf" podStartSLOduration=9.088413179 pod.CreationTimestamp="2024-02-12 19:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:48:48.766719719 +0000 UTC m=+284.002217527" watchObservedRunningTime="2024-02-12 19:48:51.088413179 +0000 UTC m=+286.323910987" Feb 12 19:48:51.808898 systemd[1]: run-containerd-runc-k8s.io-9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b-runc.J22OKL.mount: Deactivated successfully. Feb 12 19:48:52.283701 systemd-networkd[1460]: lxc_health: Gained IPv6LL Feb 12 19:48:56.185635 systemd[1]: run-containerd-runc-k8s.io-9f3e34dbf142d141880e32cb78593c3acdcfc97c229215f9a752529e988a2c6b-runc.WgWn9R.mount: Deactivated successfully. Feb 12 19:48:56.333546 sshd[4419]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:56.337139 systemd[1]: sshd@26-10.200.8.37:22-10.200.12.6:57994.service: Deactivated successfully. Feb 12 19:48:56.338198 systemd[1]: session-29.scope: Deactivated successfully. Feb 12 19:48:56.339115 systemd-logind[1301]: Session 29 logged out. Waiting for processes to exit. Feb 12 19:48:56.340159 systemd-logind[1301]: Removed session 29. 
Feb 12 19:49:01.698705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[the identical hv_storvsc write error for device f8b3781a-1e82-4818-a1c3-63d806ec15bb (tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001) repeats continuously, with only the timestamps advancing, from 19:49:01.698705 through the entry below]
Feb 12 19:49:03.634330 kernel: hv_storvsc
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:49:03.644491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 12 19:49:03.644695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001