Feb 8 23:34:58.040576 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:34:58.040609 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:34:58.040624 kernel: BIOS-provided physical RAM map: Feb 8 23:34:58.040635 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 8 23:34:58.040645 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 8 23:34:58.040656 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 8 23:34:58.040672 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 8 23:34:58.040684 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 8 23:34:58.040695 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 8 23:34:58.040705 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 8 23:34:58.040714 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 8 23:34:58.040724 kernel: printk: bootconsole [earlyser0] enabled Feb 8 23:34:58.040735 kernel: NX (Execute Disable) protection: active Feb 8 23:34:58.040746 kernel: efi: EFI v2.70 by Microsoft Feb 8 23:34:58.040815 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 Feb 8 23:34:58.040828 kernel: random: crng init done Feb 8 23:34:58.040838 kernel: SMBIOS 3.1.0 present. 
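The BIOS-e820 ranges above are inclusive at both ends, so a "usable" entry covers end - start + 1 bytes. A minimal Python sketch (illustrative, not part of the log; the regex and function name are assumptions) that totals the usable ranges:

    import re

    E820_USABLE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    def usable_bytes(log_text: str) -> int:
        # Both bounds are inclusive, hence the +1.
        return sum(int(end, 16) - int(start, 16) + 1
                   for start, end in E820_USABLE.findall(log_text))

For the four usable ranges above this yields 8,588,763,136 bytes (8,387,464 KiB); once the kernel reserves the first page ("e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" below), that is exactly the 8387460K total reported later in the "Memory:" line.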
Feb 8 23:34:58.040848 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 8 23:34:58.040858 kernel: Hypervisor detected: Microsoft Hyper-V Feb 8 23:34:58.040868 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 8 23:34:58.040878 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 8 23:34:58.040887 kernel: Hyper-V: Nested features: 0x1e0101 Feb 8 23:34:58.040900 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 8 23:34:58.040909 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 8 23:34:58.040917 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 8 23:34:58.040925 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 8 23:34:58.040935 kernel: tsc: Detected 2593.905 MHz processor Feb 8 23:34:58.040942 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:34:58.040952 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:34:58.040958 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 8 23:34:58.040966 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:34:58.040974 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 8 23:34:58.040986 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 8 23:34:58.040992 kernel: Using GB pages for direct mapping Feb 8 23:34:58.040999 kernel: Secure boot disabled Feb 8 23:34:58.041008 kernel: ACPI: Early table checksum verification disabled Feb 8 23:34:58.041016 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 8 23:34:58.041024 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041030 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041038 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 8 23:34:58.041053 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 8 23:34:58.041062 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041068 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041078 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041086 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041096 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041105 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041114 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:34:58.041122 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 8 23:34:58.041131 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 8 23:34:58.041138 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 8 23:34:58.041147 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 8 23:34:58.041155 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 8 23:34:58.041165 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 8 23:34:58.041174 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 8 23:34:58.041182 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 8 23:34:58.041191 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 8 23:34:58.041201 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 8 23:34:58.041208 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 8 23:34:58.041215 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 8 23:34:58.041224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 8 23:34:58.041233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 8 23:34:58.041241 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 8 23:34:58.041250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 8 23:34:58.041260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 8 23:34:58.041268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 8 23:34:58.041276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 8 23:34:58.041283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 8 23:34:58.041292 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 8 23:34:58.041300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 8 23:34:58.041310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 8 23:34:58.041317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 8 23:34:58.041328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 8 23:34:58.041336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 8 23:34:58.041346 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 8 23:34:58.041353 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 8 23:34:58.041361 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 8 23:34:58.041370 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 8 23:34:58.041379 kernel: Zone ranges: Feb 8 23:34:58.041387 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:34:58.041394 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 8 23:34:58.041405 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:34:58.041415 kernel: Movable zone start for each node Feb 8 23:34:58.041422 kernel: Early memory node ranges Feb 8 23:34:58.041429 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 8 23:34:58.041439 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 8 23:34:58.041447 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 8 23:34:58.041456 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:34:58.041463 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 8 23:34:58.041472 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:34:58.041482 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 8 23:34:58.041492 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 8 23:34:58.041499 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 8 23:34:58.041507 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 8 23:34:58.041515 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:34:58.041525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 
8 23:34:58.041532 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:34:58.041540 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 8 23:34:58.041549 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:34:58.041562 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 8 23:34:58.041569 kernel: Booting paravirtualized kernel on Hyper-V Feb 8 23:34:58.041577 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:34:58.041586 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:34:58.041595 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:34:58.041603 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:34:58.041610 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:34:58.041619 kernel: Hyper-V: PV spinlocks enabled Feb 8 23:34:58.041627 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 8 23:34:58.041638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 8 23:34:58.041645 kernel: Policy zone: Normal Feb 8 23:34:58.041656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:34:58.041665 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 8 23:34:58.041673 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 8 23:34:58.041680 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 8 23:34:58.041690 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:34:58.041698 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 8 23:34:58.041709 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:34:58.041716 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:34:58.041734 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:34:58.041746 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:34:58.041754 kernel: rcu: RCU event tracing is enabled. Feb 8 23:34:58.041775 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:34:58.041783 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:34:58.041791 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:34:58.041801 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 8 23:34:58.041811 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:34:58.041819 kernel: Using NULL legacy PIC Feb 8 23:34:58.041830 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 8 23:34:58.041839 kernel: Console: colour dummy device 80x25 Feb 8 23:34:58.041850 kernel: printk: console [tty1] enabled Feb 8 23:34:58.041857 kernel: printk: console [ttyS0] enabled Feb 8 23:34:58.041865 kernel: printk: bootconsole [earlyser0] disabled Feb 8 23:34:58.041877 kernel: ACPI: Core revision 20210730 Feb 8 23:34:58.041888 kernel: Failed to register legacy timer interrupt Feb 8 23:34:58.041895 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:34:58.041904 kernel: Hyper-V: Using IPI hypercalls Feb 8 23:34:58.041913 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 8 23:34:58.041924 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 8 23:34:58.041931 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 8 23:34:58.041940 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:34:58.041949 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:34:58.041959 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:34:58.041968 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:34:58.041978 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 8 23:34:58.041986 kernel: RETBleed: Vulnerable Feb 8 23:34:58.041996 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:34:58.042003 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:34:58.042012 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:34:58.042021 kernel: GDS: Unknown: Dependent on hypervisor status Feb 8 23:34:58.042031 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 8 23:34:58.042038 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 8 23:34:58.042047 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 8 23:34:58.042058 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 8 23:34:58.042068 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 8 23:34:58.042075 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 8 23:34:58.042085 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 8 23:34:58.042093 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 8 23:34:58.042103 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 8 23:34:58.042110 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 8 23:34:58.042119 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 8 23:34:58.042127 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:34:58.042138 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:34:58.042145 kernel: LSM: Security Framework initializing Feb 8 23:34:58.042153 kernel: SELinux: Initializing. 
Feb 8 23:34:58.042164 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:34:58.042175 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:34:58.042182 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 8 23:34:58.042191 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 8 23:34:58.042199 kernel: signal: max sigframe size: 3632 Feb 8 23:34:58.042210 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:34:58.042217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 8 23:34:58.042226 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:34:58.042235 kernel: x86: Booting SMP configuration: Feb 8 23:34:58.042245 kernel: .... node #0, CPUs: #1 Feb 8 23:34:58.042255 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 8 23:34:58.042266 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 8 23:34:58.042274 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:34:58.042284 kernel: smpboot: Max logical packages: 1 Feb 8 23:34:58.042291 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 8 23:34:58.042302 kernel: devtmpfs: initialized Feb 8 23:34:58.042310 kernel: x86/mm: Memory block size: 128MB Feb 8 23:34:58.042320 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 8 23:34:58.042329 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:34:58.042340 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:34:58.042349 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:34:58.042358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:34:58.042365 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:34:58.042375 kernel: audit: type=2000 audit(1707435297.023:1): state=initialized audit_enabled=0 res=1 Feb 8 23:34:58.042384 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:34:58.042393 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:34:58.042400 kernel: cpuidle: using governor menu Feb 8 23:34:58.042412 kernel: ACPI: bus type PCI registered Feb 8 23:34:58.042422 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:34:58.042431 kernel: dca service started, version 1.12.1 Feb 8 23:34:58.042439 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
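The two BogoMIPS figures above are consistent with each other: taking the usual definition BogoMIPS = lpj * HZ / 500000 and assuming this kernel is built with CONFIG_HZ=1000 (an assumption; the log does not print HZ), lpj=2593905 gives the per-CPU value, and the SMP total is just that times the CPU count:

    # Assumption: CONFIG_HZ=1000; lpj comes from "Calibrating delay loop (skipped)" above.
    lpj, hz, cpus = 2593905, 1000, 2
    per_cpu = lpj * hz / 500_000   # 5187.81, as in the calibration line
    total = cpus * per_cpu         # 10375.62, as in "Total of 2 processors activated"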
Feb 8 23:34:58.042449 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 8 23:34:58.042460 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:34:58.042467 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:34:58.042475 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:34:58.042485 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:34:58.042497 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:34:58.042505 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:34:58.042513 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:34:58.042522 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:34:58.042533 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:34:58.042541 kernel: ACPI: Interpreter enabled Feb 8 23:34:58.042549 kernel: ACPI: PM: (supports S0 S5) Feb 8 23:34:58.042558 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:34:58.042569 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:34:58.042578 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 8 23:34:58.042587 kernel: iommu: Default domain type: Translated Feb 8 23:34:58.042596 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 8 23:34:58.042603 kernel: vgaarb: loaded Feb 8 23:34:58.042611 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:34:58.042619 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Feb 8 23:34:58.042628 kernel: PTP clock support registered Feb 8 23:34:58.042636 kernel: Registered efivars operations Feb 8 23:34:58.042643 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:34:58.042653 kernel: PCI: System does not support PCI Feb 8 23:34:58.042662 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 8 23:34:58.042673 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:34:58.042680 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:34:58.042687 kernel: pnp: PnP ACPI init Feb 8 23:34:58.042697 kernel: pnp: PnP ACPI: found 3 devices Feb 8 23:34:58.042705 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:34:58.042715 kernel: NET: Registered PF_INET protocol family Feb 8 23:34:58.042722 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:34:58.042734 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 8 23:34:58.042742 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:34:58.042753 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 8 23:34:58.044905 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 8 23:34:58.044952 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 8 23:34:58.044967 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:34:58.044980 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:34:58.044993 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:34:58.045005 kernel: NET: Registered PF_XDP protocol family Feb 8 23:34:58.045022 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:34:58.045034 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 8 23:34:58.045047 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 8 23:34:58.045060 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed
counters, 10737418240 ms ovfl timer Feb 8 23:34:58.045074 kernel: Initialise system trusted keyrings Feb 8 23:34:58.045087 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 8 23:34:58.045101 kernel: Key type asymmetric registered Feb 8 23:34:58.045114 kernel: Asymmetric key parser 'x509' registered Feb 8 23:34:58.045128 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:34:58.045144 kernel: io scheduler mq-deadline registered Feb 8 23:34:58.045158 kernel: io scheduler kyber registered Feb 8 23:34:58.045172 kernel: io scheduler bfq registered Feb 8 23:34:58.045187 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:34:58.045200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:34:58.045214 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:34:58.045228 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 8 23:34:58.045242 kernel: i8042: PNP: No PS/2 controller found. Feb 8 23:34:58.045409 kernel: rtc_cmos 00:02: registered as rtc0 Feb 8 23:34:58.045528 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:34:57 UTC (1707435297) Feb 8 23:34:58.045634 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 8 23:34:58.045652 kernel: fail to initialize ptp_kvm Feb 8 23:34:58.045666 kernel: intel_pstate: CPU model not supported Feb 8 23:34:58.045680 kernel: efifb: probing for efifb Feb 8 23:34:58.045694 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 8 23:34:58.045707 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 8 23:34:58.045721 kernel: efifb: scrolling: redraw Feb 8 23:34:58.045738 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 8 23:34:58.045752 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:34:58.045779 kernel: fb0: EFI VGA frame buffer device Feb 8 23:34:58.045793 kernel: pstore: Registered efi as persistent store backend Feb 8 23:34:58.045807 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:34:58.045821 kernel: Segment Routing with IPv6 Feb 8 23:34:58.045834 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:34:58.045848 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:34:58.045862 kernel: Key type dns_resolver registered Feb 8 23:34:58.045878 kernel: IPI shorthand broadcast: enabled Feb 8 23:34:58.045892 kernel: sched_clock: Marking stable (781145000, 22968100)->(1000197800, -196084700) Feb 8 23:34:58.045905 kernel: registered taskstats version 1 Feb 8 23:34:58.045919 kernel: Loading compiled-in X.509 certificates Feb 8 23:34:58.045933 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:34:58.045946 kernel: Key type .fscrypt registered Feb 8 23:34:58.045960 kernel: Key type fscrypt-provisioning registered Feb 8 23:34:58.045974 kernel: pstore: Using crash dump compression: deflate Feb 8 23:34:58.045990 kernel: ima: No TPM chip found, activating TPM-bypass! 
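The numeric timestamps in the audit records and in the rtc_cmos line above are plain Unix epoch seconds; for example, the 1707435297 in "audit(1707435297.023:1)" decodes to the same instant the RTC line prints. A quick check in Python:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1707435297, tz=timezone.utc).isoformat())
    # -> 2024-02-08T23:34:57+00:00, matching "setting system clock to 2024-02-08T23:34:57 UTC"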
Feb 8 23:34:58.046004 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:34:58.046017 kernel: ima: No architecture policies found Feb 8 23:34:58.046031 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:34:58.046045 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:34:58.046058 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:34:58.046072 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:34:58.046086 kernel: Run /init as init process Feb 8 23:34:58.046100 kernel: with arguments: Feb 8 23:34:58.046114 kernel: /init Feb 8 23:34:58.046130 kernel: with environment: Feb 8 23:34:58.046142 kernel: HOME=/ Feb 8 23:34:58.046155 kernel: TERM=linux Feb 8 23:34:58.046169 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:34:58.046186 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:34:58.046203 systemd[1]: Detected virtualization microsoft. Feb 8 23:34:58.046217 systemd[1]: Detected architecture x86-64. Feb 8 23:34:58.046234 systemd[1]: Running in initrd. Feb 8 23:34:58.046248 systemd[1]: No hostname configured, using default hostname. Feb 8 23:34:58.046262 systemd[1]: Hostname set to <localhost>. Feb 8 23:34:58.046277 systemd[1]: Initializing machine ID from random generator. Feb 8 23:34:58.046291 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:34:58.046306 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:34:58.046320 systemd[1]: Reached target cryptsetup.target. Feb 8 23:34:58.046334 systemd[1]: Reached target paths.target. Feb 8 23:34:58.046347 systemd[1]: Reached target slices.target. Feb 8 23:34:58.046364 systemd[1]: Reached target swap.target. Feb 8 23:34:58.046378 systemd[1]: Reached target timers.target. Feb 8 23:34:58.046393 systemd[1]: Listening on iscsid.socket. Feb 8 23:34:58.046407 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:34:58.046422 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:34:58.046437 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:34:58.046451 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:34:58.046468 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:34:58.046483 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:34:58.046497 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:34:58.046511 systemd[1]: Reached target sockets.target. Feb 8 23:34:58.046526 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:34:58.046540 systemd[1]: Finished network-cleanup.service. Feb 8 23:34:58.046555 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:34:58.046569 systemd[1]: Starting systemd-journald.service... Feb 8 23:34:58.046584 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:34:58.046601 systemd[1]: Starting systemd-resolved.service... Feb 8 23:34:58.046615 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:34:58.046629 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:34:58.046648 systemd-journald[183]: Journal started Feb 8 23:34:58.046716 systemd-journald[183]: Runtime Journal (/run/log/journal/f9540c4857fe4f9eaf2fa6595c879c43) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:34:58.030943 systemd-modules-load[184]: Inserted module 'overlay' Feb 8 23:34:58.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.066900 kernel: audit: type=1130 audit(1707435298.049:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.066935 systemd[1]: Started systemd-journald.service. Feb 8 23:34:58.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.074196 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:34:58.102156 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:34:58.102179 kernel: audit: type=1130 audit(1707435298.073:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.092635 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:34:58.096392 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:34:58.099737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:34:58.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.124788 kernel: audit: type=1130 audit(1707435298.091:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.129581 systemd-resolved[185]: Positive Trust Anchors: Feb 8 23:34:58.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.135011 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:34:58.147791 kernel: Bridge firewalling registered Feb 8 23:34:58.147811 kernel: audit: type=1130 audit(1707435298.094:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.135623 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:34:58.135661 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:34:58.138259 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 8 23:34:58.146565 systemd[1]: Started systemd-resolved.service. Feb 8 23:34:58.146927 systemd[1]: Reached target nss-lookup.target. 
Feb 8 23:34:58.149988 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 8 23:34:58.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.199162 kernel: audit: type=1130 audit(1707435298.146:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.199219 kernel: SCSI subsystem initialized Feb 8 23:34:58.199234 kernel: audit: type=1130 audit(1707435298.146:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.212828 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:34:58.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.216194 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:34:58.235183 kernel: audit: type=1130 audit(1707435298.215:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.241108 dracut-cmdline[201]: dracut-dracut-053 Feb 8 23:34:58.251807 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:34:58.273441 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:34:58.273470 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:34:58.273487 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:34:58.277525 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:34:58.280572 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:34:58.298038 kernel: audit: type=1130 audit(1707435298.285:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.297672 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:34:58.307813 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:34:58.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:34:58.323821 kernel: audit: type=1130 audit(1707435298.310:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.365782 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:34:58.378782 kernel: iscsi: registered transport (tcp) Feb 8 23:34:58.403631 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:34:58.403693 kernel: QLogic iSCSI HBA Driver Feb 8 23:34:58.432522 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:34:58.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.436512 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:34:58.487792 kernel: raid6: avx512x4 gen() 18312 MB/s Feb 8 23:34:58.507780 kernel: raid6: avx512x4 xor() 8627 MB/s Feb 8 23:34:58.527776 kernel: raid6: avx512x2 gen() 18386 MB/s Feb 8 23:34:58.547781 kernel: raid6: avx512x2 xor() 30281 MB/s Feb 8 23:34:58.567774 kernel: raid6: avx512x1 gen() 18350 MB/s Feb 8 23:34:58.587774 kernel: raid6: avx512x1 xor() 27013 MB/s Feb 8 23:34:58.607776 kernel: raid6: avx2x4 gen() 18389 MB/s Feb 8 23:34:58.627792 kernel: raid6: avx2x4 xor() 7957 MB/s Feb 8 23:34:58.647774 kernel: raid6: avx2x2 gen() 18285 MB/s Feb 8 23:34:58.668777 kernel: raid6: avx2x2 xor() 22292 MB/s Feb 8 23:34:58.688773 kernel: raid6: avx2x1 gen() 13753 MB/s Feb 8 23:34:58.708774 kernel: raid6: avx2x1 xor() 19509 MB/s Feb 8 23:34:58.728775 kernel: raid6: sse2x4 gen() 11753 MB/s Feb 8 23:34:58.748774 kernel: raid6: sse2x4 xor() 7357 MB/s Feb 8 23:34:58.768772 kernel: raid6: sse2x2 gen() 12887 MB/s Feb 8 23:34:58.788774 kernel: raid6: sse2x2 xor() 7429 MB/s Feb 8 23:34:58.808774 kernel: raid6: sse2x1 gen() 11711 MB/s Feb 8 23:34:58.831452 kernel: raid6: sse2x1 xor() 5934 MB/s Feb 8 23:34:58.831478 kernel: raid6: using algorithm avx2x4 gen() 18389 MB/s Feb 8 23:34:58.831489 kernel: raid6: .... xor() 7957 MB/s, rmw enabled Feb 8 23:34:58.834953 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:34:58.854789 kernel: xor: automatically using best checksumming function avx Feb 8 23:34:58.950789 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:34:58.959175 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:34:58.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.963000 audit: BPF prog-id=7 op=LOAD Feb 8 23:34:58.963000 audit: BPF prog-id=8 op=LOAD Feb 8 23:34:58.964686 systemd[1]: Starting systemd-udevd.service... Feb 8 23:34:58.979481 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:34:58.984160 systemd[1]: Started systemd-udevd.service. Feb 8 23:34:58.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:58.992475 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:34:59.008636 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 8 23:34:59.038734 systemd[1]: Finished dracut-pre-trigger.service. 
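The raid6 lines above are the kernel benchmarking each candidate implementation and keeping the fastest gen() routine (the recovery algorithm is chosen separately, hence avx2x4 for generation but avx512x2 for recovery). A small sketch (illustrative names, not part of the log) that reproduces the selection from the log text:

    import re

    def best_raid6_gen(log_text: str):
        # Collect "raid6: <algo> gen() <N> MB/s" benchmark results and pick the fastest.
        results = {algo: int(mbps) for algo, mbps in
                   re.findall(r"raid6: (\w+) gen\(\) (\d+) MB/s", log_text)}
        algo = max(results, key=results.get)
        return algo, results[algo]

For the numbers above this returns ("avx2x4", 18389), matching "raid6: using algorithm avx2x4 gen() 18389 MB/s".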
Feb 8 23:34:59.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.043888 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:34:59.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:34:59.076829 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:34:59.123782 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:34:59.158156 kernel: AVX2 version of gcm_enc/dec engaged. Feb 8 23:34:59.158209 kernel: AES CTR mode by8 optimization enabled Feb 8 23:34:59.161787 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:34:59.172397 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:34:59.183787 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:34:59.195785 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:34:59.202779 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:34:59.208778 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:34:59.220485 kernel: scsi host0: storvsc_host_t Feb 8 23:34:59.220705 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:34:59.220720 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:34:59.231224 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:34:59.231373 kernel: scsi host1: storvsc_host_t Feb 8 23:34:59.239775 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:34:59.239802 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:34:59.266437 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:34:59.266745 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:34:59.275657 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:34:59.275895 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:34:59.285368 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:34:59.285590 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:34:59.285705 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:34:59.287288 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:34:59.291781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:34:59.297489 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:34:59.395531 kernel: hv_netvsc 00224899-a3a6-0022-4899-a3a600224899 eth0: VF slot 1 added Feb 8 23:34:59.404779 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:34:59.410780 kernel: hv_pci 6fe29632-417a-4191-b89b-638354eae20c: PCI VMBus probing: Using version 0x10004 Feb 8 23:34:59.422961 kernel: hv_pci 6fe29632-417a-4191-b89b-638354eae20c: PCI host bridge to bus 417a:00 Feb 8 23:34:59.423119 kernel: pci_bus 417a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:34:59.423249 kernel: pci_bus 417a:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:34:59.432885 kernel: pci 417a:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:34:59.443567 kernel: pci 417a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:34:59.459886 kernel: pci 417a:00:02.0: enabling Extended Tags Feb 8 
23:34:59.474821 kernel: pci 417a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 417a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:34:59.483788 kernel: pci_bus 417a:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:34:59.483963 kernel: pci 417a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:34:59.577787 kernel: mlx5_core 417a:00:02.0: firmware version: 14.30.1350 Feb 8 23:34:59.734787 kernel: mlx5_core 417a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:34:59.774937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:34:59.827784 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466) Feb 8 23:34:59.841621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:34:59.890934 kernel: mlx5_core 417a:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:34:59.891171 kernel: mlx5_core 417a:00:02.0: mlx5e_tc_post_act_init:40:(pid 7): firmware level support is missing Feb 8 23:34:59.902249 kernel: hv_netvsc 00224899-a3a6-0022-4899-a3a600224899 eth0: VF registering: eth1 Feb 8 23:34:59.902417 kernel: mlx5_core 417a:00:02.0 eth1: joined to eth0 Feb 8 23:34:59.914782 kernel: mlx5_core 417a:00:02.0 enP16762s1: renamed from eth1 Feb 8 23:34:59.998519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:35:00.005207 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:35:00.012777 systemd[1]: Starting disk-uuid.service... Feb 8 23:35:00.071300 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:35:01.032746 disk-uuid[559]: The operation has completed successfully. Feb 8 23:35:01.035887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:35:01.104956 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:35:01.105056 systemd[1]: Finished disk-uuid.service. Feb 8 23:35:01.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.119737 systemd[1]: Starting verity-setup.service... Feb 8 23:35:01.154786 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:35:01.471482 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:35:01.478286 systemd[1]: Finished verity-setup.service. Feb 8 23:35:01.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.483573 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:35:01.557783 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:35:01.557890 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:35:01.562012 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:35:01.566308 systemd[1]: Starting ignition-setup.service... Feb 8 23:35:01.571256 systemd[1]: Starting parse-ip-for-networkd.service... 
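The storvsc disk line earlier ("sd 0:0:0:0: [sda] 63737856 512-byte logical blocks") is self-consistent: 63737856 blocks of 512 bytes is about 32.6 GB (decimal) or 30.4 GiB (binary), exactly the pair the kernel prints. In Python:

    blocks, block_size = 63_737_856, 512
    size = blocks * block_size                          # 32,633,782,272 bytes
    print(f"{size/1e9:.1f} GB / {size/2**30:.1f} GiB")  # "32.6 GB / 30.4 GiB", as logged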
Feb 8 23:35:01.586799 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:35:01.586861 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:35:01.586880 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:35:01.644832 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:35:01.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.649000 audit: BPF prog-id=9 op=LOAD Feb 8 23:35:01.651313 systemd[1]: Starting systemd-networkd.service... Feb 8 23:35:01.674887 systemd-networkd[829]: lo: Link UP Feb 8 23:35:01.675180 systemd-networkd[829]: lo: Gained carrier Feb 8 23:35:01.675696 systemd-networkd[829]: Enumeration completed Feb 8 23:35:01.675783 systemd[1]: Started systemd-networkd.service. Feb 8 23:35:01.677480 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:35:01.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.688270 systemd[1]: Reached target network.target. Feb 8 23:35:01.691651 systemd[1]: Starting iscsiuio.service... Feb 8 23:35:01.697188 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:35:01.705102 systemd[1]: Started iscsiuio.service. Feb 8 23:35:01.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.708705 systemd[1]: Starting iscsid.service... Feb 8 23:35:01.715548 iscsid[838]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:35:01.715548 iscsid[838]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 8 23:35:01.715548 iscsid[838]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:35:01.715548 iscsid[838]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:35:01.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.743305 iscsid[838]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:35:01.743305 iscsid[838]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:35:01.734250 systemd[1]: Started iscsid.service. Feb 8 23:35:01.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.739174 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:35:01.766778 kernel: mlx5_core 417a:00:02.0 enP16762s1: Link up Feb 8 23:35:01.752255 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:35:01.754993 systemd[1]: Reached target remote-fs-pre.target.
Feb 8 23:35:01.759962 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:35:01.762367 systemd[1]: Reached target remote-fs.target. Feb 8 23:35:01.767610 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:35:01.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.778682 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:35:01.853287 kernel: hv_netvsc 00224899-a3a6-0022-4899-a3a600224899 eth0: Data path switched to VF: enP16762s1 Feb 8 23:35:01.853521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:35:01.853805 systemd-networkd[829]: enP16762s1: Link UP Feb 8 23:35:01.855888 systemd-networkd[829]: eth0: Link UP Feb 8 23:35:01.856095 systemd-networkd[829]: eth0: Gained carrier Feb 8 23:35:01.864207 systemd-networkd[829]: enP16762s1: Gained carrier Feb 8 23:35:01.899867 systemd-networkd[829]: eth0: DHCPv4 address 10.200.8.36/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:35:01.916258 systemd[1]: Finished ignition-setup.service. Feb 8 23:35:01.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:01.922095 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:35:03.040001 systemd-networkd[829]: eth0: Gained IPv6LL Feb 8 23:35:05.186634 ignition[853]: Ignition 2.14.0 Feb 8 23:35:05.186652 ignition[853]: Stage: fetch-offline Feb 8 23:35:05.186750 ignition[853]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:05.186826 ignition[853]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:05.254881 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:05.255068 ignition[853]: parsed url from cmdline: "" Feb 8 23:35:05.256725 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:35:05.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.269681 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:35:05.269708 kernel: audit: type=1130 audit(1707435305.264:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.255072 ignition[853]: no config URL provided Feb 8 23:35:05.265677 systemd[1]: Starting ignition-fetch.service... 
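In the DHCPv4 line above, 168.63.129.16 is Azure's fixed virtual public IP for the platform DHCP/DNS (wireserver) endpoint, and the address/prefix pair can be unpacked with the standard library. A small sketch:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.36/24")   # from the DHCPv4 line above
    gateway = ipaddress.ip_address("10.200.8.1")
    assert gateway in iface.network                    # the gateway is on-link
    print(iface.network)                               # 10.200.8.0/24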
Feb 8 23:35:05.255078 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:35:05.255087 ignition[853]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:35:05.255092 ignition[853]: failed to fetch config: resource requires networking Feb 8 23:35:05.255452 ignition[853]: Ignition finished successfully Feb 8 23:35:05.274166 ignition[859]: Ignition 2.14.0 Feb 8 23:35:05.274173 ignition[859]: Stage: fetch Feb 8 23:35:05.274270 ignition[859]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:05.274293 ignition[859]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:05.278933 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:05.279729 ignition[859]: parsed url from cmdline: "" Feb 8 23:35:05.279735 ignition[859]: no config URL provided Feb 8 23:35:05.279741 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:35:05.279753 ignition[859]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:35:05.279803 ignition[859]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:35:05.364734 ignition[859]: GET result: OK Feb 8 23:35:05.364756 ignition[859]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 8 23:35:05.473093 ignition[859]: opening config device: "/dev/sr0" Feb 8 23:35:05.473420 ignition[859]: getting drive status for "/dev/sr0" Feb 8 23:35:05.473466 ignition[859]: drive status: OK Feb 8 23:35:05.473503 ignition[859]: mounting config device Feb 8 23:35:05.473526 ignition[859]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3367898974" Feb 8 23:35:05.498518 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000) Feb 8 23:35:05.497806 ignition[859]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3367898974" Feb 8 23:35:05.497813 ignition[859]: checking for config drive Feb 8 23:35:05.499325 systemd[1]: tmp-ignition\x2dazure3367898974.mount: Deactivated successfully. Feb 8 23:35:05.498116 ignition[859]: reading config Feb 8 23:35:05.498472 ignition[859]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3367898974" Feb 8 23:35:05.498552 ignition[859]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3367898974" Feb 8 23:35:05.498574 ignition[859]: config has been read from custom data Feb 8 23:35:05.498635 ignition[859]: parsing config with SHA512: 217bff78626e7ecd2ad296c7191ccf058f0edebb9d3cf87109751bd909ce76e40364d81f9c621003697a4badf5766ad43c0f503d9de8bdddca8d9ddbbd44d96c Feb 8 23:35:05.543214 unknown[859]: fetched base config from "system" Feb 8 23:35:05.543230 unknown[859]: fetched base config from "system" Feb 8 23:35:05.543239 unknown[859]: fetched user config from "azure" Feb 8 23:35:05.551339 ignition[859]: fetch: fetch complete Feb 8 23:35:05.551350 ignition[859]: fetch: fetch passed Feb 8 23:35:05.551402 ignition[859]: Ignition finished successfully Feb 8 23:35:05.558367 systemd[1]: Finished ignition-fetch.service. Feb 8 23:35:05.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.561564 systemd[1]: Starting ignition-kargs.service... 
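The GET above is Ignition's fetch stage querying the Azure Instance Metadata Service. A hand-rolled equivalent for illustration only: IMDS is reachable solely from inside the VM, and it is documented by Azure to require the "Metadata: true" request header (the log does not show headers):

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = resp.read()
    # An empty body is what the log reports as "not a config (empty)", after which
    # Ignition falls back to the Azure custom-data CD-ROM at /dev/sr0.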
Feb 8 23:35:05.578734 kernel: audit: type=1130 audit(1707435305.560:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.586988 ignition[868]: Ignition 2.14.0 Feb 8 23:35:05.586998 ignition[868]: Stage: kargs Feb 8 23:35:05.587128 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:05.587164 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:05.613174 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:05.617422 ignition[868]: kargs: kargs passed Feb 8 23:35:05.617484 ignition[868]: Ignition finished successfully Feb 8 23:35:05.619612 systemd[1]: Finished ignition-kargs.service. Feb 8 23:35:05.624716 systemd[1]: Starting ignition-disks.service... Feb 8 23:35:05.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.643783 kernel: audit: type=1130 audit(1707435305.623:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.646341 ignition[874]: Ignition 2.14.0 Feb 8 23:35:05.646351 ignition[874]: Stage: disks Feb 8 23:35:05.646488 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:05.646520 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:05.657426 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:05.658915 ignition[874]: disks: disks passed Feb 8 23:35:05.658957 ignition[874]: Ignition finished successfully Feb 8 23:35:05.662692 systemd[1]: Finished ignition-disks.service. Feb 8 23:35:05.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.669194 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:35:05.685730 kernel: audit: type=1130 audit(1707435305.668:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.685776 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:35:05.690175 systemd[1]: Reached target local-fs.target. Feb 8 23:35:05.692307 systemd[1]: Reached target sysinit.target. Feb 8 23:35:05.698358 systemd[1]: Reached target basic.target. Feb 8 23:35:05.703198 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:35:05.758359 systemd-fsck[882]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:35:05.763248 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:35:05.781898 kernel: audit: type=1130 audit(1707435305.765:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:05.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:05.766917 systemd[1]: Mounting sysroot.mount... Feb 8 23:35:05.794814 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:35:05.795165 systemd[1]: Mounted sysroot.mount. Feb 8 23:35:05.798980 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:35:05.833346 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:35:05.839829 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:35:05.844640 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:35:05.844678 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:35:05.854638 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:35:05.902950 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:35:05.913081 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:35:05.925777 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (892) Feb 8 23:35:05.934162 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:35:05.942532 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:35:05.942559 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:35:05.942577 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:35:05.943965 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:35:05.955190 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:35:05.962051 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:35:05.985725 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:35:06.473497 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:35:06.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.476984 systemd[1]: Starting ignition-mount.service... Feb 8 23:35:06.500863 kernel: audit: type=1130 audit(1707435306.476:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.494287 systemd[1]: Starting sysroot-boot.service... Feb 8 23:35:06.505241 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:35:06.505372 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:35:06.526666 ignition[959]: INFO : Ignition 2.14.0 Feb 8 23:35:06.529384 ignition[959]: INFO : Stage: mount Feb 8 23:35:06.531858 ignition[959]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:06.535191 ignition[959]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:06.553557 kernel: audit: type=1130 audit(1707435306.537:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:06.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.534208 systemd[1]: Finished sysroot-boot.service. Feb 8 23:35:06.556408 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:06.560000 ignition[959]: INFO : mount: mount passed Feb 8 23:35:06.564181 ignition[959]: INFO : Ignition finished successfully Feb 8 23:35:06.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:06.560777 systemd[1]: Finished ignition-mount.service. Feb 8 23:35:06.578668 kernel: audit: type=1130 audit(1707435306.564:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:07.218729 coreos-metadata[891]: Feb 08 23:35:07.218 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:35:07.234555 coreos-metadata[891]: Feb 08 23:35:07.234 INFO Fetch successful Feb 8 23:35:07.266942 coreos-metadata[891]: Feb 08 23:35:07.266 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:35:07.277285 coreos-metadata[891]: Feb 08 23:35:07.277 INFO Fetch successful Feb 8 23:35:07.293594 coreos-metadata[891]: Feb 08 23:35:07.293 INFO wrote hostname ci-3510.3.2-a-baa4ff5fd1 to /sysroot/etc/hostname Feb 8 23:35:07.295593 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:35:07.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:07.318087 kernel: audit: type=1130 audit(1707435307.300:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:07.315748 systemd[1]: Starting ignition-files.service... Feb 8 23:35:07.323286 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:35:07.333778 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (970) Feb 8 23:35:07.333814 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:35:07.342674 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:35:07.342698 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:35:07.350542 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
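The flatcar-metadata-hostname service above resolves the VM name from IMDS and writes it into /sysroot/etc/hostname. Below is a rough Python equivalent of those two steps, using the endpoint and api-version taken verbatim from the log; the real coreos-metadata binary also contacts the WireServer (168.63.129.16) first, which this sketch skips.

    # Sketch: fetch the instance name from Azure IMDS and persist it as the
    # hostname, approximating what flatcar-metadata-hostname logs above
    # ("wrote hostname ... to /sysroot/etc/hostname").
    import urllib.request

    IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

    def write_hostname(dest="/sysroot/etc/hostname"):
        req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            name = resp.read().decode().strip()
        with open(dest, "w") as f:
            f.write(name + "\n")
        return name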
Feb 8 23:35:07.364099 ignition[989]: INFO : Ignition 2.14.0 Feb 8 23:35:07.366392 ignition[989]: INFO : Stage: files Feb 8 23:35:07.366392 ignition[989]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:07.366392 ignition[989]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:07.379160 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:07.398605 ignition[989]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:35:07.402212 ignition[989]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:35:07.402212 ignition[989]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:35:07.468926 ignition[989]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:35:07.473782 ignition[989]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:35:07.483079 unknown[989]: wrote ssh authorized keys file for user: core Feb 8 23:35:07.485797 ignition[989]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:35:07.503320 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:35:07.508813 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:35:07.913810 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:35:08.053824 ignition[989]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 8 23:35:08.062903 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:35:08.062903 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:35:08.062903 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:35:08.256323 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:35:08.367892 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:35:08.374030 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:35:08.374030 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 8 23:35:08.853364 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:35:09.013161 ignition[989]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 8 23:35:09.021346 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:35:09.021346 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:35:09.021346 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:35:09.276855 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:35:13.458981 ignition[989]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Feb 8 23:35:13.467909 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:35:13.467909 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:35:13.467909 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:35:13.604773 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:35:13.838812 ignition[989]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 8 23:35:13.847259 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:35:13.847259 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:35:13.847259 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:35:13.966111 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:35:14.470977 ignition[989]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 8 23:35:14.479192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:35:14.479192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:35:14.479192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:35:14.479192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:35:14.479192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:35:14.964952 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 8 
23:35:15.068890 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:35:15.074695 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:35:15.079595 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:35:15.079595 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:35:15.088760 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:35:15.093401 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:35:15.097860 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:35:15.102478 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:35:15.107186 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:35:15.111824 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:35:15.116530 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:35:15.121296 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:35:15.126192 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:35:15.133801 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2650475079" Feb 8 23:35:15.145970 ignition[989]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2650475079": device or resource busy Feb 8 23:35:15.145970 ignition[989]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2650475079", trying btrfs: device or resource busy Feb 8 23:35:15.145970 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2650475079" Feb 8 23:35:15.163297 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (989) Feb 8 23:35:15.163319 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2650475079" Feb 8 23:35:15.168748 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2650475079" Feb 8 23:35:15.174428 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2650475079" Feb 8 23:35:15.174428 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:35:15.174428 ignition[989]: 
INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:35:15.174428 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:35:15.169966 systemd[1]: mnt-oem2650475079.mount: Deactivated successfully. Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2587824481" Feb 8 23:35:15.197208 ignition[989]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2587824481": device or resource busy Feb 8 23:35:15.197208 ignition[989]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2587824481", trying btrfs: device or resource busy Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2587824481" Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2587824481" Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2587824481" Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2587824481" Feb 8 23:35:15.197208 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:35:15.197208 ignition[989]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 8 23:35:15.188011 systemd[1]: mnt-oem2587824481.mount: Deactivated successfully. 
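Each createFiles op above downloads a binary (crictl, helm, the CNI plugins, kubectl, kubeadm, kubelet), checks it against an expected SHA512 sum, and only then writes it under /sysroot. The sketch below illustrates that download-then-verify pattern in Python; it is not Ignition's code, and the URL/digest pair is simply copied from the kubectl op in the log.

    # Sketch: fetch a file and verify its SHA512 against an expected digest,
    # mirroring the "GET ... / file matches expected sum of: ..." pairs above.
    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha512, dest):
        with urllib.request.urlopen(url, timeout=60) as resp:
            data = resp.read()
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512:
            raise ValueError(f"checksum mismatch for {url}: got {digest}")
        with open(dest, "wb") as f:
            f.write(data)

    if __name__ == "__main__":
        # Values taken from the kubectl op logged above.
        fetch_verified(
            "https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl",
            "33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b"
            "60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3",
            "/sysroot/opt/bin/kubectl",
        )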
Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(22): [started] setting preset to enabled for "waagent.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(22): [finished] setting preset to enabled for "waagent.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:35:15.210796 ignition[989]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:35:15.210796 ignition[989]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:35:15.210796 ignition[989]: INFO : files: files passed Feb 8 23:35:15.210796 ignition[989]: INFO : Ignition finished successfully Feb 8 23:35:15.318691 kernel: audit: type=1130 audit(1707435315.212:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.198492 systemd[1]: Finished ignition-files.service. Feb 8 23:35:15.268358 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
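The op(20) through op(24) entries above record Ignition marking units as enabled through systemd presets rather than enabling them directly. The sketch below shows roughly what that amounts to on disk, assuming the conventional preset mechanism (a *.preset file containing "enable <unit>" lines that systemd applies on first boot); the preset file name used here is an assumption for illustration, not something taken from the log.

    # Sketch: record "enable" presets for units under the new root, roughly what
    # the "setting preset to enabled" ops above boil down to.  The file name
    # 20-ignition.preset is assumed for illustration; systemd applies whatever
    # *.preset files exist when it presets units on first boot.
    from pathlib import Path

    UNITS = [
        "prepare-critools.service",
        "prepare-helm.service",
        "waagent.service",
        "nvidia.service",
        "prepare-cni-plugins.service",
    ]

    def write_presets(root="/sysroot"):
        preset_dir = Path(root) / "etc/systemd/system-preset"
        preset_dir.mkdir(parents=True, exist_ok=True)
        preset = preset_dir / "20-ignition.preset"
        preset.write_text("".join(f"enable {u}\n" for u in UNITS))
        return preset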
Feb 8 23:35:15.319676 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:35:15.373252 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:35:15.378864 systemd[1]: Starting ignition-quench.service... Feb 8 23:35:15.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.381307 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:35:15.404564 kernel: audit: type=1130 audit(1707435315.386:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.386407 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:35:15.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.386485 systemd[1]: Finished ignition-quench.service. Feb 8 23:35:15.434350 kernel: audit: type=1130 audit(1707435315.404:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.434380 kernel: audit: type=1131 audit(1707435315.404:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.417607 systemd[1]: Reached target ignition-complete.target. Feb 8 23:35:15.435103 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:35:15.450757 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:35:15.453610 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:35:15.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.458272 systemd[1]: Reached target initrd-fs.target. Feb 8 23:35:15.487429 kernel: audit: type=1130 audit(1707435315.457:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.487458 kernel: audit: type=1131 audit(1707435315.457:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.487422 systemd[1]: Reached target initrd.target. Feb 8 23:35:15.489434 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:35:15.490287 systemd[1]: Starting dracut-pre-pivot.service... 
Feb 8 23:35:15.502617 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:35:15.522784 kernel: audit: type=1130 audit(1707435315.506:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.520435 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:35:15.529303 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:35:15.531583 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:35:15.535947 systemd[1]: Stopped target timers.target. Feb 8 23:35:15.540073 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:35:15.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.540198 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:35:15.563774 kernel: audit: type=1131 audit(1707435315.544:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.544318 systemd[1]: Stopped target initrd.target. Feb 8 23:35:15.559675 systemd[1]: Stopped target basic.target. Feb 8 23:35:15.563827 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:35:15.568148 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:35:15.572619 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:35:15.577195 systemd[1]: Stopped target remote-fs.target. Feb 8 23:35:15.584095 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:35:15.590538 systemd[1]: Stopped target sysinit.target. Feb 8 23:35:15.594782 systemd[1]: Stopped target local-fs.target. Feb 8 23:35:15.598917 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:35:15.603232 systemd[1]: Stopped target swap.target. Feb 8 23:35:15.607008 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:35:15.607128 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:35:15.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.625789 kernel: audit: type=1131 audit(1707435315.611:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.624890 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:35:15.629171 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:35:15.631704 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:35:15.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.636117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:35:15.653755 kernel: audit: type=1131 audit(1707435315.635:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:15.636219 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:35:15.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.656313 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:35:15.658987 systemd[1]: Stopped ignition-files.service. Feb 8 23:35:15.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.664574 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 8 23:35:15.664841 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 8 23:35:15.670679 systemd[1]: Stopping ignition-mount.service... Feb 8 23:35:15.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.677786 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:35:15.679907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:35:15.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.694948 ignition[1027]: INFO : Ignition 2.14.0 Feb 8 23:35:15.694948 ignition[1027]: INFO : Stage: umount Feb 8 23:35:15.694948 ignition[1027]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:35:15.694948 ignition[1027]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:35:15.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.680063 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:35:15.713543 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:35:15.713543 ignition[1027]: INFO : umount: umount passed Feb 8 23:35:15.713543 ignition[1027]: INFO : Ignition finished successfully Feb 8 23:35:15.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.682676 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 8 23:35:15.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.682829 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:35:15.688034 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:35:15.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.688112 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:35:15.702177 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:35:15.702251 systemd[1]: Stopped ignition-mount.service. Feb 8 23:35:15.704916 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:35:15.704960 systemd[1]: Stopped ignition-disks.service. Feb 8 23:35:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.721081 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:35:15.723397 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:35:15.728295 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:35:15.728340 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:35:15.735045 systemd[1]: Stopped target network.target. Feb 8 23:35:15.739100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:35:15.739161 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:35:15.750168 systemd[1]: Stopped target paths.target. Feb 8 23:35:15.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.757041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:35:15.762798 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:35:15.766660 systemd[1]: Stopped target slices.target. Feb 8 23:35:15.768593 systemd[1]: Stopped target sockets.target. Feb 8 23:35:15.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.773468 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:35:15.773501 systemd[1]: Closed iscsid.socket. Feb 8 23:35:15.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.777483 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:35:15.777516 systemd[1]: Closed iscsiuio.socket. Feb 8 23:35:15.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.779352 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:35:15.811000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:35:15.779398 systemd[1]: Stopped ignition-setup.service. 
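The teardown above is dominated by kernel audit records (type=1130/1131, SERVICE_START/SERVICE_STOP) that mirror each systemd message. They are plain key=value text, so a few lines of Python are enough to pull out the unit and result when post-processing a captured console log like this one; this is purely a log-analysis convenience sketch, not part of the boot flow.

    # Sketch: extract unit=/res= fields from the audit SERVICE_START/SERVICE_STOP
    # records that appear throughout this log.
    import re

    AUDIT_RE = re.compile(
        r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(\w+)")

    def audit_events(text):
        for kind, unit, res in AUDIT_RE.findall(text):
            yield kind, unit, res

    sample = ("Feb 8 23:35:15.728000 audit[1]: SERVICE_STOP pid=1 uid=0 "
              "auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs "
              "comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? "
              "terminal=? res=success'")
    print(list(audit_events(sample)))   # [('SERVICE_STOP', 'ignition-kargs', 'success')]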
Feb 8 23:35:15.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.783638 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:35:15.788385 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:35:15.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.792822 systemd-networkd[829]: eth0: DHCPv6 lease lost Feb 8 23:35:15.832000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:35:15.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.795928 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:35:15.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.796369 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:35:15.796456 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:35:15.802020 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:35:15.802110 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:35:15.808353 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:35:15.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.808434 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:35:15.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.812149 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:35:15.812183 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:35:15.817006 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:35:15.817054 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:35:15.819309 systemd[1]: Stopping network-cleanup.service... Feb 8 23:35:15.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.825503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:35:15.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.825560 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 8 23:35:15.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.827995 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:35:15.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.828042 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:35:15.832377 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:35:15.832424 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:35:15.834888 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:35:15.839037 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:35:15.839161 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:35:15.846882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:35:15.846920 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:35:15.853311 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:35:15.853411 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:35:15.857898 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:35:15.935233 kernel: hv_netvsc 00224899-a3a6-0022-4899-a3a600224899 eth0: Data path switched from VF: enP16762s1 Feb 8 23:35:15.857943 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:35:15.862498 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:35:15.862540 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:35:15.864648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:35:15.864692 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:35:15.869617 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:35:15.881256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:35:15.881326 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:35:15.886069 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:35:15.886116 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:35:15.891148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:35:15.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:15.891196 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:35:15.896218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:35:15.896308 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:35:15.961568 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:35:15.961684 systemd[1]: Stopped network-cleanup.service. Feb 8 23:35:15.966509 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:35:15.971824 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:35:15.985831 systemd[1]: Switching root. Feb 8 23:35:16.016026 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:35:16.016097 iscsid[838]: iscsid shutting down. 
Feb 8 23:35:16.017916 systemd-journald[183]: Journal stopped Feb 8 23:35:28.617961 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:35:28.618000 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:35:28.618020 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:35:28.618037 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:35:28.618051 kernel: SELinux: policy capability open_perms=1 Feb 8 23:35:28.618066 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:35:28.618081 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:35:28.618102 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:35:28.618115 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:35:28.618128 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:35:28.619937 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:35:28.619966 systemd[1]: Successfully loaded SELinux policy in 303.728ms. Feb 8 23:35:28.619987 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.544ms. Feb 8 23:35:28.620006 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:35:28.620029 systemd[1]: Detected virtualization microsoft. Feb 8 23:35:28.620044 systemd[1]: Detected architecture x86-64. Feb 8 23:35:28.620057 systemd[1]: Detected first boot. Feb 8 23:35:28.620072 systemd[1]: Hostname set to . Feb 8 23:35:28.620086 systemd[1]: Initializing machine ID from random generator. Feb 8 23:35:28.620119 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:35:28.620132 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:35:28.620142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:35:28.620155 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:35:28.620168 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:35:28.620178 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 8 23:35:28.620187 kernel: audit: type=1334 audit(1707435328.094:90): prog-id=12 op=LOAD Feb 8 23:35:28.620200 kernel: audit: type=1334 audit(1707435328.094:91): prog-id=3 op=UNLOAD Feb 8 23:35:28.620212 kernel: audit: type=1334 audit(1707435328.099:92): prog-id=13 op=LOAD Feb 8 23:35:28.620220 kernel: audit: type=1334 audit(1707435328.103:93): prog-id=14 op=LOAD Feb 8 23:35:28.620230 kernel: audit: type=1334 audit(1707435328.103:94): prog-id=4 op=UNLOAD Feb 8 23:35:28.620244 kernel: audit: type=1334 audit(1707435328.103:95): prog-id=5 op=UNLOAD Feb 8 23:35:28.620255 kernel: audit: type=1334 audit(1707435328.108:96): prog-id=15 op=LOAD Feb 8 23:35:28.620264 systemd[1]: iscsiuio.service: Deactivated successfully. 
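On this first boot systemd initializes the machine ID from the random generator, as logged above. The sketch below produces an identifier in the same on-disk format (/etc/machine-id holds 32 lowercase hex characters); systemd derives the value slightly differently (formatting a random 128-bit value as a UUIDv4-style ID), so treat this as an approximation of the idea rather than the exact algorithm.

    # Sketch: create a machine-id-style identifier (32 hex characters), an
    # approximation of "Initializing machine ID from random generator" above.
    import secrets

    def new_machine_id() -> str:
        return secrets.token_hex(16)   # 128 random bits as 32 hex characters

    if __name__ == "__main__":
        # systemd would write this (plus a trailing newline) to /etc/machine-id.
        print(new_machine_id())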
Feb 8 23:35:28.620274 kernel: audit: type=1334 audit(1707435328.108:97): prog-id=12 op=UNLOAD Feb 8 23:35:28.620286 kernel: audit: type=1334 audit(1707435328.113:98): prog-id=16 op=LOAD Feb 8 23:35:28.620298 kernel: audit: type=1334 audit(1707435328.118:99): prog-id=17 op=LOAD Feb 8 23:35:28.620307 systemd[1]: Stopped iscsiuio.service. Feb 8 23:35:28.620319 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:35:28.620329 systemd[1]: Stopped iscsid.service. Feb 8 23:35:28.620341 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:35:28.620354 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:35:28.620368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:35:28.620381 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:35:28.620391 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:35:28.620403 systemd[1]: Created slice system-getty.slice. Feb 8 23:35:28.620412 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:35:28.620425 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:35:28.620434 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:35:28.620447 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:35:28.620457 systemd[1]: Created slice user.slice. Feb 8 23:35:28.620470 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:35:28.620480 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:35:28.620492 systemd[1]: Set up automount boot.automount. Feb 8 23:35:28.620504 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:35:28.620514 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:35:28.620526 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:35:28.620536 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:35:28.620549 systemd[1]: Reached target integritysetup.target. Feb 8 23:35:28.620561 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:35:28.620573 systemd[1]: Reached target remote-fs.target. Feb 8 23:35:28.620584 systemd[1]: Reached target slices.target. Feb 8 23:35:28.620595 systemd[1]: Reached target swap.target. Feb 8 23:35:28.620604 systemd[1]: Reached target torcx.target. Feb 8 23:35:28.620617 systemd[1]: Reached target veritysetup.target. Feb 8 23:35:28.620632 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:35:28.620642 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:35:28.620654 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:35:28.620665 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:35:28.620676 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:35:28.620686 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:35:28.620699 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:35:28.620711 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:35:28.620723 systemd[1]: Mounting media.mount... Feb 8 23:35:28.620736 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:35:28.620747 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:35:28.620759 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:35:28.620814 systemd[1]: Mounting tmp.mount... Feb 8 23:35:28.620826 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:35:28.620839 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:35:28.620848 systemd[1]: Starting kmod-static-nodes.service... 
Feb 8 23:35:28.620863 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:35:28.620875 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:35:28.620886 systemd[1]: Starting modprobe@drm.service... Feb 8 23:35:28.620896 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:35:28.620908 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:35:28.620920 systemd[1]: Starting modprobe@loop.service... Feb 8 23:35:28.620930 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:35:28.620942 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:35:28.620953 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:35:28.620967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:35:28.620977 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:35:28.620989 systemd[1]: Stopped systemd-journald.service. Feb 8 23:35:28.621001 systemd[1]: Starting systemd-journald.service... Feb 8 23:35:28.621012 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:35:28.621023 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:35:28.621034 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:35:28.621047 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:35:28.621056 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:35:28.621071 systemd[1]: Stopped verity-setup.service. Feb 8 23:35:28.621081 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:35:28.621093 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:35:28.621102 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:35:28.621115 kernel: loop: module loaded Feb 8 23:35:28.621126 systemd[1]: Mounted media.mount. Feb 8 23:35:28.621136 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:35:28.621146 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:35:28.621158 kernel: fuse: init (API version 7.34) Feb 8 23:35:28.621172 systemd[1]: Mounted tmp.mount. Feb 8 23:35:28.621181 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:35:28.621201 systemd-journald[1157]: Journal started Feb 8 23:35:28.621252 systemd-journald[1157]: Runtime Journal (/run/log/journal/68cb283cd3154a29b62a12a5f767d137) is 8.0M, max 159.0M, 151.0M free. 
Feb 8 23:35:18.264000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:35:18.905000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:35:18.924000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:35:18.924000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:35:18.924000 audit: BPF prog-id=10 op=LOAD Feb 8 23:35:18.924000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:35:18.924000 audit: BPF prog-id=11 op=LOAD Feb 8 23:35:18.924000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:35:20.178000 audit[1060]: AVC avc: denied { associate } for pid=1060 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:35:20.178000 audit[1060]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:20.178000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:35:20.187000 audit[1060]: AVC avc: denied { associate } for pid=1060 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:35:20.187000 audit[1060]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:20.187000 audit: CWD cwd="/" Feb 8 23:35:20.187000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:20.187000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:20.187000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:35:28.094000 audit: BPF prog-id=12 op=LOAD Feb 8 23:35:28.094000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:35:28.099000 audit: BPF prog-id=13 op=LOAD Feb 8 23:35:28.103000 audit: BPF prog-id=14 op=LOAD Feb 8 
23:35:28.103000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:35:28.103000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:35:28.108000 audit: BPF prog-id=15 op=LOAD Feb 8 23:35:28.108000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:35:28.113000 audit: BPF prog-id=16 op=LOAD Feb 8 23:35:28.118000 audit: BPF prog-id=17 op=LOAD Feb 8 23:35:28.118000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:35:28.118000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:35:28.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.156000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:35:28.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.511000 audit: BPF prog-id=18 op=LOAD Feb 8 23:35:28.511000 audit: BPF prog-id=19 op=LOAD Feb 8 23:35:28.511000 audit: BPF prog-id=20 op=LOAD Feb 8 23:35:28.511000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:35:28.511000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:35:28.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:28.614000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:35:28.614000 audit[1157]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc65f86cb0 a2=4000 a3=7ffc65f86d4c items=0 ppid=1 pid=1157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:28.614000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:35:20.135549 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:35:28.093549 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:35:20.149162 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:35:28.120131 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:35:20.149187 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:35:20.149233 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:35:20.149245 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:35:20.149308 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:35:20.149327 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:35:20.149564 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:35:20.149617 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:35:20.149637 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:35:20.164286 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:35:20.164330 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:35:20.164361 /usr/lib/systemd/system-generators/torcx-generator[1060]: 
time="2024-02-08T23:35:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:35:20.164379 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:35:20.164400 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:35:20.164415 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:35:27.101217 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:35:27.101470 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:35:27.101935 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:35:27.102279 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:35:27.102334 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:35:27.102389 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:35:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:35:28.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.632381 systemd[1]: Started systemd-journald.service. Feb 8 23:35:28.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.633119 systemd[1]: Finished kmod-static-nodes.service. 
Feb 8 23:35:28.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.635821 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:35:28.635958 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:35:28.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.638863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:35:28.638998 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:35:28.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.641825 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:35:28.641971 systemd[1]: Finished modprobe@drm.service. Feb 8 23:35:28.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.644719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:35:28.644911 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:35:28.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.647950 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:35:28.648081 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:35:28.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.650807 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:35:28.650941 systemd[1]: Finished modprobe@loop.service. 
Feb 8 23:35:28.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.653571 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:35:28.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.656402 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:35:28.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.659422 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:35:28.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.662488 systemd[1]: Reached target network-pre.target. Feb 8 23:35:28.665722 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:35:28.669426 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:35:28.672704 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:35:28.674367 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:35:28.677752 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:35:28.680221 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:35:28.681550 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:35:28.684076 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:35:28.685460 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:35:28.690066 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:35:28.697058 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:35:28.704916 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:35:28.713406 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:35:28.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.716191 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:35:28.725205 systemd-journald[1157]: Time spent on flushing to /var/log/journal/68cb283cd3154a29b62a12a5f767d137 is 26.615ms for 1202 entries. Feb 8 23:35:28.725205 systemd-journald[1157]: System Journal (/var/log/journal/68cb283cd3154a29b62a12a5f767d137) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:35:28.815617 systemd-journald[1157]: Received client request to flush runtime journal. Feb 8 23:35:28.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:28.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:28.748950 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:35:28.753545 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:35:28.816241 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:35:28.757411 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:35:28.816703 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:35:28.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:29.208662 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:35:29.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:29.212736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:35:29.520384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:35:29.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:29.886472 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:35:29.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:29.889000 audit: BPF prog-id=21 op=LOAD Feb 8 23:35:29.889000 audit: BPF prog-id=22 op=LOAD Feb 8 23:35:29.889000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:35:29.889000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:35:29.890857 systemd[1]: Starting systemd-udevd.service... Feb 8 23:35:29.910214 systemd-udevd[1188]: Using default interface naming scheme 'v252'. Feb 8 23:35:30.142227 systemd[1]: Started systemd-udevd.service. Feb 8 23:35:30.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:30.145000 audit: BPF prog-id=23 op=LOAD Feb 8 23:35:30.147791 systemd[1]: Starting systemd-networkd.service... Feb 8 23:35:30.182439 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:35:30.256811 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:35:30.261000 audit: BPF prog-id=24 op=LOAD Feb 8 23:35:30.261000 audit: BPF prog-id=25 op=LOAD Feb 8 23:35:30.261000 audit: BPF prog-id=26 op=LOAD Feb 8 23:35:30.263526 systemd[1]: Starting systemd-userdbd.service... 
Feb 8 23:35:30.311613 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:35:30.311723 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:35:30.300000 audit[1197]: AVC avc: denied { confidentiality } for pid=1197 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:35:30.317780 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:35:30.321777 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:35:31.328247 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:35:31.328322 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:35:31.328406 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:35:31.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.336957 systemd[1]: Started systemd-userdbd.service. Feb 8 23:35:30.300000 audit[1197]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5577f3027d80 a1=f884 a2=7ffa999ebbc5 a3=5 items=12 ppid=1188 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:30.300000 audit: CWD cwd="/" Feb 8 23:35:30.300000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=1 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=2 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=3 name=(null) inode=15560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=4 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=5 name=(null) inode=15561 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=6 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=7 name=(null) inode=15562 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=8 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=9 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=10 name=(null) inode=15559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PATH item=11 name=(null) inode=15564 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:35:30.300000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:35:31.354217 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:35:31.401227 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:35:31.401338 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:35:31.408537 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:35:31.414363 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:35:31.514150 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1193) Feb 8 23:35:31.550148 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:35:31.586004 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:35:31.617485 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:35:31.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.621716 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:35:31.668244 systemd-networkd[1194]: lo: Link UP Feb 8 23:35:31.668254 systemd-networkd[1194]: lo: Gained carrier Feb 8 23:35:31.668807 systemd-networkd[1194]: Enumeration completed Feb 8 23:35:31.668913 systemd[1]: Started systemd-networkd.service. Feb 8 23:35:31.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:31.673295 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:35:31.704606 systemd-networkd[1194]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:35:31.761146 kernel: mlx5_core 417a:00:02.0 enP16762s1: Link up Feb 8 23:35:31.806155 kernel: hv_netvsc 00224899-a3a6-0022-4899-a3a600224899 eth0: Data path switched to VF: enP16762s1 Feb 8 23:35:31.807204 systemd-networkd[1194]: enP16762s1: Link UP Feb 8 23:35:31.807502 systemd-networkd[1194]: eth0: Link UP Feb 8 23:35:31.807603 systemd-networkd[1194]: eth0: Gained carrier Feb 8 23:35:31.812397 systemd-networkd[1194]: enP16762s1: Gained carrier Feb 8 23:35:31.840261 systemd-networkd[1194]: eth0: DHCPv4 address 10.200.8.36/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:35:31.982994 lvm[1264]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:35:32.012416 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:35:32.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.016010 systemd[1]: Reached target cryptsetup.target. Feb 8 23:35:32.020313 systemd[1]: Starting lvm2-activation.service... Feb 8 23:35:32.024964 lvm[1266]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 8 23:35:32.045108 systemd[1]: Finished lvm2-activation.service. Feb 8 23:35:32.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.048042 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:35:32.050743 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:35:32.050780 systemd[1]: Reached target local-fs.target. Feb 8 23:35:32.053106 systemd[1]: Reached target machines.target. Feb 8 23:35:32.056851 systemd[1]: Starting ldconfig.service... Feb 8 23:35:32.059404 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:35:32.059503 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:35:32.060691 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:35:32.064196 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:35:32.068315 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:35:32.071189 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:35:32.071280 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:35:32.072348 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:35:32.104718 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl) Feb 8 23:35:32.105944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:35:32.211409 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:35:32.480402 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:35:32.512708 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:35:32.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:32.532540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:35:33.091394 systemd-networkd[1194]: eth0: Gained IPv6LL Feb 8 23:35:33.096985 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:35:33.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:33.877684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:35:33.878335 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:35:33.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:34.155931 systemd-fsck[1276]: fsck.fat 4.2 (2021-01-31) Feb 8 23:35:34.155931 systemd-fsck[1276]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:35:34.158423 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:35:34.163895 systemd[1]: Mounting boot.mount... Feb 8 23:35:34.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.170569 kernel: kauditd_printk_skb: 79 callbacks suppressed Feb 8 23:35:34.170644 kernel: audit: type=1130 audit(1707435334.162:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.180470 systemd[1]: Mounted boot.mount. Feb 8 23:35:34.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.195505 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:35:34.210225 kernel: audit: type=1130 audit(1707435334.197:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.308450 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:35:34.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.312966 systemd[1]: Starting audit-rules.service... Feb 8 23:35:34.323199 kernel: audit: type=1130 audit(1707435334.311:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.326432 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:35:34.333000 audit: BPF prog-id=27 op=LOAD Feb 8 23:35:34.330220 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:35:34.334629 systemd[1]: Starting systemd-resolved.service... Feb 8 23:35:34.339464 kernel: audit: type=1334 audit(1707435334.333:165): prog-id=27 op=LOAD Feb 8 23:35:34.346581 kernel: audit: type=1334 audit(1707435334.340:166): prog-id=28 op=LOAD Feb 8 23:35:34.340000 audit: BPF prog-id=28 op=LOAD Feb 8 23:35:34.345097 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:35:34.348477 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:35:34.381821 kernel: audit: type=1127 audit(1707435334.369:167): pid=1289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.369000 audit[1289]: SYSTEM_BOOT pid=1289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.387894 systemd[1]: Finished systemd-update-utmp.service. 
Feb 8 23:35:34.402505 kernel: audit: type=1130 audit(1707435334.390:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.419928 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:35:34.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.422840 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:35:34.434153 kernel: audit: type=1130 audit(1707435334.422:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.445440 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:35:34.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.448557 systemd[1]: Reached target time-set.target. Feb 8 23:35:34.464365 kernel: audit: type=1130 audit(1707435334.448:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.540033 systemd-resolved[1286]: Positive Trust Anchors: Feb 8 23:35:34.540049 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:35:34.540087 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:35:34.589283 systemd-timesyncd[1288]: Contacted time server 193.1.8.106:123 (0.flatcar.pool.ntp.org). Feb 8 23:35:34.589355 systemd-timesyncd[1288]: Initial clock synchronization to Thu 2024-02-08 23:35:34.591280 UTC. Feb 8 23:35:34.599075 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:35:34.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:35:34.617144 kernel: audit: type=1130 audit(1707435334.602:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:35:34.647000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:35:34.647000 audit[1304]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce271c4e0 a2=420 a3=0 items=0 ppid=1283 pid=1304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:35:34.647000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:35:34.648439 augenrules[1304]: No rules Feb 8 23:35:34.648892 systemd[1]: Finished audit-rules.service. Feb 8 23:35:34.671781 systemd-resolved[1286]: Using system hostname 'ci-3510.3.2-a-baa4ff5fd1'. Feb 8 23:35:34.673665 systemd[1]: Started systemd-resolved.service. Feb 8 23:35:34.676936 systemd[1]: Reached target network.target. Feb 8 23:35:34.679542 systemd[1]: Reached target network-online.target. Feb 8 23:35:34.681930 systemd[1]: Reached target nss-lookup.target. Feb 8 23:35:40.381429 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:35:40.390696 systemd[1]: Finished ldconfig.service. Feb 8 23:35:40.395269 systemd[1]: Starting systemd-update-done.service... Feb 8 23:35:40.418468 systemd[1]: Finished systemd-update-done.service. Feb 8 23:35:40.421662 systemd[1]: Reached target sysinit.target. Feb 8 23:35:40.424478 systemd[1]: Started motdgen.path. Feb 8 23:35:40.427506 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:35:40.431748 systemd[1]: Started logrotate.timer. Feb 8 23:35:40.434793 systemd[1]: Started mdadm.timer. Feb 8 23:35:40.437070 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:35:40.439872 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:35:40.439925 systemd[1]: Reached target paths.target. Feb 8 23:35:40.442361 systemd[1]: Reached target timers.target. Feb 8 23:35:40.445470 systemd[1]: Listening on dbus.socket. Feb 8 23:35:40.449065 systemd[1]: Starting docker.socket... Feb 8 23:35:40.454888 systemd[1]: Listening on sshd.socket. Feb 8 23:35:40.457754 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:35:40.458221 systemd[1]: Listening on docker.socket. Feb 8 23:35:40.461383 systemd[1]: Reached target sockets.target. Feb 8 23:35:40.464174 systemd[1]: Reached target basic.target. Feb 8 23:35:40.466859 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:35:40.466893 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:35:40.467904 systemd[1]: Starting containerd.service... Feb 8 23:35:40.471823 systemd[1]: Starting dbus.service... Feb 8 23:35:40.475576 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:35:40.479995 systemd[1]: Starting extend-filesystems.service... Feb 8 23:35:40.482656 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:35:40.484213 systemd[1]: Starting motdgen.service... Feb 8 23:35:40.488256 systemd[1]: Started nvidia.service. 
Feb 8 23:35:40.492390 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:35:40.495663 systemd[1]: Starting prepare-critools.service... Feb 8 23:35:40.499245 systemd[1]: Starting prepare-helm.service... Feb 8 23:35:40.502351 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:35:40.505744 systemd[1]: Starting sshd-keygen.service... Feb 8 23:35:40.510653 systemd[1]: Starting systemd-logind.service... Feb 8 23:35:40.512784 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:35:40.512858 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:35:40.513388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:35:40.514231 systemd[1]: Starting update-engine.service... Feb 8 23:35:40.517614 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:35:40.532744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:35:40.532958 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:35:40.592946 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:35:40.593180 systemd[1]: Finished motdgen.service. Feb 8 23:35:40.601983 jq[1314]: false Feb 8 23:35:40.602297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:35:40.602506 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:35:40.604555 jq[1330]: true Feb 8 23:35:40.607921 extend-filesystems[1315]: Found sda Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda1 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda2 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda3 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found usr Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda4 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda6 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda7 Feb 8 23:35:40.610412 extend-filesystems[1315]: Found sda9 Feb 8 23:35:40.610412 extend-filesystems[1315]: Checking size of /dev/sda9 Feb 8 23:35:40.636517 jq[1344]: true Feb 8 23:35:40.663088 systemd-logind[1327]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:35:40.666979 systemd-logind[1327]: New seat seat0. Feb 8 23:35:40.685339 tar[1332]: ./ Feb 8 23:35:40.685339 tar[1332]: ./loopback Feb 8 23:35:40.690958 tar[1334]: linux-amd64/helm Feb 8 23:35:40.694413 tar[1333]: crictl Feb 8 23:35:40.696768 env[1338]: time="2024-02-08T23:35:40.696711444Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:35:40.703976 extend-filesystems[1315]: Old size kept for /dev/sda9 Feb 8 23:35:40.710388 extend-filesystems[1315]: Found sr0 Feb 8 23:35:40.707200 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:35:40.707360 systemd[1]: Finished extend-filesystems.service. Feb 8 23:35:40.790835 bash[1371]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:35:40.791214 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:35:40.804221 dbus-daemon[1313]: [system] SELinux support is enabled Feb 8 23:35:40.804408 systemd[1]: Started dbus.service. Feb 8 23:35:40.807704 env[1338]: time="2024-02-08T23:35:40.807665507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 8 23:35:40.807952 env[1338]: time="2024-02-08T23:35:40.807932132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.809169 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:35:40.809207 systemd[1]: Reached target system-config.target. Feb 8 23:35:40.811718 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:35:40.811744 systemd[1]: Reached target user-config.target. Feb 8 23:35:40.815606 systemd[1]: Started systemd-logind.service. Feb 8 23:35:40.818646 dbus-daemon[1313]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:35:40.819560 env[1338]: time="2024-02-08T23:35:40.819521825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:35:40.819651 env[1338]: time="2024-02-08T23:35:40.819637436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.820018 env[1338]: time="2024-02-08T23:35:40.819995469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:35:40.821175 env[1338]: time="2024-02-08T23:35:40.821151778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.821283 env[1338]: time="2024-02-08T23:35:40.821265689Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:35:40.821345 env[1338]: time="2024-02-08T23:35:40.821332095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.821495 env[1338]: time="2024-02-08T23:35:40.821480909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.821810 env[1338]: time="2024-02-08T23:35:40.821791539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:35:40.822089 env[1338]: time="2024-02-08T23:35:40.822065665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:35:40.823168 env[1338]: time="2024-02-08T23:35:40.823147367Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:35:40.823311 env[1338]: time="2024-02-08T23:35:40.823292280Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:35:40.823391 env[1338]: time="2024-02-08T23:35:40.823377088Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:35:40.838596 env[1338]: time="2024-02-08T23:35:40.838550919Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838604924Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838621726Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838674731Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838694433Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838711934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838732 env[1338]: time="2024-02-08T23:35:40.838727636Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838936 env[1338]: time="2024-02-08T23:35:40.838746938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838936 env[1338]: time="2024-02-08T23:35:40.838765139Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838936 env[1338]: time="2024-02-08T23:35:40.838793142Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838936 env[1338]: time="2024-02-08T23:35:40.838811744Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.838936 env[1338]: time="2024-02-08T23:35:40.838830245Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:35:40.839105 env[1338]: time="2024-02-08T23:35:40.838968859Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:35:40.839105 env[1338]: time="2024-02-08T23:35:40.839065468Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:35:40.839431 env[1338]: time="2024-02-08T23:35:40.839404100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:35:40.839557 env[1338]: time="2024-02-08T23:35:40.839541513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.839629 env[1338]: time="2024-02-08T23:35:40.839616320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:35:40.839738 env[1338]: time="2024-02-08T23:35:40.839722530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 8 23:35:40.839818 env[1338]: time="2024-02-08T23:35:40.839804637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.839930 env[1338]: time="2024-02-08T23:35:40.839917948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.840004 env[1338]: time="2024-02-08T23:35:40.839990655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.840074 env[1338]: time="2024-02-08T23:35:40.840062162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.840160 env[1338]: time="2024-02-08T23:35:40.840147270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.840224 env[1338]: time="2024-02-08T23:35:40.840211676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.840300 env[1338]: time="2024-02-08T23:35:40.840286083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840372191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840514004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840533306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840549008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840563909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840583811Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840600312Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840623315Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:35:40.841457 env[1338]: time="2024-02-08T23:35:40.840663918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 8 23:35:40.841873 env[1338]: time="2024-02-08T23:35:40.840924443Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:35:40.841873 env[1338]: time="2024-02-08T23:35:40.840996150Z" level=info msg="Connect containerd service" Feb 8 23:35:40.841873 env[1338]: time="2024-02-08T23:35:40.841039954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.842400282Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.842673708Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.842721812Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.846177738Z" level=info msg="containerd successfully booted in 0.158566s" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847398353Z" level=info msg="Start subscribing containerd event" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847460359Z" level=info msg="Start recovering state" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847543267Z" level=info msg="Start event monitor" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847560569Z" level=info msg="Start snapshots syncer" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847573570Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:35:40.876443 env[1338]: time="2024-02-08T23:35:40.847588071Z" level=info msg="Start streaming server" Feb 8 23:35:40.842838 systemd[1]: Started containerd.service. Feb 8 23:35:40.867464 systemd[1]: nvidia.service: Deactivated successfully. Feb 8 23:35:40.880733 tar[1332]: ./bandwidth Feb 8 23:35:40.983269 tar[1332]: ./ptp Feb 8 23:35:41.070090 tar[1332]: ./vlan Feb 8 23:35:41.114556 tar[1332]: ./host-device Feb 8 23:35:41.158034 tar[1332]: ./tuning Feb 8 23:35:41.197794 tar[1332]: ./vrf Feb 8 23:35:41.237964 tar[1332]: ./sbr Feb 8 23:35:41.292079 tar[1332]: ./tap Feb 8 23:35:41.375534 tar[1332]: ./dhcp Feb 8 23:35:41.421667 update_engine[1328]: I0208 23:35:41.403621 1328 main.cc:92] Flatcar Update Engine starting Feb 8 23:35:41.475845 systemd[1]: Started update-engine.service. Feb 8 23:35:41.484377 update_engine[1328]: I0208 23:35:41.475914 1328 update_check_scheduler.cc:74] Next update check in 9m37s Feb 8 23:35:41.481065 systemd[1]: Started locksmithd.service. Feb 8 23:35:41.614415 tar[1332]: ./static Feb 8 23:35:41.676342 tar[1332]: ./firewall Feb 8 23:35:41.776062 tar[1332]: ./macvlan Feb 8 23:35:41.864268 tar[1332]: ./dummy Feb 8 23:35:41.945714 tar[1334]: linux-amd64/LICENSE Feb 8 23:35:41.946092 tar[1334]: linux-amd64/README.md Feb 8 23:35:41.951875 systemd[1]: Finished prepare-helm.service. Feb 8 23:35:41.956749 tar[1332]: ./bridge Feb 8 23:35:41.997521 systemd[1]: Finished prepare-critools.service. Feb 8 23:35:42.027272 tar[1332]: ./ipvlan Feb 8 23:35:42.071509 tar[1332]: ./portmap Feb 8 23:35:42.113508 tar[1332]: ./host-local Feb 8 23:35:42.194450 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:35:43.148698 sshd_keygen[1337]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:35:43.168591 systemd[1]: Finished sshd-keygen.service. Feb 8 23:35:43.172796 systemd[1]: Starting issuegen.service... Feb 8 23:35:43.176340 systemd[1]: Started waagent.service. Feb 8 23:35:43.179704 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:35:43.179931 systemd[1]: Finished issuegen.service. Feb 8 23:35:43.183959 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:35:43.190249 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:35:43.194236 systemd[1]: Started getty@tty1.service. Feb 8 23:35:43.197634 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:35:43.200045 systemd[1]: Reached target getty.target. Feb 8 23:35:43.202112 systemd[1]: Reached target multi-user.target. Feb 8 23:35:43.205805 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:35:43.214774 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:35:43.214937 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 8 23:35:43.217816 systemd[1]: Startup finished in 882ms (firmware) + 27.641s (loader) + 945ms (kernel) + 20.019s (initrd) + 24.482s (userspace) = 1min 13.972s. Feb 8 23:35:43.430677 locksmithd[1419]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:35:43.615516 login[1442]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:35:43.615694 login[1441]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:35:43.641983 systemd[1]: Created slice user-500.slice. Feb 8 23:35:43.643470 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:35:43.647228 systemd-logind[1327]: New session 2 of user core. Feb 8 23:35:43.651449 systemd-logind[1327]: New session 1 of user core. Feb 8 23:35:43.655293 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:35:43.656929 systemd[1]: Starting user@500.service... Feb 8 23:35:43.660206 (systemd)[1445]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:35:43.895501 systemd[1445]: Queued start job for default target default.target. Feb 8 23:35:43.896212 systemd[1445]: Reached target paths.target. Feb 8 23:35:43.896253 systemd[1445]: Reached target sockets.target. Feb 8 23:35:43.896273 systemd[1445]: Reached target timers.target. Feb 8 23:35:43.896292 systemd[1445]: Reached target basic.target. Feb 8 23:35:43.896364 systemd[1445]: Reached target default.target. Feb 8 23:35:43.896408 systemd[1445]: Startup finished in 230ms. Feb 8 23:35:43.896450 systemd[1]: Started user@500.service. Feb 8 23:35:43.897970 systemd[1]: Started session-1.scope. Feb 8 23:35:43.898754 systemd[1]: Started session-2.scope. Feb 8 23:35:49.551215 waagent[1436]: 2024-02-08T23:35:49.551092Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:35:49.555904 waagent[1436]: 2024-02-08T23:35:49.555828Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:35:49.558814 waagent[1436]: 2024-02-08T23:35:49.558756Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:35:49.561463 waagent[1436]: 2024-02-08T23:35:49.561395Z INFO Daemon Daemon Run daemon Feb 8 23:35:49.563935 waagent[1436]: 2024-02-08T23:35:49.563874Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:35:49.576534 waagent[1436]: 2024-02-08T23:35:49.576421Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
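As a quick sanity check on the "Startup finished" line above, the per-stage times do sum to the reported total within the rounding of the individual components:

# Per-stage boot times reported above, in seconds.
stages = {
    "firmware": 0.882,
    "loader": 27.641,
    "kernel": 0.945,
    "initrd": 20.019,
    "userspace": 24.482,
}

total = sum(stages.values())
print(f"sum of stages: {total:.3f}s")   # ~73.969s
print("reported total: 73.972s (1min 13.972s); the few-ms gap is per-stage rounding")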
Feb 8 23:35:49.583766 waagent[1436]: 2024-02-08T23:35:49.583664Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.584036Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.584955Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.586390Z INFO Daemon Daemon Activate resource disk Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.587278Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.594855Z INFO Daemon Daemon Found device: None Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.595532Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.596404Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.598150Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:35:49.614553 waagent[1436]: 2024-02-08T23:35:49.599624Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:35:49.617070 waagent[1436]: 2024-02-08T23:35:49.616951Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:35:49.624367 waagent[1436]: 2024-02-08T23:35:49.624266Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:35:49.624709 waagent[1436]: 2024-02-08T23:35:49.624657Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:35:49.625655 waagent[1436]: 2024-02-08T23:35:49.625605Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:35:49.663607 waagent[1436]: 2024-02-08T23:35:49.663485Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:35:49.787234 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:35:49.793660 waagent[1436]: 2024-02-08T23:35:49.793526Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:35:49.810233 waagent[1436]: 2024-02-08T23:35:49.794069Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:35:49.810233 waagent[1436]: 2024-02-08T23:35:49.795380Z INFO Daemon Daemon WireServer endpoint is not found. 
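The two "Unable to get cloud-init enabled status" messages above show the daemon trying systemctl first and then a legacy 'service' invocation before concluding cloud-init is disabled. A rough sketch of that probe, assuming it reduces to the commands quoted in the log (the 'service' arguments are a guess; only the systemctl command is quoted verbatim):

import subprocess

def cloud_init_enabled() -> bool:
    """Sketch of the probe implied by the log: systemctl first, then 'service'."""
    try:
        subprocess.run(
            ["systemctl", "is-enabled", "cloud-init-local.service"],
            check=True, capture_output=True,
        )
        return True
    except (subprocess.CalledProcessError, FileNotFoundError):
        pass
    try:
        # Assumed fallback; on this host it fails with
        # "[Errno 2] No such file or directory: 'service'".
        subprocess.run(["service", "cloud-init", "status"], check=True, capture_output=True)
        return True
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False

print("cloud-init is enabled:", cloud_init_enabled())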
Rerun dhcp handler Feb 8 23:35:49.810233 waagent[1436]: 2024-02-08T23:35:49.796410Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:35:49.810233 waagent[1436]: 2024-02-08T23:35:49.797657Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:35:49.810233 waagent[1436]: 2024-02-08T23:35:49.798535Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:35:49.879522 waagent[1436]: 2024-02-08T23:35:49.879448Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:35:49.887836 waagent[1436]: 2024-02-08T23:35:49.880358Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:35:49.887836 waagent[1436]: 2024-02-08T23:35:49.881429Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:35:50.275792 waagent[1436]: 2024-02-08T23:35:50.275587Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:35:50.287368 waagent[1436]: 2024-02-08T23:35:50.287294Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:35:50.290660 waagent[1436]: 2024-02-08T23:35:50.290595Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:35:50.369298 waagent[1436]: 2024-02-08T23:35:50.369179Z INFO Daemon Daemon Found private key matching thumbprint 8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87 Feb 8 23:35:50.380953 waagent[1436]: 2024-02-08T23:35:50.369682Z INFO Daemon Daemon Certificate with thumbprint 1FE58736EA834DD6DE3C93D13DC96075354DB6B6 has no matching private key. Feb 8 23:35:50.380953 waagent[1436]: 2024-02-08T23:35:50.370908Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:35:50.421648 waagent[1436]: 2024-02-08T23:35:50.421558Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 7d86e0de-968c-446d-b478-e97544988219 New eTag: 7028495161634228396] Feb 8 23:35:50.430508 waagent[1436]: 2024-02-08T23:35:50.422641Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:35:50.431808 waagent[1436]: 2024-02-08T23:35:50.431747Z INFO Daemon Daemon Starting provisioning Feb 8 23:35:50.438914 waagent[1436]: 2024-02-08T23:35:50.432051Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:35:50.438914 waagent[1436]: 2024-02-08T23:35:50.433012Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-baa4ff5fd1] Feb 8 23:35:50.452259 waagent[1436]: 2024-02-08T23:35:50.452156Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-baa4ff5fd1] Feb 8 23:35:50.460387 waagent[1436]: 2024-02-08T23:35:50.452747Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:35:50.460387 waagent[1436]: 2024-02-08T23:35:50.453854Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:35:50.466824 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:35:50.467090 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:35:50.467182 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:35:50.467522 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:35:50.473161 systemd-networkd[1194]: eth0: DHCPv6 lease lost Feb 8 23:35:50.474406 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:35:50.474593 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:35:50.476800 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:35:50.507241 systemd-networkd[1490]: enP16762s1: Link UP Feb 8 23:35:50.507250 systemd-networkd[1490]: enP16762s1: Gained carrier Feb 8 23:35:50.508532 systemd-networkd[1490]: eth0: Link UP Feb 8 23:35:50.508541 systemd-networkd[1490]: eth0: Gained carrier Feb 8 23:35:50.508955 systemd-networkd[1490]: lo: Link UP Feb 8 23:35:50.508964 systemd-networkd[1490]: lo: Gained carrier Feb 8 23:35:50.509361 systemd-networkd[1490]: eth0: Gained IPv6LL Feb 8 23:35:50.509635 systemd-networkd[1490]: Enumeration completed Feb 8 23:35:50.513537 waagent[1436]: 2024-02-08T23:35:50.510918Z INFO Daemon Daemon Create user account if not exists Feb 8 23:35:50.513537 waagent[1436]: 2024-02-08T23:35:50.511548Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:35:50.513537 waagent[1436]: 2024-02-08T23:35:50.512434Z INFO Daemon Daemon Configure sudoer Feb 8 23:35:50.509733 systemd[1]: Started systemd-networkd.service. Feb 8 23:35:50.514094 waagent[1436]: 2024-02-08T23:35:50.514038Z INFO Daemon Daemon Configure sshd Feb 8 23:35:50.514925 waagent[1436]: 2024-02-08T23:35:50.514874Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:35:50.518943 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:35:50.525190 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:35:50.553210 systemd-networkd[1490]: eth0: DHCPv4 address 10.200.8.36/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:35:50.557080 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:35:50.565535 waagent[1436]: 2024-02-08T23:35:50.565426Z INFO Daemon Daemon Decode custom data Feb 8 23:35:50.568460 waagent[1436]: 2024-02-08T23:35:50.568390Z INFO Daemon Daemon Save custom data Feb 8 23:35:51.778571 waagent[1436]: 2024-02-08T23:35:51.778480Z INFO Daemon Daemon Provisioning complete Feb 8 23:35:51.793933 waagent[1436]: 2024-02-08T23:35:51.793856Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:35:51.797363 waagent[1436]: 2024-02-08T23:35:51.797291Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:35:51.801920 waagent[1436]: 2024-02-08T23:35:51.798418Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:35:52.067442 waagent[1499]: 2024-02-08T23:35:52.066861Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:35:52.067854 waagent[1499]: 2024-02-08T23:35:52.067781Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:52.068014 waagent[1499]: 2024-02-08T23:35:52.067962Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:52.079060 waagent[1499]: 2024-02-08T23:35:52.078983Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:35:52.079235 waagent[1499]: 2024-02-08T23:35:52.079176Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:35:52.137962 waagent[1499]: 2024-02-08T23:35:52.137835Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87 Feb 8 23:35:52.138216 waagent[1499]: 2024-02-08T23:35:52.138157Z INFO ExtHandler ExtHandler Certificate with thumbprint 1FE58736EA834DD6DE3C93D13DC96075354DB6B6 has no matching private key. 
Feb 8 23:35:52.138465 waagent[1499]: 2024-02-08T23:35:52.138414Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:35:52.152085 waagent[1499]: 2024-02-08T23:35:52.152022Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a86a330e-cc15-4ab8-9a22-ac1b0f29af85 New eTag: 7028495161634228396] Feb 8 23:35:52.152651 waagent[1499]: 2024-02-08T23:35:52.152593Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:35:52.455392 waagent[1499]: 2024-02-08T23:35:52.455167Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:35:52.479626 waagent[1499]: 2024-02-08T23:35:52.479512Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1499 Feb 8 23:35:52.483251 waagent[1499]: 2024-02-08T23:35:52.483183Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:35:52.484521 waagent[1499]: 2024-02-08T23:35:52.484460Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:35:52.557272 waagent[1499]: 2024-02-08T23:35:52.557193Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:35:52.557768 waagent[1499]: 2024-02-08T23:35:52.557690Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:35:52.566306 waagent[1499]: 2024-02-08T23:35:52.566251Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:35:52.566774 waagent[1499]: 2024-02-08T23:35:52.566712Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:35:52.567846 waagent[1499]: 2024-02-08T23:35:52.567772Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:35:52.569144 waagent[1499]: 2024-02-08T23:35:52.569071Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:35:52.570026 waagent[1499]: 2024-02-08T23:35:52.569969Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:52.570270 waagent[1499]: 2024-02-08T23:35:52.570214Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:52.570430 waagent[1499]: 2024-02-08T23:35:52.570381Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:52.570610 waagent[1499]: 2024-02-08T23:35:52.570538Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:35:52.571179 waagent[1499]: 2024-02-08T23:35:52.571102Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 8 23:35:52.571820 waagent[1499]: 2024-02-08T23:35:52.571760Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:35:52.572090 waagent[1499]: 2024-02-08T23:35:52.572038Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:52.572203 waagent[1499]: 2024-02-08T23:35:52.572140Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:35:52.572203 waagent[1499]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:35:52.572203 waagent[1499]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:35:52.572203 waagent[1499]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:35:52.572203 waagent[1499]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:52.572203 waagent[1499]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:52.572203 waagent[1499]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:52.572467 waagent[1499]: 2024-02-08T23:35:52.572413Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:35:52.573354 waagent[1499]: 2024-02-08T23:35:52.573295Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:35:52.575869 waagent[1499]: 2024-02-08T23:35:52.575629Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:35:52.576769 waagent[1499]: 2024-02-08T23:35:52.576703Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:35:52.577264 waagent[1499]: 2024-02-08T23:35:52.577193Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:35:52.577678 waagent[1499]: 2024-02-08T23:35:52.577624Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:35:52.579775 waagent[1499]: 2024-02-08T23:35:52.579718Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:35:52.589959 waagent[1499]: 2024-02-08T23:35:52.589894Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:35:52.590630 waagent[1499]: 2024-02-08T23:35:52.590578Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:35:52.592900 waagent[1499]: 2024-02-08T23:35:52.592847Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:35:52.617728 waagent[1499]: 2024-02-08T23:35:52.617628Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1490' Feb 8 23:35:52.640796 waagent[1499]: 2024-02-08T23:35:52.640727Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
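The routing table that MonitorHandler dumps above is read straight from /proc/net/route, where destinations and gateways are little-endian hex. Decoding the exact values shown: 0108C80A is the 10.200.8.1 default gateway, 10813FA8 is the host route to the 168.63.129.16 wireserver, and FEA9FEA9 is 169.254.169.254 (the instance metadata address).

import socket
import struct

def hex_to_ip(h: str) -> str:
    """Decode a little-endian hex address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

# Destination/gateway pairs from the table logged above.
rows = [
    ("00000000", "0108C80A"),  # default route via 10.200.8.1
    ("0008C80A", "00000000"),  # 10.200.8.0/24 on-link (mask 00FFFFFF = 255.255.255.0)
    ("0108C80A", "00000000"),  # host route to the gateway itself
    ("10813FA8", "0108C80A"),  # 168.63.129.16 (wireserver) via the gateway
    ("FEA9FEA9", "0108C80A"),  # 169.254.169.254 (metadata) via the gateway
]

for dst, gw in rows:
    print(f"dst {hex_to_ip(dst):<15} gw {hex_to_ip(gw)}")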
Feb 8 23:35:52.725923 waagent[1499]: 2024-02-08T23:35:52.725765Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:35:52.725923 waagent[1499]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:35:52.725923 waagent[1499]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:35:52.725923 waagent[1499]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:99:a3:a6 brd ff:ff:ff:ff:ff:ff Feb 8 23:35:52.725923 waagent[1499]: 3: enP16762s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:99:a3:a6 brd ff:ff:ff:ff:ff:ff\ altname enP16762p0s2 Feb 8 23:35:52.725923 waagent[1499]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:35:52.725923 waagent[1499]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:35:52.725923 waagent[1499]: 2: eth0 inet 10.200.8.36/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:35:52.725923 waagent[1499]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:35:52.725923 waagent[1499]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:35:52.725923 waagent[1499]: 2: eth0 inet6 fe80::222:48ff:fe99:a3a6/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:35:52.963107 waagent[1499]: 2024-02-08T23:35:52.963040Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:35:53.802699 waagent[1436]: 2024-02-08T23:35:53.802517Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:35:53.808793 waagent[1436]: 2024-02-08T23:35:53.808733Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:35:54.813096 waagent[1537]: 2024-02-08T23:35:54.812986Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:35:54.813807 waagent[1537]: 2024-02-08T23:35:54.813740Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:35:54.813960 waagent[1537]: 2024-02-08T23:35:54.813906Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:35:54.823589 waagent[1537]: 2024-02-08T23:35:54.823493Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:35:54.823957 waagent[1537]: 2024-02-08T23:35:54.823900Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:54.824128 waagent[1537]: 2024-02-08T23:35:54.824068Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:54.835439 waagent[1537]: 2024-02-08T23:35:54.835367Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:35:54.844178 waagent[1537]: 2024-02-08T23:35:54.844107Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:35:54.845038 waagent[1537]: 2024-02-08T23:35:54.844979Z INFO ExtHandler Feb 8 23:35:54.845198 waagent[1537]: 2024-02-08T23:35:54.845148Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b07d02d0-befa-4e15-bb74-102e33644fd2 eTag: 7028495161634228396 source: Fabric] Feb 8 23:35:54.845873 waagent[1537]: 2024-02-08T23:35:54.845816Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 8 23:35:54.846952 waagent[1537]: 2024-02-08T23:35:54.846873Z INFO ExtHandler Feb 8 23:35:54.847093 waagent[1537]: 2024-02-08T23:35:54.847041Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:35:54.869084 waagent[1537]: 2024-02-08T23:35:54.869030Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:35:54.869512 waagent[1537]: 2024-02-08T23:35:54.869462Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:35:54.889653 waagent[1537]: 2024-02-08T23:35:54.889593Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:35:54.955258 waagent[1537]: 2024-02-08T23:35:54.955102Z INFO ExtHandler Downloaded certificate {'thumbprint': '8D6A134EEC45F6CEEBA7B11F708F64A7D9E19C87', 'hasPrivateKey': True} Feb 8 23:35:54.956198 waagent[1537]: 2024-02-08T23:35:54.956117Z INFO ExtHandler Downloaded certificate {'thumbprint': '1FE58736EA834DD6DE3C93D13DC96075354DB6B6', 'hasPrivateKey': False} Feb 8 23:35:54.957144 waagent[1537]: 2024-02-08T23:35:54.957070Z INFO ExtHandler Fetch goal state completed Feb 8 23:35:54.976765 waagent[1537]: 2024-02-08T23:35:54.976696Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1537 Feb 8 23:35:54.980073 waagent[1537]: 2024-02-08T23:35:54.980008Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:35:54.981544 waagent[1537]: 2024-02-08T23:35:54.981486Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:35:54.986516 waagent[1537]: 2024-02-08T23:35:54.986463Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:35:54.986870 waagent[1537]: 2024-02-08T23:35:54.986806Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:35:54.994891 waagent[1537]: 2024-02-08T23:35:54.994832Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:35:54.995402 waagent[1537]: 2024-02-08T23:35:54.995343Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:35:55.001763 waagent[1537]: 2024-02-08T23:35:55.001667Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 8 23:35:55.006435 waagent[1537]: 2024-02-08T23:35:55.006375Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:35:55.007824 waagent[1537]: 2024-02-08T23:35:55.007766Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:35:55.008268 waagent[1537]: 2024-02-08T23:35:55.008209Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:55.008610 waagent[1537]: 2024-02-08T23:35:55.008554Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:55.009145 waagent[1537]: 2024-02-08T23:35:55.009074Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 8 23:35:55.009433 waagent[1537]: 2024-02-08T23:35:55.009376Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:35:55.009433 waagent[1537]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:35:55.009433 waagent[1537]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:35:55.009433 waagent[1537]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:35:55.009433 waagent[1537]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.009433 waagent[1537]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.009433 waagent[1537]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:35:55.012140 waagent[1537]: 2024-02-08T23:35:55.011995Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:35:55.013378 waagent[1537]: 2024-02-08T23:35:55.013317Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:35:55.013641 waagent[1537]: 2024-02-08T23:35:55.013586Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:35:55.017893 waagent[1537]: 2024-02-08T23:35:55.017743Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:35:55.018115 waagent[1537]: 2024-02-08T23:35:55.018038Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:35:55.018506 waagent[1537]: 2024-02-08T23:35:55.018431Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:35:55.019313 waagent[1537]: 2024-02-08T23:35:55.019244Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:35:55.019562 waagent[1537]: 2024-02-08T23:35:55.019499Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:35:55.022044 waagent[1537]: 2024-02-08T23:35:55.021768Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:35:55.022514 waagent[1537]: 2024-02-08T23:35:55.022452Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:35:55.022920 waagent[1537]: 2024-02-08T23:35:55.022853Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:35:55.025190 waagent[1537]: 2024-02-08T23:35:55.025116Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:35:55.025190 waagent[1537]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:35:55.025190 waagent[1537]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:35:55.025190 waagent[1537]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:99:a3:a6 brd ff:ff:ff:ff:ff:ff Feb 8 23:35:55.025190 waagent[1537]: 3: enP16762s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:99:a3:a6 brd ff:ff:ff:ff:ff:ff\ altname enP16762p0s2 Feb 8 23:35:55.025190 waagent[1537]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:35:55.025190 waagent[1537]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:35:55.025190 waagent[1537]: 2: eth0 inet 10.200.8.36/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:35:55.025190 waagent[1537]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:35:55.025190 waagent[1537]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:35:55.025190 waagent[1537]: 2: eth0 inet6 fe80::222:48ff:fe99:a3a6/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:35:55.049959 waagent[1537]: 2024-02-08T23:35:55.049884Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:35:55.051564 waagent[1537]: 2024-02-08T23:35:55.051505Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:35:55.127720 waagent[1537]: 2024-02-08T23:35:55.127577Z INFO ExtHandler ExtHandler Feb 8 23:35:55.128093 waagent[1537]: 2024-02-08T23:35:55.128026Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a8f36967-c1b9-4d67-b985-150a18e5feba correlation bed9acb5-b9da-479f-86fc-9e1841ec84da created: 2024-02-08T23:34:20.378281Z] Feb 8 23:35:55.129460 waagent[1537]: 2024-02-08T23:35:55.129386Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:35:55.134143 waagent[1537]: 2024-02-08T23:35:55.134027Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Feb 8 23:35:55.155388 waagent[1537]: 2024-02-08T23:35:55.155286Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 8 23:35:55.155388 waagent[1537]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.155388 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.155388 waagent[1537]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.155388 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.155388 waagent[1537]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.155388 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.155388 waagent[1537]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:35:55.155388 waagent[1537]: 5 2645 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:35:55.155388 waagent[1537]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:35:55.162376 waagent[1537]: 2024-02-08T23:35:55.162274Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:35:55.162376 waagent[1537]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.162376 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.162376 waagent[1537]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.162376 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.162376 waagent[1537]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:35:55.162376 waagent[1537]: pkts bytes target prot opt in out source destination Feb 8 23:35:55.162376 waagent[1537]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:35:55.162376 waagent[1537]: 5 2645 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:35:55.162376 waagent[1537]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:35:55.163578 waagent[1537]: 2024-02-08T23:35:55.163521Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:35:55.164738 waagent[1537]: 2024-02-08T23:35:55.164683Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:35:55.173824 waagent[1537]: 2024-02-08T23:35:55.173754Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 111B7791-513D-4273-B475-C81FF8767707;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:36:19.472934 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:36:26.806972 update_engine[1328]: I0208 23:36:26.806882 1328 update_attempter.cc:509] Updating boot flags... Feb 8 23:36:28.599019 systemd[1]: Created slice system-sshd.slice. Feb 8 23:36:28.600880 systemd[1]: Started sshd@0-10.200.8.36:22-10.200.12.6:60206.service. Feb 8 23:36:29.432997 sshd[1648]: Accepted publickey for core from 10.200.12.6 port 60206 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:36:29.434632 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:36:29.438911 systemd-logind[1327]: New session 3 of user core. Feb 8 23:36:29.440839 systemd[1]: Started session-3.scope. Feb 8 23:36:29.971500 systemd[1]: Started sshd@1-10.200.8.36:22-10.200.12.6:60212.service. Feb 8 23:36:30.589250 sshd[1653]: Accepted publickey for core from 10.200.12.6 port 60212 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:36:30.590874 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:36:30.596344 systemd[1]: Started session-4.scope. 
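The OUTPUT-chain listing above shows the three rules the agent programs for the wireserver address 168.63.129.16: allow TCP to port 53, allow traffic owned by UID 0, and drop new or invalid connections from everything else. A hedged sketch of equivalent iptables invocations, driven from Python to stay in one language; the agent's own internal invocation details may differ.

import subprocess

WIRESERVER = "168.63.129.16"

# Rules equivalent to the OUTPUT chain listing above; a sketch only.
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in rules:
    subprocess.run(["iptables", "-w", *rule], check=True)

# Listing with packet/byte counters reproduces the format seen in the log.
subprocess.run(["iptables", "-w", "-L", "OUTPUT", "-n", "-v"], check=True)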
Feb 8 23:36:30.596926 systemd-logind[1327]: New session 4 of user core. Feb 8 23:36:31.039185 sshd[1653]: pam_unix(sshd:session): session closed for user core Feb 8 23:36:31.042532 systemd[1]: sshd@1-10.200.8.36:22-10.200.12.6:60212.service: Deactivated successfully. Feb 8 23:36:31.043552 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:36:31.044383 systemd-logind[1327]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:36:31.045272 systemd-logind[1327]: Removed session 4. Feb 8 23:36:31.142685 systemd[1]: Started sshd@2-10.200.8.36:22-10.200.12.6:60220.service. Feb 8 23:36:31.769233 sshd[1659]: Accepted publickey for core from 10.200.12.6 port 60220 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:36:31.770906 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:36:31.775982 systemd[1]: Started session-5.scope. Feb 8 23:36:31.776586 systemd-logind[1327]: New session 5 of user core. Feb 8 23:36:32.203620 sshd[1659]: pam_unix(sshd:session): session closed for user core Feb 8 23:36:32.206874 systemd[1]: sshd@2-10.200.8.36:22-10.200.12.6:60220.service: Deactivated successfully. Feb 8 23:36:32.208241 systemd-logind[1327]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:36:32.208333 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:36:32.209656 systemd-logind[1327]: Removed session 5. Feb 8 23:36:32.308099 systemd[1]: Started sshd@3-10.200.8.36:22-10.200.12.6:60222.service. Feb 8 23:36:32.929330 sshd[1668]: Accepted publickey for core from 10.200.12.6 port 60222 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:36:32.930793 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:36:32.936449 systemd[1]: Started session-6.scope. Feb 8 23:36:32.936997 systemd-logind[1327]: New session 6 of user core. Feb 8 23:36:33.369014 sshd[1668]: pam_unix(sshd:session): session closed for user core Feb 8 23:36:33.372250 systemd[1]: sshd@3-10.200.8.36:22-10.200.12.6:60222.service: Deactivated successfully. Feb 8 23:36:33.373249 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:36:33.374026 systemd-logind[1327]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:36:33.374936 systemd-logind[1327]: Removed session 6. Feb 8 23:36:33.473657 systemd[1]: Started sshd@4-10.200.8.36:22-10.200.12.6:60224.service. Feb 8 23:36:34.104976 sshd[1674]: Accepted publickey for core from 10.200.12.6 port 60224 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:36:34.106646 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:36:34.112233 systemd-logind[1327]: New session 7 of user core. Feb 8 23:36:34.112268 systemd[1]: Started session-7.scope. Feb 8 23:36:34.667542 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:36:34.667845 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:36:35.712008 systemd[1]: Starting docker.service... 
Feb 8 23:36:35.766813 env[1692]: time="2024-02-08T23:36:35.766765399Z" level=info msg="Starting up" Feb 8 23:36:35.768034 env[1692]: time="2024-02-08T23:36:35.768003602Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:36:35.768034 env[1692]: time="2024-02-08T23:36:35.768024202Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:36:35.768948 env[1692]: time="2024-02-08T23:36:35.768050902Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:36:35.768948 env[1692]: time="2024-02-08T23:36:35.768064702Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:36:35.770015 env[1692]: time="2024-02-08T23:36:35.769987708Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:36:35.770015 env[1692]: time="2024-02-08T23:36:35.770006108Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:36:35.770172 env[1692]: time="2024-02-08T23:36:35.770023508Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:36:35.770172 env[1692]: time="2024-02-08T23:36:35.770035608Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:36:35.775618 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2144740085-merged.mount: Deactivated successfully. Feb 8 23:36:35.853347 env[1692]: time="2024-02-08T23:36:35.853301933Z" level=info msg="Loading containers: start." Feb 8 23:36:35.956144 kernel: Initializing XFRM netlink socket Feb 8 23:36:35.980593 env[1692]: time="2024-02-08T23:36:35.980475278Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:36:36.103783 systemd-networkd[1490]: docker0: Link UP Feb 8 23:36:36.123067 env[1692]: time="2024-02-08T23:36:36.123022544Z" level=info msg="Loading containers: done." Feb 8 23:36:36.135473 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1212607550-merged.mount: Deactivated successfully. Feb 8 23:36:36.141750 env[1692]: time="2024-02-08T23:36:36.141700791Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:36:36.141933 env[1692]: time="2024-02-08T23:36:36.141909592Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:36:36.142037 env[1692]: time="2024-02-08T23:36:36.142017192Z" level=info msg="Daemon has completed initialization" Feb 8 23:36:36.174134 systemd[1]: Started docker.service. Feb 8 23:36:36.182899 env[1692]: time="2024-02-08T23:36:36.182840796Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:36:36.200309 systemd[1]: Reloading. 
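The docker daemon above notes that docker0 gets the default 172.17.0.0/16 range and that the --bip option can override it. For illustration, the same setting can be made persistent through /etc/docker/daemon.json; the 172.30.0.1/16 value below is an assumption, not taken from this host.

import json, pathlib

# Hypothetical example of pinning the docker0 bridge address; "bip" in
# daemon.json is the persistent equivalent of the --bip daemon option.
daemon_json = pathlib.Path("/etc/docker/daemon.json")
config = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
config["bip"] = "172.30.0.1/16"   # illustrative value
daemon_json.write_text(json.dumps(config, indent=2))
print("wrote", daemon_json)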
Feb 8 23:36:36.272421 /usr/lib/systemd/system-generators/torcx-generator[1821]: time="2024-02-08T23:36:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:36:36.272459 /usr/lib/systemd/system-generators/torcx-generator[1821]: time="2024-02-08T23:36:36Z" level=info msg="torcx already run" Feb 8 23:36:36.365351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:36:36.365372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:36:36.383087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:36:36.467574 systemd[1]: Started kubelet.service. Feb 8 23:36:36.538216 kubelet[1883]: E0208 23:36:36.538066 1883 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:36:36.539968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:36:36.540145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:36:41.055928 env[1338]: time="2024-02-08T23:36:41.055875662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 8 23:36:41.790731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3965122635.mount: Deactivated successfully. 
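The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, and the same failure repeats on the scheduled restarts below until the node is joined to a cluster; on kubeadm-managed nodes that file is normally written by 'kubeadm init' or 'kubeadm join'. For illustration only, the general shape of such a file, emitted from Python so the example stays self-contained; the field values are assumptions, not the configuration this node eventually receives.

# Illustrative only: the general shape of /var/lib/kubelet/config.yaml on a
# kubeadm-managed node.  Field values below are assumptions.
minimal_kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""
print(minimal_kubelet_config)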
Feb 8 23:36:43.735454 env[1338]: time="2024-02-08T23:36:43.735390078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:43.741714 env[1338]: time="2024-02-08T23:36:43.741660685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:43.746382 env[1338]: time="2024-02-08T23:36:43.746341166Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:43.750492 env[1338]: time="2024-02-08T23:36:43.750452936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:43.751111 env[1338]: time="2024-02-08T23:36:43.751077247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 8 23:36:43.762152 env[1338]: time="2024-02-08T23:36:43.762094736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 8 23:36:45.914213 env[1338]: time="2024-02-08T23:36:45.914146367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:45.920262 env[1338]: time="2024-02-08T23:36:45.919547955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:45.923435 env[1338]: time="2024-02-08T23:36:45.923061912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:45.927055 env[1338]: time="2024-02-08T23:36:45.926904874Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:45.927778 env[1338]: time="2024-02-08T23:36:45.927744788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 8 23:36:45.938041 env[1338]: time="2024-02-08T23:36:45.938015755Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 8 23:36:46.545778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:36:46.546097 systemd[1]: Stopped kubelet.service. Feb 8 23:36:46.548051 systemd[1]: Started kubelet.service. 
Feb 8 23:36:46.615323 kubelet[1912]: E0208 23:36:46.615263 1912 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:36:46.618050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:36:46.618221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:36:47.207220 env[1338]: time="2024-02-08T23:36:47.207160750Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:47.214453 env[1338]: time="2024-02-08T23:36:47.214407962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:47.219976 env[1338]: time="2024-02-08T23:36:47.219938947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:47.224031 env[1338]: time="2024-02-08T23:36:47.223998209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:47.224657 env[1338]: time="2024-02-08T23:36:47.224622419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 8 23:36:47.235372 env[1338]: time="2024-02-08T23:36:47.235337684Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 8 23:36:48.306709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount598733937.mount: Deactivated successfully. 
Feb 8 23:36:48.888954 env[1338]: time="2024-02-08T23:36:48.888898538Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:48.892969 env[1338]: time="2024-02-08T23:36:48.892919798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:48.895434 env[1338]: time="2024-02-08T23:36:48.895407636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:48.901061 env[1338]: time="2024-02-08T23:36:48.901031820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:48.901455 env[1338]: time="2024-02-08T23:36:48.901421126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 8 23:36:48.910977 env[1338]: time="2024-02-08T23:36:48.910949268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:36:49.432716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2486492411.mount: Deactivated successfully. Feb 8 23:36:49.457716 env[1338]: time="2024-02-08T23:36:49.457601164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:49.465470 env[1338]: time="2024-02-08T23:36:49.465426078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:49.469103 env[1338]: time="2024-02-08T23:36:49.469008730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:49.474665 env[1338]: time="2024-02-08T23:36:49.474633712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:49.475081 env[1338]: time="2024-02-08T23:36:49.475053718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:36:49.484990 env[1338]: time="2024-02-08T23:36:49.484963962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 8 23:36:49.934288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount755623414.mount: Deactivated successfully. 
Feb 8 23:36:54.563556 env[1338]: time="2024-02-08T23:36:54.563493814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:54.569574 env[1338]: time="2024-02-08T23:36:54.569531990Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:54.572644 env[1338]: time="2024-02-08T23:36:54.572609630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:54.576522 env[1338]: time="2024-02-08T23:36:54.576487779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:54.577310 env[1338]: time="2024-02-08T23:36:54.577279389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 8 23:36:54.587182 env[1338]: time="2024-02-08T23:36:54.587163215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 8 23:36:55.065457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390354650.mount: Deactivated successfully. Feb 8 23:36:55.800972 env[1338]: time="2024-02-08T23:36:55.800907408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:55.807593 env[1338]: time="2024-02-08T23:36:55.807521890Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:55.811362 env[1338]: time="2024-02-08T23:36:55.811324138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:55.815858 env[1338]: time="2024-02-08T23:36:55.815820493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:36:55.816392 env[1338]: time="2024-02-08T23:36:55.816360500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 8 23:36:56.795824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:36:56.796202 systemd[1]: Stopped kubelet.service. Feb 8 23:36:56.798165 systemd[1]: Started kubelet.service. 
Feb 8 23:36:56.874856 kubelet[1945]: E0208 23:36:56.874809 1945 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:36:56.876708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:36:56.876863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:36:58.986523 systemd[1]: Stopped kubelet.service. Feb 8 23:36:59.001098 systemd[1]: Reloading. Feb 8 23:36:59.085073 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-08T23:36:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:36:59.085551 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2024-02-08T23:36:59Z" level=info msg="torcx already run" Feb 8 23:36:59.173564 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:36:59.173586 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:36:59.191547 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:36:59.281832 systemd[1]: Started kubelet.service. Feb 8 23:36:59.334578 kubelet[2091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:36:59.334578 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:36:59.334578 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:36:59.335056 kubelet[2091]: I0208 23:36:59.334615 2091 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:36:59.755843 kubelet[2091]: I0208 23:36:59.755741 2091 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 8 23:36:59.755843 kubelet[2091]: I0208 23:36:59.755779 2091 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:36:59.756215 kubelet[2091]: I0208 23:36:59.756190 2091 server.go:895] "Client rotation is on, will bootstrap in background" Feb 8 23:36:59.760682 kubelet[2091]: E0208 23:36:59.760661 2091 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.763565 kubelet[2091]: I0208 23:36:59.763540 2091 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:36:59.768682 kubelet[2091]: I0208 23:36:59.768660 2091 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:36:59.768934 kubelet[2091]: I0208 23:36:59.768915 2091 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:36:59.769093 kubelet[2091]: I0208 23:36:59.769074 2091 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 8 23:36:59.769262 kubelet[2091]: I0208 23:36:59.769104 2091 topology_manager.go:138] "Creating topology manager with none policy" Feb 8 23:36:59.769262 kubelet[2091]: I0208 23:36:59.769116 2091 container_manager_linux.go:301] "Creating device plugin manager" Feb 8 23:36:59.769262 kubelet[2091]: I0208 23:36:59.769241 2091 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:36:59.769394 kubelet[2091]: I0208 23:36:59.769337 2091 kubelet.go:393] "Attempting to sync node with API server" Feb 8 23:36:59.769394 kubelet[2091]: I0208 
23:36:59.769355 2091 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:36:59.769394 kubelet[2091]: I0208 23:36:59.769388 2091 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:36:59.769501 kubelet[2091]: I0208 23:36:59.769410 2091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:36:59.770447 kubelet[2091]: W0208 23:36:59.770404 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.770576 kubelet[2091]: E0208 23:36:59.770565 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.770764 kubelet[2091]: W0208 23:36:59.770727 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-baa4ff5fd1&limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.770864 kubelet[2091]: E0208 23:36:59.770853 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-baa4ff5fd1&limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.771005 kubelet[2091]: I0208 23:36:59.770995 2091 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:36:59.771342 kubelet[2091]: W0208 23:36:59.771328 2091 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:36:59.772161 kubelet[2091]: I0208 23:36:59.772143 2091 server.go:1232] "Started kubelet" Feb 8 23:36:59.778482 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
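The HardEvictionThresholds embedded in the nodeConfig above are the kubelet defaults; expressed in KubeletConfiguration terms they correspond to the evictionHard map sketched below, with the values taken directly from the logged thresholds:

    # evictionHard equivalent of the logged HardEvictionThresholds (sketch)
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"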
Feb 8 23:36:59.778550 kubelet[2091]: E0208 23:36:59.774936 2091 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-baa4ff5fd1.17b20777c86fab17", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-baa4ff5fd1", UID:"ci-3510.3.2-a-baa4ff5fd1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-baa4ff5fd1"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 36, 59, 772103447, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 36, 59, 772103447, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-baa4ff5fd1"}': 'Post "https://10.200.8.36:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.36:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:36:59.778550 kubelet[2091]: I0208 23:36:59.775079 2091 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:36:59.778550 kubelet[2091]: I0208 23:36:59.775337 2091 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 8 23:36:59.778550 kubelet[2091]: I0208 23:36:59.775376 2091 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:36:59.778550 kubelet[2091]: I0208 23:36:59.775968 2091 server.go:462] "Adding debug handlers to kubelet server" Feb 8 23:36:59.778723 kubelet[2091]: E0208 23:36:59.777706 2091 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:36:59.778723 kubelet[2091]: E0208 23:36:59.777724 2091 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:36:59.778904 kubelet[2091]: I0208 23:36:59.778889 2091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:36:59.779613 kubelet[2091]: I0208 23:36:59.779471 2091 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 8 23:36:59.781343 kubelet[2091]: I0208 23:36:59.781322 2091 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:36:59.781425 kubelet[2091]: I0208 23:36:59.781386 2091 reconciler_new.go:29] "Reconciler: start to sync state" Feb 8 23:36:59.781691 kubelet[2091]: E0208 23:36:59.781669 2091 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-baa4ff5fd1\" not found" Feb 8 23:36:59.782728 kubelet[2091]: E0208 23:36:59.782708 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-baa4ff5fd1?timeout=10s\": dial tcp 10.200.8.36:6443: connect: connection refused" interval="200ms" Feb 8 23:36:59.782821 kubelet[2091]: W0208 23:36:59.782785 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.782881 kubelet[2091]: E0208 23:36:59.782835 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.809900 kubelet[2091]: I0208 23:36:59.809874 2091 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 8 23:36:59.812172 kubelet[2091]: I0208 23:36:59.812148 2091 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 8 23:36:59.812666 kubelet[2091]: I0208 23:36:59.812642 2091 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 8 23:36:59.812754 kubelet[2091]: I0208 23:36:59.812680 2091 kubelet.go:2303] "Starting kubelet main sync loop" Feb 8 23:36:59.812754 kubelet[2091]: E0208 23:36:59.812731 2091 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:36:59.813730 kubelet[2091]: W0208 23:36:59.813701 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.813829 kubelet[2091]: E0208 23:36:59.813740 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:36:59.839149 kubelet[2091]: I0208 23:36:59.839112 2091 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:36:59.839149 kubelet[2091]: I0208 23:36:59.839144 2091 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:36:59.839401 kubelet[2091]: I0208 23:36:59.839164 2091 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:36:59.851841 kubelet[2091]: I0208 23:36:59.851811 2091 policy_none.go:49] "None policy: Start" Feb 8 23:36:59.852601 kubelet[2091]: I0208 23:36:59.852580 2091 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:36:59.852601 kubelet[2091]: I0208 23:36:59.852605 2091 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:36:59.859587 systemd[1]: Created slice kubepods.slice. Feb 8 23:36:59.863876 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:36:59.866709 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 8 23:36:59.872754 kubelet[2091]: I0208 23:36:59.872735 2091 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:36:59.873775 kubelet[2091]: I0208 23:36:59.873756 2091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:36:59.875323 kubelet[2091]: E0208 23:36:59.874510 2091 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-baa4ff5fd1\" not found" Feb 8 23:36:59.882901 kubelet[2091]: I0208 23:36:59.882883 2091 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.883260 kubelet[2091]: E0208 23:36:59.883240 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.36:6443/api/v1/nodes\": dial tcp 10.200.8.36:6443: connect: connection refused" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.913585 kubelet[2091]: I0208 23:36:59.913534 2091 topology_manager.go:215] "Topology Admit Handler" podUID="4021bae91e03ae5d8577c58089b5eb7c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.915189 kubelet[2091]: I0208 23:36:59.915167 2091 topology_manager.go:215] "Topology Admit Handler" podUID="3c43594631d665c0a38225266174683a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.916570 kubelet[2091]: I0208 23:36:59.916548 2091 topology_manager.go:215] "Topology Admit Handler" podUID="0b62764cb9db0315b921eac32a107f4a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.923274 systemd[1]: Created slice kubepods-burstable-pod4021bae91e03ae5d8577c58089b5eb7c.slice. Feb 8 23:36:59.932623 systemd[1]: Created slice kubepods-burstable-pod3c43594631d665c0a38225266174683a.slice. Feb 8 23:36:59.936475 systemd[1]: Created slice kubepods-burstable-pod0b62764cb9db0315b921eac32a107f4a.slice. 
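The three Topology Admit Handler entries above are the control-plane static pods picked up from /etc/kubernetes/manifests. Their manifests are not shown in this log; as a rough orientation, a kubeadm-style static pod manifest has the following shape (generic skeleton, every value here is an assumption rather than this node's file):

    # /etc/kubernetes/manifests/kube-scheduler.yaml -- generic skeleton, not this node's actual manifest
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      priorityClassName: system-node-critical     # see the mirror-pod error about this class further down
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.28.1
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig                           # matches the "kubeconfig" hostPath volume attached below
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate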
Feb 8 23:36:59.981991 kubelet[2091]: I0208 23:36:59.981943 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982188 kubelet[2091]: I0208 23:36:59.982000 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982188 kubelet[2091]: I0208 23:36:59.982049 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982188 kubelet[2091]: I0208 23:36:59.982087 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982188 kubelet[2091]: I0208 23:36:59.982150 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982188 kubelet[2091]: I0208 23:36:59.982185 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982472 kubelet[2091]: I0208 23:36:59.982220 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982472 kubelet[2091]: I0208 23:36:59.982258 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.982472 kubelet[2091]: I0208 23:36:59.982296 2091 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b62764cb9db0315b921eac32a107f4a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"0b62764cb9db0315b921eac32a107f4a\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:36:59.984143 kubelet[2091]: E0208 23:36:59.984089 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-baa4ff5fd1?timeout=10s\": dial tcp 10.200.8.36:6443: connect: connection refused" interval="400ms" Feb 8 23:37:00.085092 kubelet[2091]: I0208 23:37:00.085057 2091 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:00.085481 kubelet[2091]: E0208 23:37:00.085452 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.36:6443/api/v1/nodes\": dial tcp 10.200.8.36:6443: connect: connection refused" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:00.231280 env[1338]: time="2024-02-08T23:37:00.230966012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-baa4ff5fd1,Uid:4021bae91e03ae5d8577c58089b5eb7c,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:00.236620 env[1338]: time="2024-02-08T23:37:00.236567073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1,Uid:3c43594631d665c0a38225266174683a,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:00.239668 env[1338]: time="2024-02-08T23:37:00.239605306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-baa4ff5fd1,Uid:0b62764cb9db0315b921eac32a107f4a,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:00.385511 kubelet[2091]: E0208 23:37:00.385398 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-baa4ff5fd1?timeout=10s\": dial tcp 10.200.8.36:6443: connect: connection refused" interval="800ms" Feb 8 23:37:00.487047 kubelet[2091]: I0208 23:37:00.487014 2091 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:00.487355 kubelet[2091]: E0208 23:37:00.487335 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.36:6443/api/v1/nodes\": dial tcp 10.200.8.36:6443: connect: connection refused" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:00.686762 kubelet[2091]: W0208 23:37:00.686627 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-baa4ff5fd1&limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:00.686762 kubelet[2091]: E0208 23:37:00.686701 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-baa4ff5fd1&limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:00.756749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241431912.mount: Deactivated successfully. 
Feb 8 23:37:00.792664 env[1338]: time="2024-02-08T23:37:00.792617332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.795688 env[1338]: time="2024-02-08T23:37:00.795650965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.807721 env[1338]: time="2024-02-08T23:37:00.807681796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.812026 env[1338]: time="2024-02-08T23:37:00.811989043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.814917 env[1338]: time="2024-02-08T23:37:00.814879375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.819280 env[1338]: time="2024-02-08T23:37:00.819197122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.822202 env[1338]: time="2024-02-08T23:37:00.822171754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.825946 env[1338]: time="2024-02-08T23:37:00.825910395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.853342 env[1338]: time="2024-02-08T23:37:00.853288493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.856163 env[1338]: time="2024-02-08T23:37:00.856116824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.875839 env[1338]: time="2024-02-08T23:37:00.875795039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.885281 env[1338]: time="2024-02-08T23:37:00.885225241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:00.919205 env[1338]: time="2024-02-08T23:37:00.916477482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:00.919205 env[1338]: time="2024-02-08T23:37:00.916508182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:00.919205 env[1338]: time="2024-02-08T23:37:00.916517882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:00.919205 env[1338]: time="2024-02-08T23:37:00.916636284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ba8660d441ff1e255a553d5259e4ea1d52cefa404cad611a488d92df2ccd9a1 pid=2128 runtime=io.containerd.runc.v2 Feb 8 23:37:00.938669 systemd[1]: Started cri-containerd-2ba8660d441ff1e255a553d5259e4ea1d52cefa404cad611a488d92df2ccd9a1.scope. Feb 8 23:37:00.957343 env[1338]: time="2024-02-08T23:37:00.957280826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:00.957540 env[1338]: time="2024-02-08T23:37:00.957519429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:00.957635 env[1338]: time="2024-02-08T23:37:00.957619230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:00.957944 env[1338]: time="2024-02-08T23:37:00.957889633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ad19bdc98fae88e07aa707fcbcc8b063e54202aea6269e2aa82aa4639328084 pid=2163 runtime=io.containerd.runc.v2 Feb 8 23:37:00.975973 systemd[1]: Started cri-containerd-0ad19bdc98fae88e07aa707fcbcc8b063e54202aea6269e2aa82aa4639328084.scope. Feb 8 23:37:00.996187 env[1338]: time="2024-02-08T23:37:00.995916947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:00.996187 env[1338]: time="2024-02-08T23:37:00.995970048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:00.996187 env[1338]: time="2024-02-08T23:37:00.995984048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:00.996187 env[1338]: time="2024-02-08T23:37:00.996161250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a5614de51cf45085bf1706f344766a1d5da5882a2515b53d65b83f5cead3421 pid=2193 runtime=io.containerd.runc.v2 Feb 8 23:37:01.012608 env[1338]: time="2024-02-08T23:37:01.012566126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-baa4ff5fd1,Uid:4021bae91e03ae5d8577c58089b5eb7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba8660d441ff1e255a553d5259e4ea1d52cefa404cad611a488d92df2ccd9a1\"" Feb 8 23:37:01.017790 env[1338]: time="2024-02-08T23:37:01.017742581Z" level=info msg="CreateContainer within sandbox \"2ba8660d441ff1e255a553d5259e4ea1d52cefa404cad611a488d92df2ccd9a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:37:01.034708 systemd[1]: Started cri-containerd-0a5614de51cf45085bf1706f344766a1d5da5882a2515b53d65b83f5cead3421.scope. 
Feb 8 23:37:01.074551 kubelet[2091]: W0208 23:37:01.074418 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.074551 kubelet[2091]: E0208 23:37:01.074499 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.079543 env[1338]: time="2024-02-08T23:37:01.079492237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1,Uid:3c43594631d665c0a38225266174683a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ad19bdc98fae88e07aa707fcbcc8b063e54202aea6269e2aa82aa4639328084\"" Feb 8 23:37:01.083449 env[1338]: time="2024-02-08T23:37:01.083414378Z" level=info msg="CreateContainer within sandbox \"0ad19bdc98fae88e07aa707fcbcc8b063e54202aea6269e2aa82aa4639328084\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:37:01.100345 env[1338]: time="2024-02-08T23:37:01.100306958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-baa4ff5fd1,Uid:0b62764cb9db0315b921eac32a107f4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a5614de51cf45085bf1706f344766a1d5da5882a2515b53d65b83f5cead3421\"" Feb 8 23:37:01.102897 env[1338]: time="2024-02-08T23:37:01.102866485Z" level=info msg="CreateContainer within sandbox \"0a5614de51cf45085bf1706f344766a1d5da5882a2515b53d65b83f5cead3421\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:37:01.186841 kubelet[2091]: E0208 23:37:01.186800 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-baa4ff5fd1?timeout=10s\": dial tcp 10.200.8.36:6443: connect: connection refused" interval="1.6s" Feb 8 23:37:01.205705 kubelet[2091]: W0208 23:37:01.205542 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.205705 kubelet[2091]: E0208 23:37:01.205619 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.290238 kubelet[2091]: I0208 23:37:01.290197 2091 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:01.290559 kubelet[2091]: E0208 23:37:01.290537 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.36:6443/api/v1/nodes\": dial tcp 10.200.8.36:6443: connect: connection refused" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:01.335339 kubelet[2091]: W0208 23:37:01.335287 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.335339 
kubelet[2091]: E0208 23:37:01.335335 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.675104 env[1338]: time="2024-02-08T23:37:01.675023363Z" level=info msg="CreateContainer within sandbox \"2ba8660d441ff1e255a553d5259e4ea1d52cefa404cad611a488d92df2ccd9a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e5093b63b9f5d2c9330dc1feefca320ab46d63cfb10c62eb2ebf8e7ed749a9e\"" Feb 8 23:37:01.676402 env[1338]: time="2024-02-08T23:37:01.676361777Z" level=info msg="StartContainer for \"0e5093b63b9f5d2c9330dc1feefca320ab46d63cfb10c62eb2ebf8e7ed749a9e\"" Feb 8 23:37:01.695631 systemd[1]: Started cri-containerd-0e5093b63b9f5d2c9330dc1feefca320ab46d63cfb10c62eb2ebf8e7ed749a9e.scope. Feb 8 23:37:01.727012 env[1338]: time="2024-02-08T23:37:01.726939714Z" level=info msg="CreateContainer within sandbox \"0ad19bdc98fae88e07aa707fcbcc8b063e54202aea6269e2aa82aa4639328084\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a867a3de36e4909bc972969bf19ecb3c5eb38c625a6d7b5672c7a7cea54c1faf\"" Feb 8 23:37:01.727879 env[1338]: time="2024-02-08T23:37:01.727852024Z" level=info msg="StartContainer for \"a867a3de36e4909bc972969bf19ecb3c5eb38c625a6d7b5672c7a7cea54c1faf\"" Feb 8 23:37:01.740601 env[1338]: time="2024-02-08T23:37:01.740562059Z" level=info msg="CreateContainer within sandbox \"0a5614de51cf45085bf1706f344766a1d5da5882a2515b53d65b83f5cead3421\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be87701de371276a809e398bbafaaf3187a1095dcb1de82c87a4ac8a85286f35\"" Feb 8 23:37:01.741248 env[1338]: time="2024-02-08T23:37:01.741215366Z" level=info msg="StartContainer for \"be87701de371276a809e398bbafaaf3187a1095dcb1de82c87a4ac8a85286f35\"" Feb 8 23:37:01.763044 env[1338]: time="2024-02-08T23:37:01.763001597Z" level=info msg="StartContainer for \"0e5093b63b9f5d2c9330dc1feefca320ab46d63cfb10c62eb2ebf8e7ed749a9e\" returns successfully" Feb 8 23:37:01.787759 systemd[1]: Started cri-containerd-a867a3de36e4909bc972969bf19ecb3c5eb38c625a6d7b5672c7a7cea54c1faf.scope. Feb 8 23:37:01.805034 systemd[1]: Started cri-containerd-be87701de371276a809e398bbafaaf3187a1095dcb1de82c87a4ac8a85286f35.scope. 
Feb 8 23:37:01.877152 env[1338]: time="2024-02-08T23:37:01.877088909Z" level=info msg="StartContainer for \"a867a3de36e4909bc972969bf19ecb3c5eb38c625a6d7b5672c7a7cea54c1faf\" returns successfully" Feb 8 23:37:01.918147 kubelet[2091]: E0208 23:37:01.918101 2091 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.36:6443: connect: connection refused Feb 8 23:37:01.953112 env[1338]: time="2024-02-08T23:37:01.952989116Z" level=info msg="StartContainer for \"be87701de371276a809e398bbafaaf3187a1095dcb1de82c87a4ac8a85286f35\" returns successfully" Feb 8 23:37:02.892711 kubelet[2091]: I0208 23:37:02.892669 2091 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:04.401409 kubelet[2091]: E0208 23:37:04.401354 2091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-baa4ff5fd1\" not found" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:04.442210 kubelet[2091]: I0208 23:37:04.442176 2091 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:04.774262 kubelet[2091]: I0208 23:37:04.774157 2091 apiserver.go:52] "Watching apiserver" Feb 8 23:37:04.781880 kubelet[2091]: I0208 23:37:04.781857 2091 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:37:04.867986 kubelet[2091]: E0208 23:37:04.867944 2091 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-baa4ff5fd1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:06.177815 kubelet[2091]: W0208 23:37:06.177779 2091 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:37:06.930043 systemd[1]: Reloading. Feb 8 23:37:07.004547 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2024-02-08T23:37:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:37:07.004587 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2024-02-08T23:37:07Z" level=info msg="torcx already run" Feb 8 23:37:07.105167 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:37:07.105187 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:37:08.015283 kubelet[2446]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:37:08.015283 kubelet[2446]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
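The earlier mirror-pod failure ("no PriorityClass with name system-node-critical was found") resolves itself: system-node-critical is one of the built-in priority classes the API server creates on startup, so it simply did not exist yet while the API server was still coming up. For reference, its standard definition is shown below (built-in object, reproduced from memory rather than read from this cluster):

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000            # highest built-in priority; system-cluster-critical is 2000000000
    globalDefault: false
    description: Used for system critical pods that must not be moved from their current node.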
Feb 8 23:37:08.015283 kubelet[2446]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:37:08.015283 kubelet[2446]: I0208 23:37:07.330214 2446 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:37:08.015283 kubelet[2446]: I0208 23:37:07.336741 2446 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 8 23:37:08.015283 kubelet[2446]: I0208 23:37:07.336757 2446 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:37:08.015283 kubelet[2446]: I0208 23:37:07.336931 2446 server.go:895] "Client rotation is on, will bootstrap in background" Feb 8 23:37:07.123372 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:37:07.234353 systemd[1]: Stopping kubelet.service... Feb 8 23:37:07.253663 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:37:07.253833 systemd[1]: Stopped kubelet.service. Feb 8 23:37:07.255849 systemd[1]: Started kubelet.service. Feb 8 23:37:08.022559 kubelet[2446]: I0208 23:37:08.020445 2446 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:37:08.022559 kubelet[2446]: I0208 23:37:08.022418 2446 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:37:08.033190 kubelet[2446]: I0208 23:37:08.033165 2446 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:37:08.033428 kubelet[2446]: I0208 23:37:08.033410 2446 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:37:08.033676 kubelet[2446]: I0208 23:37:08.033625 2446 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 8 23:37:08.033815 kubelet[2446]: I0208 23:37:08.033679 2446 topology_manager.go:138] "Creating topology manager with none policy" Feb 8 23:37:08.033815 kubelet[2446]: I0208 23:37:08.033693 2446 container_manager_linux.go:301] "Creating device plugin manager" Feb 8 23:37:08.033815 kubelet[2446]: I0208 23:37:08.033739 2446 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:37:08.033940 kubelet[2446]: I0208 23:37:08.033843 2446 kubelet.go:393] "Attempting to sync node with API server" Feb 8 23:37:08.033940 kubelet[2446]: I0208 23:37:08.033859 2446 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:37:08.033940 kubelet[2446]: I0208 23:37:08.033889 2446 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:37:08.033940 kubelet[2446]: I0208 23:37:08.033905 2446 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:37:08.035262 kubelet[2446]: I0208 23:37:08.035245 2446 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:37:08.035921 kubelet[2446]: I0208 23:37:08.035904 2446 server.go:1232] "Started kubelet" Feb 8 23:37:08.038194 kubelet[2446]: I0208 23:37:08.038178 2446 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:37:08.038826 kubelet[2446]: I0208 23:37:08.038789 2446 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:37:08.040824 kubelet[2446]: I0208 23:37:08.040805 2446 server.go:462] "Adding debug handlers to kubelet server" Feb 8 23:37:08.043031 kubelet[2446]: I0208 23:37:08.043000 2446 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:37:08.043329 kubelet[2446]: I0208 23:37:08.043315 2446 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 8 23:37:08.046535 kubelet[2446]: I0208 23:37:08.046518 2446 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 8 23:37:08.047044 kubelet[2446]: I0208 23:37:08.047019 2446 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:37:08.047203 kubelet[2446]: I0208 23:37:08.047186 2446 reconciler_new.go:29] "Reconciler: start to sync state" Feb 8 23:37:08.055065 kubelet[2446]: I0208 23:37:08.055049 2446 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 8 23:37:08.056241 kubelet[2446]: I0208 23:37:08.056226 2446 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 8 23:37:08.056351 kubelet[2446]: I0208 23:37:08.056342 2446 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 8 23:37:08.056424 kubelet[2446]: I0208 23:37:08.056417 2446 kubelet.go:2303] "Starting kubelet main sync loop" Feb 8 23:37:08.056532 kubelet[2446]: E0208 23:37:08.056521 2446 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:37:08.058155 kubelet[2446]: E0208 23:37:08.058096 2446 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:37:08.058155 kubelet[2446]: E0208 23:37:08.058140 2446 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:37:08.137037 kubelet[2446]: I0208 23:37:08.137007 2446 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:37:08.137037 kubelet[2446]: I0208 23:37:08.137028 2446 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:37:08.137037 kubelet[2446]: I0208 23:37:08.137047 2446 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:37:08.137304 kubelet[2446]: I0208 23:37:08.137233 2446 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:37:08.137304 kubelet[2446]: I0208 23:37:08.137261 2446 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 8 23:37:08.137304 kubelet[2446]: I0208 23:37:08.137270 2446 policy_none.go:49] "None policy: Start" Feb 8 23:37:08.137823 kubelet[2446]: I0208 23:37:08.137801 2446 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:37:08.137823 kubelet[2446]: I0208 23:37:08.137825 2446 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:37:08.137989 kubelet[2446]: I0208 23:37:08.137968 2446 state_mem.go:75] "Updated machine memory state" Feb 8 23:37:08.141603 kubelet[2446]: I0208 23:37:08.141581 2446 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:37:08.144343 kubelet[2446]: I0208 23:37:08.144150 2446 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:37:08.151238 kubelet[2446]: I0208 23:37:08.151184 2446 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.157175 kubelet[2446]: I0208 23:37:08.157157 2446 topology_manager.go:215] "Topology Admit Handler" podUID="4021bae91e03ae5d8577c58089b5eb7c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.157474 kubelet[2446]: I0208 23:37:08.157454 2446 topology_manager.go:215] "Topology Admit 
Handler" podUID="3c43594631d665c0a38225266174683a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.157600 kubelet[2446]: I0208 23:37:08.157591 2446 topology_manager.go:215] "Topology Admit Handler" podUID="0b62764cb9db0315b921eac32a107f4a" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.162263 kubelet[2446]: W0208 23:37:08.162247 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:37:08.168405 kubelet[2446]: W0208 23:37:08.168380 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:37:08.168613 kubelet[2446]: E0208 23:37:08.168599 2446 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.168872 kubelet[2446]: W0208 23:37:08.168857 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:37:08.171136 kubelet[2446]: I0208 23:37:08.171110 2446 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.171228 kubelet[2446]: I0208 23:37:08.171198 2446 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348187 kubelet[2446]: I0208 23:37:08.348146 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348468 kubelet[2446]: I0208 23:37:08.348441 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348617 kubelet[2446]: I0208 23:37:08.348489 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348617 kubelet[2446]: I0208 23:37:08.348526 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348617 kubelet[2446]: I0208 23:37:08.348566 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348789 kubelet[2446]: I0208 23:37:08.348624 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348789 kubelet[2446]: I0208 23:37:08.348677 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c43594631d665c0a38225266174683a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"3c43594631d665c0a38225266174683a\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348789 kubelet[2446]: I0208 23:37:08.348713 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b62764cb9db0315b921eac32a107f4a-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"0b62764cb9db0315b921eac32a107f4a\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.348789 kubelet[2446]: I0208 23:37:08.348750 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4021bae91e03ae5d8577c58089b5eb7c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" (UID: \"4021bae91e03ae5d8577c58089b5eb7c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:08.798202 sudo[2475]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:37:08.798787 sudo[2475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:37:09.034756 kubelet[2446]: I0208 23:37:09.034690 2446 apiserver.go:52] "Watching apiserver" Feb 8 23:37:09.047609 kubelet[2446]: I0208 23:37:09.047573 2446 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:37:09.125072 kubelet[2446]: W0208 23:37:09.124972 2446 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:37:09.125356 kubelet[2446]: E0208 23:37:09.125336 2446 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-baa4ff5fd1\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" Feb 8 23:37:09.158614 kubelet[2446]: I0208 23:37:09.158586 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-baa4ff5fd1" podStartSLOduration=1.158527578 podCreationTimestamp="2024-02-08 23:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:09.148169088 +0000 UTC m=+1.887808293" watchObservedRunningTime="2024-02-08 23:37:09.158527578 +0000 UTC m=+1.898166783" Feb 8 23:37:09.167413 kubelet[2446]: I0208 23:37:09.167390 2446 pod_startup_latency_tracker.go:102] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-baa4ff5fd1" podStartSLOduration=3.167347255 podCreationTimestamp="2024-02-08 23:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:09.159344385 +0000 UTC m=+1.898983590" watchObservedRunningTime="2024-02-08 23:37:09.167347255 +0000 UTC m=+1.906986460" Feb 8 23:37:09.175813 kubelet[2446]: I0208 23:37:09.175786 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-baa4ff5fd1" podStartSLOduration=1.175741728 podCreationTimestamp="2024-02-08 23:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:09.168334663 +0000 UTC m=+1.907973868" watchObservedRunningTime="2024-02-08 23:37:09.175741728 +0000 UTC m=+1.915380933" Feb 8 23:37:09.362028 sudo[2475]: pam_unix(sudo:session): session closed for user root Feb 8 23:37:10.928222 sudo[1677]: pam_unix(sudo:session): session closed for user root Feb 8 23:37:11.026735 sshd[1674]: pam_unix(sshd:session): session closed for user core Feb 8 23:37:11.029740 systemd[1]: sshd@4-10.200.8.36:22-10.200.12.6:60224.service: Deactivated successfully. Feb 8 23:37:11.030635 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:37:11.030845 systemd[1]: session-7.scope: Consumed 3.963s CPU time. Feb 8 23:37:11.031413 systemd-logind[1327]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:37:11.032224 systemd-logind[1327]: Removed session 7. Feb 8 23:37:21.552854 kubelet[2446]: I0208 23:37:21.552822 2446 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:37:21.553634 env[1338]: time="2024-02-08T23:37:21.553586633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:37:21.553976 kubelet[2446]: I0208 23:37:21.553809 2446 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:37:22.093362 kubelet[2446]: I0208 23:37:22.093323 2446 topology_manager.go:215] "Topology Admit Handler" podUID="30d37b1c-dd61-4dce-9ff0-544dc42f79a7" podNamespace="kube-system" podName="kube-proxy-zfcnp" Feb 8 23:37:22.099953 systemd[1]: Created slice kubepods-besteffort-pod30d37b1c_dd61_4dce_9ff0_544dc42f79a7.slice. Feb 8 23:37:22.121088 kubelet[2446]: I0208 23:37:22.121059 2446 topology_manager.go:215] "Topology Admit Handler" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" podNamespace="kube-system" podName="cilium-52rv2" Feb 8 23:37:22.126812 systemd[1]: Created slice kubepods-burstable-pod815aab58_6d9b_44a5_a1ae_d621a0146a8e.slice. 
Feb 8 23:37:22.142257 kubelet[2446]: I0208 23:37:22.142231 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mncl9\" (UniqueName: \"kubernetes.io/projected/30d37b1c-dd61-4dce-9ff0-544dc42f79a7-kube-api-access-mncl9\") pod \"kube-proxy-zfcnp\" (UID: \"30d37b1c-dd61-4dce-9ff0-544dc42f79a7\") " pod="kube-system/kube-proxy-zfcnp" Feb 8 23:37:22.142455 kubelet[2446]: I0208 23:37:22.142441 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-run\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.142600 kubelet[2446]: I0208 23:37:22.142586 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30d37b1c-dd61-4dce-9ff0-544dc42f79a7-xtables-lock\") pod \"kube-proxy-zfcnp\" (UID: \"30d37b1c-dd61-4dce-9ff0-544dc42f79a7\") " pod="kube-system/kube-proxy-zfcnp" Feb 8 23:37:22.142716 kubelet[2446]: I0208 23:37:22.142705 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqlv\" (UniqueName: \"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-kube-api-access-wvqlv\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.142826 kubelet[2446]: I0208 23:37:22.142816 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-config-path\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.142952 kubelet[2446]: I0208 23:37:22.142941 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-etc-cni-netd\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143063 kubelet[2446]: I0208 23:37:22.143051 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-xtables-lock\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143185 kubelet[2446]: I0208 23:37:22.143174 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-net\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143293 kubelet[2446]: I0208 23:37:22.143283 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-bpf-maps\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143403 kubelet[2446]: I0208 23:37:22.143391 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cni-path\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143511 kubelet[2446]: I0208 23:37:22.143500 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/815aab58-6d9b-44a5-a1ae-d621a0146a8e-clustermesh-secrets\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143626 kubelet[2446]: I0208 23:37:22.143610 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-kernel\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143697 kubelet[2446]: I0208 23:37:22.143662 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hubble-tls\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143745 kubelet[2446]: I0208 23:37:22.143701 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hostproc\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143745 kubelet[2446]: I0208 23:37:22.143736 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-cgroup\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143833 kubelet[2446]: I0208 23:37:22.143776 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-lib-modules\") pod \"cilium-52rv2\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " pod="kube-system/cilium-52rv2" Feb 8 23:37:22.143833 kubelet[2446]: I0208 23:37:22.143804 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30d37b1c-dd61-4dce-9ff0-544dc42f79a7-kube-proxy\") pod \"kube-proxy-zfcnp\" (UID: \"30d37b1c-dd61-4dce-9ff0-544dc42f79a7\") " pod="kube-system/kube-proxy-zfcnp" Feb 8 23:37:22.143913 kubelet[2446]: I0208 23:37:22.143834 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30d37b1c-dd61-4dce-9ff0-544dc42f79a7-lib-modules\") pod \"kube-proxy-zfcnp\" (UID: \"30d37b1c-dd61-4dce-9ff0-544dc42f79a7\") " pod="kube-system/kube-proxy-zfcnp" Feb 8 23:37:22.409532 env[1338]: time="2024-02-08T23:37:22.409407352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfcnp,Uid:30d37b1c-dd61-4dce-9ff0-544dc42f79a7,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:22.435952 env[1338]: time="2024-02-08T23:37:22.430807391Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-52rv2,Uid:815aab58-6d9b-44a5-a1ae-d621a0146a8e,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:22.445115 env[1338]: time="2024-02-08T23:37:22.441758662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:22.445115 env[1338]: time="2024-02-08T23:37:22.441797962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:22.445115 env[1338]: time="2024-02-08T23:37:22.441812162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:22.445115 env[1338]: time="2024-02-08T23:37:22.441928263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1837e86e2e0e96a72040a1ba4db8962166ddd0addf5fdabd136979b99e33b07c pid=2528 runtime=io.containerd.runc.v2 Feb 8 23:37:22.460219 systemd[1]: Started cri-containerd-1837e86e2e0e96a72040a1ba4db8962166ddd0addf5fdabd136979b99e33b07c.scope. Feb 8 23:37:22.474895 env[1338]: time="2024-02-08T23:37:22.474816876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:22.474895 env[1338]: time="2024-02-08T23:37:22.474871477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:22.475193 env[1338]: time="2024-02-08T23:37:22.475145379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:22.475464 env[1338]: time="2024-02-08T23:37:22.475414280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5 pid=2559 runtime=io.containerd.runc.v2 Feb 8 23:37:22.499796 systemd[1]: Started cri-containerd-49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5.scope. 
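[editor's note] The VerifyControllerAttachedVolume and RunPodSandbox entries above trace the kubelet handing pod creation to containerd over the CRI. As a minimal illustrative sketch (not the kubelet's actual code path), the same RunPodSandbox call can be issued directly against containerd's CRI socket; the socket path and the pod metadata below are taken from the log and the default containerd setup, both assumptions.

```go
// Sketch: issue a CRI RunPodSandbox call like the ones logged above.
// Assumes containerd's default CRI socket at /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-zfcnp", // metadata copied from the log entry above
				Uid:       "30d37b1c-dd61-4dce-9ff0-544dc42f79a7",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to the "returns sandbox id ..." entries that follow in the log.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```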
Feb 8 23:37:22.523428 env[1338]: time="2024-02-08T23:37:22.523395392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfcnp,Uid:30d37b1c-dd61-4dce-9ff0-544dc42f79a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1837e86e2e0e96a72040a1ba4db8962166ddd0addf5fdabd136979b99e33b07c\"" Feb 8 23:37:22.528354 env[1338]: time="2024-02-08T23:37:22.528322424Z" level=info msg="CreateContainer within sandbox \"1837e86e2e0e96a72040a1ba4db8962166ddd0addf5fdabd136979b99e33b07c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:37:22.538481 env[1338]: time="2024-02-08T23:37:22.538451190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52rv2,Uid:815aab58-6d9b-44a5-a1ae-d621a0146a8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\"" Feb 8 23:37:22.541050 env[1338]: time="2024-02-08T23:37:22.540266901Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:37:22.574831 env[1338]: time="2024-02-08T23:37:22.574781625Z" level=info msg="CreateContainer within sandbox \"1837e86e2e0e96a72040a1ba4db8962166ddd0addf5fdabd136979b99e33b07c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05ba1f142e77072e95dc7fd4c3b374ce3456349aa352bb095b03427250906b9a\"" Feb 8 23:37:22.577079 env[1338]: time="2024-02-08T23:37:22.576416236Z" level=info msg="StartContainer for \"05ba1f142e77072e95dc7fd4c3b374ce3456349aa352bb095b03427250906b9a\"" Feb 8 23:37:22.597567 kubelet[2446]: I0208 23:37:22.597529 2446 topology_manager.go:215] "Topology Admit Handler" podUID="65591a0e-5ae4-4cb1-b827-2927911805da" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-n8hbc" Feb 8 23:37:22.603467 systemd[1]: Created slice kubepods-besteffort-pod65591a0e_5ae4_4cb1_b827_2927911805da.slice. Feb 8 23:37:22.613969 systemd[1]: Started cri-containerd-05ba1f142e77072e95dc7fd4c3b374ce3456349aa352bb095b03427250906b9a.scope. Feb 8 23:37:22.646642 kubelet[2446]: I0208 23:37:22.646606 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28ql\" (UniqueName: \"kubernetes.io/projected/65591a0e-5ae4-4cb1-b827-2927911805da-kube-api-access-c28ql\") pod \"cilium-operator-6bc8ccdb58-n8hbc\" (UID: \"65591a0e-5ae4-4cb1-b827-2927911805da\") " pod="kube-system/cilium-operator-6bc8ccdb58-n8hbc" Feb 8 23:37:22.646804 kubelet[2446]: I0208 23:37:22.646663 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65591a0e-5ae4-4cb1-b827-2927911805da-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-n8hbc\" (UID: \"65591a0e-5ae4-4cb1-b827-2927911805da\") " pod="kube-system/cilium-operator-6bc8ccdb58-n8hbc" Feb 8 23:37:22.678974 env[1338]: time="2024-02-08T23:37:22.678000695Z" level=info msg="StartContainer for \"05ba1f142e77072e95dc7fd4c3b374ce3456349aa352bb095b03427250906b9a\" returns successfully" Feb 8 23:37:22.909438 env[1338]: time="2024-02-08T23:37:22.909398898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-n8hbc,Uid:65591a0e-5ae4-4cb1-b827-2927911805da,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:22.949132 env[1338]: time="2024-02-08T23:37:22.948837254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:22.949333 env[1338]: time="2024-02-08T23:37:22.948880754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:22.949333 env[1338]: time="2024-02-08T23:37:22.948893554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:22.949507 env[1338]: time="2024-02-08T23:37:22.949380457Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a pid=2721 runtime=io.containerd.runc.v2 Feb 8 23:37:22.962188 systemd[1]: Started cri-containerd-cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a.scope. Feb 8 23:37:23.006331 env[1338]: time="2024-02-08T23:37:23.006276726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-n8hbc,Uid:65591a0e-5ae4-4cb1-b827-2927911805da,Namespace:kube-system,Attempt:0,} returns sandbox id \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\"" Feb 8 23:37:28.162783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466451253.mount: Deactivated successfully. Feb 8 23:37:30.898980 env[1338]: time="2024-02-08T23:37:30.898928803Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:30.906507 env[1338]: time="2024-02-08T23:37:30.906465845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:30.911504 env[1338]: time="2024-02-08T23:37:30.911469472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:30.912066 env[1338]: time="2024-02-08T23:37:30.912031976Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:37:30.913671 env[1338]: time="2024-02-08T23:37:30.913634984Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:37:30.914919 env[1338]: time="2024-02-08T23:37:30.914887991Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:37:30.944052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112452281.mount: Deactivated successfully. 
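[editor's note] The PullImage entries above resolve the digest-pinned Cilium image to a local image ID ("returns image reference \"sha256:3e35...\"") before the first init container is created. A sketch of the same pull over the CRI image service follows; as before, the containerd socket path is an assumption, and the image reference is copied verbatim from the log.

```go
// Sketch: pull the digest-pinned Cilium image over the CRI image service,
// mirroring the PullImage entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			// Tag-plus-digest reference copied from the log; the digest pins the content.
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned reference is the local image ID the kubelet then uses for CreateContainer.
	fmt.Println("image ref:", resp.ImageRef)
}
```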
Feb 8 23:37:30.956277 env[1338]: time="2024-02-08T23:37:30.956236619Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\"" Feb 8 23:37:30.958662 env[1338]: time="2024-02-08T23:37:30.956955723Z" level=info msg="StartContainer for \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\"" Feb 8 23:37:30.981981 systemd[1]: Started cri-containerd-38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d.scope. Feb 8 23:37:31.014993 env[1338]: time="2024-02-08T23:37:31.014944042Z" level=info msg="StartContainer for \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\" returns successfully" Feb 8 23:37:31.021870 systemd[1]: cri-containerd-38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d.scope: Deactivated successfully. Feb 8 23:37:31.676388 kubelet[2446]: I0208 23:37:31.169830 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zfcnp" podStartSLOduration=9.169788879 podCreationTimestamp="2024-02-08 23:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:23.144596105 +0000 UTC m=+15.884235410" watchObservedRunningTime="2024-02-08 23:37:31.169788879 +0000 UTC m=+23.909428084" Feb 8 23:37:31.942339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d-rootfs.mount: Deactivated successfully. Feb 8 23:37:34.721733 env[1338]: time="2024-02-08T23:37:34.721667472Z" level=info msg="shim disconnected" id=38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d Feb 8 23:37:34.722434 env[1338]: time="2024-02-08T23:37:34.722400976Z" level=warning msg="cleaning up after shim disconnected" id=38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d namespace=k8s.io Feb 8 23:37:34.722434 env[1338]: time="2024-02-08T23:37:34.722425676Z" level=info msg="cleaning up dead shim" Feb 8 23:37:34.732376 env[1338]: time="2024-02-08T23:37:34.732339027Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:37:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2852 runtime=io.containerd.runc.v2\n" Feb 8 23:37:35.166696 env[1338]: time="2024-02-08T23:37:35.166643731Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:37:35.244256 env[1338]: time="2024-02-08T23:37:35.244199020Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\"" Feb 8 23:37:35.247276 env[1338]: time="2024-02-08T23:37:35.244823823Z" level=info msg="StartContainer for \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\"" Feb 8 23:37:35.273944 systemd[1]: Started cri-containerd-d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca.scope. 
Feb 8 23:37:35.311081 env[1338]: time="2024-02-08T23:37:35.311043356Z" level=info msg="StartContainer for \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\" returns successfully" Feb 8 23:37:35.318762 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:37:35.319472 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:37:35.319720 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:37:35.322304 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:37:35.327790 systemd[1]: cri-containerd-d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca.scope: Deactivated successfully. Feb 8 23:37:35.333446 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:37:35.366012 env[1338]: time="2024-02-08T23:37:35.365962031Z" level=info msg="shim disconnected" id=d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca Feb 8 23:37:35.366012 env[1338]: time="2024-02-08T23:37:35.366011931Z" level=warning msg="cleaning up after shim disconnected" id=d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca namespace=k8s.io Feb 8 23:37:35.366336 env[1338]: time="2024-02-08T23:37:35.366023232Z" level=info msg="cleaning up dead shim" Feb 8 23:37:35.374476 env[1338]: time="2024-02-08T23:37:35.374428774Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:37:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2914 runtime=io.containerd.runc.v2\n" Feb 8 23:37:36.186882 env[1338]: time="2024-02-08T23:37:36.186833834Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:37:36.234994 systemd[1]: run-containerd-runc-k8s.io-d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca-runc.yLDeTc.mount: Deactivated successfully. Feb 8 23:37:36.235168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca-rootfs.mount: Deactivated successfully. Feb 8 23:37:36.256004 env[1338]: time="2024-02-08T23:37:36.255951174Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\"" Feb 8 23:37:36.256718 env[1338]: time="2024-02-08T23:37:36.256683078Z" level=info msg="StartContainer for \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\"" Feb 8 23:37:36.293976 systemd[1]: Started cri-containerd-d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5.scope. Feb 8 23:37:36.349004 systemd[1]: cri-containerd-d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5.scope: Deactivated successfully. Feb 8 23:37:36.351897 env[1338]: time="2024-02-08T23:37:36.351858747Z" level=info msg="StartContainer for \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\" returns successfully" Feb 8 23:37:36.376418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5-rootfs.mount: Deactivated successfully. 
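[editor's note] mount-cgroup, apply-sysctl-overwrites and mount-bpf-fs are Cilium init steps that run to completion, which is why each "StartContainer ... returns successfully" above is immediately followed by the container scope deactivating and the shim being cleaned up. The mount-bpf-fs step conventionally leaves the BPF filesystem mounted at /sys/fs/bpf; the sketch below checks for that mount on the node (the mount point is an assumption, the log does not state it explicitly).

```go
// Sketch: confirm a bpf filesystem is mounted (what Cilium's mount-bpf-fs step sets up).
// Reads /proc/self/mounts; must be run on the node itself.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line: <source> <mountpoint> <fstype> <options> <dump> <pass>
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[2] == "bpf" {
			fmt.Printf("bpffs mounted at %s\n", fields[1])
			return
		}
	}
	fmt.Println("no bpf filesystem mounted")
}
```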
Feb 8 23:37:36.835646 env[1338]: time="2024-02-08T23:37:36.835577931Z" level=info msg="shim disconnected" id=d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5 Feb 8 23:37:36.835646 env[1338]: time="2024-02-08T23:37:36.835648431Z" level=warning msg="cleaning up after shim disconnected" id=d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5 namespace=k8s.io Feb 8 23:37:36.835927 env[1338]: time="2024-02-08T23:37:36.835659331Z" level=info msg="cleaning up dead shim" Feb 8 23:37:36.844102 env[1338]: time="2024-02-08T23:37:36.844060973Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:37:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2972 runtime=io.containerd.runc.v2\n" Feb 8 23:37:36.888336 env[1338]: time="2024-02-08T23:37:36.888291890Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:36.894555 env[1338]: time="2024-02-08T23:37:36.894515821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:36.898539 env[1338]: time="2024-02-08T23:37:36.898509441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:37:36.898936 env[1338]: time="2024-02-08T23:37:36.898902243Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:37:36.902771 env[1338]: time="2024-02-08T23:37:36.902739962Z" level=info msg="CreateContainer within sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:37:36.934380 env[1338]: time="2024-02-08T23:37:36.934285417Z" level=info msg="CreateContainer within sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\"" Feb 8 23:37:36.935137 env[1338]: time="2024-02-08T23:37:36.935093321Z" level=info msg="StartContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\"" Feb 8 23:37:36.956995 systemd[1]: Started cri-containerd-01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c.scope. 
Feb 8 23:37:36.992310 env[1338]: time="2024-02-08T23:37:36.992261003Z" level=info msg="StartContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" returns successfully" Feb 8 23:37:37.176223 env[1338]: time="2024-02-08T23:37:37.176101394Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:37:37.213311 env[1338]: time="2024-02-08T23:37:37.213258473Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\"" Feb 8 23:37:37.214635 env[1338]: time="2024-02-08T23:37:37.214598280Z" level=info msg="StartContainer for \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\"" Feb 8 23:37:37.252537 systemd[1]: Started cri-containerd-58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95.scope. Feb 8 23:37:37.260971 systemd[1]: run-containerd-runc-k8s.io-58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95-runc.ZwpzNl.mount: Deactivated successfully. Feb 8 23:37:37.342106 systemd[1]: cri-containerd-58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95.scope: Deactivated successfully. Feb 8 23:37:37.344064 env[1338]: time="2024-02-08T23:37:37.343995906Z" level=info msg="StartContainer for \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\" returns successfully" Feb 8 23:37:37.365234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95-rootfs.mount: Deactivated successfully. 
Feb 8 23:37:37.382476 env[1338]: time="2024-02-08T23:37:37.382417992Z" level=info msg="shim disconnected" id=58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95 Feb 8 23:37:37.382719 env[1338]: time="2024-02-08T23:37:37.382480293Z" level=warning msg="cleaning up after shim disconnected" id=58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95 namespace=k8s.io Feb 8 23:37:37.382719 env[1338]: time="2024-02-08T23:37:37.382493293Z" level=info msg="cleaning up dead shim" Feb 8 23:37:37.393291 env[1338]: time="2024-02-08T23:37:37.393247045Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3064 runtime=io.containerd.runc.v2\n" Feb 8 23:37:38.188777 env[1338]: time="2024-02-08T23:37:38.188719379Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:37:38.209864 kubelet[2446]: I0208 23:37:38.209831 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-n8hbc" podStartSLOduration=2.318819175 podCreationTimestamp="2024-02-08 23:37:22 +0000 UTC" firstStartedPulling="2024-02-08 23:37:23.008290139 +0000 UTC m=+15.747929444" lastFinishedPulling="2024-02-08 23:37:36.899243344 +0000 UTC m=+29.638882649" observedRunningTime="2024-02-08 23:37:37.333756257 +0000 UTC m=+30.073395462" watchObservedRunningTime="2024-02-08 23:37:38.20977238 +0000 UTC m=+30.949411685" Feb 8 23:37:38.227684 env[1338]: time="2024-02-08T23:37:38.227638165Z" level=info msg="CreateContainer within sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\"" Feb 8 23:37:38.229960 env[1338]: time="2024-02-08T23:37:38.228392768Z" level=info msg="StartContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\"" Feb 8 23:37:38.256034 systemd[1]: Started cri-containerd-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6.scope. Feb 8 23:37:38.299778 env[1338]: time="2024-02-08T23:37:38.299285905Z" level=info msg="StartContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" returns successfully" Feb 8 23:37:38.338217 systemd[1]: run-containerd-runc-k8s.io-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6-runc.FvF9ah.mount: Deactivated successfully. Feb 8 23:37:38.506892 kubelet[2446]: I0208 23:37:38.506783 2446 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:37:38.536806 kubelet[2446]: I0208 23:37:38.536754 2446 topology_manager.go:215] "Topology Admit Handler" podUID="06094f61-1020-443b-bb18-452d2cdf5aa8" podNamespace="kube-system" podName="coredns-5dd5756b68-8pbt2" Feb 8 23:37:38.543957 systemd[1]: Created slice kubepods-burstable-pod06094f61_1020_443b_bb18_452d2cdf5aa8.slice. 
Feb 8 23:37:38.553845 kubelet[2446]: I0208 23:37:38.553815 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06094f61-1020-443b-bb18-452d2cdf5aa8-config-volume\") pod \"coredns-5dd5756b68-8pbt2\" (UID: \"06094f61-1020-443b-bb18-452d2cdf5aa8\") " pod="kube-system/coredns-5dd5756b68-8pbt2" Feb 8 23:37:38.553982 kubelet[2446]: I0208 23:37:38.553873 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8c4g\" (UniqueName: \"kubernetes.io/projected/06094f61-1020-443b-bb18-452d2cdf5aa8-kube-api-access-v8c4g\") pod \"coredns-5dd5756b68-8pbt2\" (UID: \"06094f61-1020-443b-bb18-452d2cdf5aa8\") " pod="kube-system/coredns-5dd5756b68-8pbt2" Feb 8 23:37:38.559174 kubelet[2446]: I0208 23:37:38.559149 2446 topology_manager.go:215] "Topology Admit Handler" podUID="683b0ca4-d119-44ee-a4b8-c05d04d8aa77" podNamespace="kube-system" podName="coredns-5dd5756b68-sdgl7" Feb 8 23:37:38.559439 kubelet[2446]: W0208 23:37:38.559418 2446 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-baa4ff5fd1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-baa4ff5fd1' and this object Feb 8 23:37:38.559542 kubelet[2446]: E0208 23:37:38.559461 2446 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-baa4ff5fd1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-baa4ff5fd1' and this object Feb 8 23:37:38.565324 systemd[1]: Created slice kubepods-burstable-pod683b0ca4_d119_44ee_a4b8_c05d04d8aa77.slice. Feb 8 23:37:38.654701 kubelet[2446]: I0208 23:37:38.654664 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdmg\" (UniqueName: \"kubernetes.io/projected/683b0ca4-d119-44ee-a4b8-c05d04d8aa77-kube-api-access-9jdmg\") pod \"coredns-5dd5756b68-sdgl7\" (UID: \"683b0ca4-d119-44ee-a4b8-c05d04d8aa77\") " pod="kube-system/coredns-5dd5756b68-sdgl7" Feb 8 23:37:38.654883 kubelet[2446]: I0208 23:37:38.654719 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b0ca4-d119-44ee-a4b8-c05d04d8aa77-config-volume\") pod \"coredns-5dd5756b68-sdgl7\" (UID: \"683b0ca4-d119-44ee-a4b8-c05d04d8aa77\") " pod="kube-system/coredns-5dd5756b68-sdgl7" Feb 8 23:37:39.655777 kubelet[2446]: E0208 23:37:39.655729 2446 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:37:39.656319 kubelet[2446]: E0208 23:37:39.655844 2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06094f61-1020-443b-bb18-452d2cdf5aa8-config-volume podName:06094f61-1020-443b-bb18-452d2cdf5aa8 nodeName:}" failed. No retries permitted until 2024-02-08 23:37:40.155819403 +0000 UTC m=+32.895458708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/06094f61-1020-443b-bb18-452d2cdf5aa8-config-volume") pod "coredns-5dd5756b68-8pbt2" (UID: "06094f61-1020-443b-bb18-452d2cdf5aa8") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:37:39.756221 kubelet[2446]: E0208 23:37:39.756175 2446 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:37:39.756442 kubelet[2446]: E0208 23:37:39.756288 2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/683b0ca4-d119-44ee-a4b8-c05d04d8aa77-config-volume podName:683b0ca4-d119-44ee-a4b8-c05d04d8aa77 nodeName:}" failed. No retries permitted until 2024-02-08 23:37:40.256264773 +0000 UTC m=+32.995904078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/683b0ca4-d119-44ee-a4b8-c05d04d8aa77-config-volume") pod "coredns-5dd5756b68-sdgl7" (UID: "683b0ca4-d119-44ee-a4b8-c05d04d8aa77") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:37:40.353138 env[1338]: time="2024-02-08T23:37:40.353064435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8pbt2,Uid:06094f61-1020-443b-bb18-452d2cdf5aa8,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:40.369245 env[1338]: time="2024-02-08T23:37:40.369201909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sdgl7,Uid:683b0ca4-d119-44ee-a4b8-c05d04d8aa77,Namespace:kube-system,Attempt:0,}" Feb 8 23:37:40.769650 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:37:40.769781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:37:40.772027 systemd-networkd[1490]: cilium_host: Link UP Feb 8 23:37:40.775157 systemd-networkd[1490]: cilium_net: Link UP Feb 8 23:37:40.775391 systemd-networkd[1490]: cilium_net: Gained carrier Feb 8 23:37:40.775580 systemd-networkd[1490]: cilium_host: Gained carrier Feb 8 23:37:41.011264 systemd-networkd[1490]: cilium_net: Gained IPv6LL Feb 8 23:37:41.031996 systemd-networkd[1490]: cilium_vxlan: Link UP Feb 8 23:37:41.032005 systemd-networkd[1490]: cilium_vxlan: Gained carrier Feb 8 23:37:41.302158 kernel: NET: Registered PF_ALG protocol family Feb 8 23:37:41.411311 systemd-networkd[1490]: cilium_host: Gained IPv6LL Feb 8 23:37:42.073882 systemd-networkd[1490]: lxc_health: Link UP Feb 8 23:37:42.094146 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:37:42.095625 systemd-networkd[1490]: lxc_health: Gained carrier Feb 8 23:37:42.371266 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL Feb 8 23:37:42.411982 systemd-networkd[1490]: lxccf48a9206ec3: Link UP Feb 8 23:37:42.420241 kernel: eth0: renamed from tmp71775 Feb 8 23:37:42.431235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf48a9206ec3: link becomes ready Feb 8 23:37:42.430927 systemd-networkd[1490]: lxccf48a9206ec3: Gained carrier Feb 8 23:37:42.453207 systemd-networkd[1490]: lxcba9d96022719: Link UP Feb 8 23:37:42.459207 kernel: eth0: renamed from tmp902f1 Feb 8 23:37:42.486522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba9d96022719: link becomes ready Feb 8 23:37:42.486324 systemd-networkd[1490]: lxcba9d96022719: Gained carrier Feb 8 23:37:42.518339 kubelet[2446]: I0208 23:37:42.517818 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-52rv2" 
podStartSLOduration=12.144879945 podCreationTimestamp="2024-02-08 23:37:22 +0000 UTC" firstStartedPulling="2024-02-08 23:37:22.539750198 +0000 UTC m=+15.279389403" lastFinishedPulling="2024-02-08 23:37:30.912643279 +0000 UTC m=+23.652282484" observedRunningTime="2024-02-08 23:37:39.215521445 +0000 UTC m=+31.955160750" watchObservedRunningTime="2024-02-08 23:37:42.517773026 +0000 UTC m=+35.257412331" Feb 8 23:37:44.099328 systemd-networkd[1490]: lxccf48a9206ec3: Gained IPv6LL Feb 8 23:37:44.099682 systemd-networkd[1490]: lxc_health: Gained IPv6LL Feb 8 23:37:44.227264 systemd-networkd[1490]: lxcba9d96022719: Gained IPv6LL Feb 8 23:37:46.331587 env[1338]: time="2024-02-08T23:37:46.330832153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:46.331587 env[1338]: time="2024-02-08T23:37:46.330879654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:46.331587 env[1338]: time="2024-02-08T23:37:46.330895654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:46.331587 env[1338]: time="2024-02-08T23:37:46.331029754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/902f11ea790af336a5eb29217914f0f14fef391aba8276dfed36bb01820b4b72 pid=3617 runtime=io.containerd.runc.v2 Feb 8 23:37:46.368934 systemd[1]: Started cri-containerd-902f11ea790af336a5eb29217914f0f14fef391aba8276dfed36bb01820b4b72.scope. Feb 8 23:37:46.382348 env[1338]: time="2024-02-08T23:37:46.382257668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:37:46.382635 env[1338]: time="2024-02-08T23:37:46.382592669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:37:46.382787 env[1338]: time="2024-02-08T23:37:46.382759370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:37:46.383141 env[1338]: time="2024-02-08T23:37:46.383087371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a pid=3639 runtime=io.containerd.runc.v2 Feb 8 23:37:46.421771 systemd[1]: Started cri-containerd-71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a.scope. 
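[editor's note] The kernel and systemd-networkd entries above record Cilium's datapath devices (cilium_net, cilium_host, cilium_vxlan) and the per-pod lxc* interfaces coming up and gaining IPv6 link-local addresses; the "eth0: renamed from tmpXXXX" lines are the container-side veth ends being moved and renamed, with lxccf48a9206ec3 and lxcba9d96022719 as the host-side ends for the two coredns pods. The sketch below enumerates those links with the github.com/vishvananda/netlink package (the package choice is an assumption; any rtnetlink client would do) and must run on the node.

```go
// Sketch: list the cilium_* and lxc* links reported in the log above.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		name := l.Attrs().Name
		if strings.HasPrefix(name, "cilium_") || strings.HasPrefix(name, "lxc") {
			// l.Type() is e.g. "veth" for the lxc* pair ends and "vxlan" for cilium_vxlan.
			fmt.Printf("%-20s %s\n", name, l.Type())
		}
	}
}
```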
Feb 8 23:37:46.498577 env[1338]: time="2024-02-08T23:37:46.498519852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sdgl7,Uid:683b0ca4-d119-44ee-a4b8-c05d04d8aa77,Namespace:kube-system,Attempt:0,} returns sandbox id \"902f11ea790af336a5eb29217914f0f14fef391aba8276dfed36bb01820b4b72\"" Feb 8 23:37:46.502442 env[1338]: time="2024-02-08T23:37:46.502388768Z" level=info msg="CreateContainer within sandbox \"902f11ea790af336a5eb29217914f0f14fef391aba8276dfed36bb01820b4b72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:37:46.519536 env[1338]: time="2024-02-08T23:37:46.519458740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8pbt2,Uid:06094f61-1020-443b-bb18-452d2cdf5aa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a\"" Feb 8 23:37:46.525887 env[1338]: time="2024-02-08T23:37:46.525845066Z" level=info msg="CreateContainer within sandbox \"71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:37:46.548435 env[1338]: time="2024-02-08T23:37:46.548384660Z" level=info msg="CreateContainer within sandbox \"902f11ea790af336a5eb29217914f0f14fef391aba8276dfed36bb01820b4b72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3493173161fa066792b98da4e044f24401304ca83c203876453fffada2df67a5\"" Feb 8 23:37:46.549689 env[1338]: time="2024-02-08T23:37:46.549359864Z" level=info msg="StartContainer for \"3493173161fa066792b98da4e044f24401304ca83c203876453fffada2df67a5\"" Feb 8 23:37:46.571593 systemd[1]: Started cri-containerd-3493173161fa066792b98da4e044f24401304ca83c203876453fffada2df67a5.scope. Feb 8 23:37:46.573221 env[1338]: time="2024-02-08T23:37:46.573113263Z" level=info msg="CreateContainer within sandbox \"71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12db2f4c2da7721e72870d045e018baa849150f2b8540895357b9f2d860ddb87\"" Feb 8 23:37:46.574667 env[1338]: time="2024-02-08T23:37:46.574637469Z" level=info msg="StartContainer for \"12db2f4c2da7721e72870d045e018baa849150f2b8540895357b9f2d860ddb87\"" Feb 8 23:37:46.608683 systemd[1]: Started cri-containerd-12db2f4c2da7721e72870d045e018baa849150f2b8540895357b9f2d860ddb87.scope. 
Feb 8 23:37:46.657860 env[1338]: time="2024-02-08T23:37:46.657812816Z" level=info msg="StartContainer for \"3493173161fa066792b98da4e044f24401304ca83c203876453fffada2df67a5\" returns successfully" Feb 8 23:37:46.673889 env[1338]: time="2024-02-08T23:37:46.673837983Z" level=info msg="StartContainer for \"12db2f4c2da7721e72870d045e018baa849150f2b8540895357b9f2d860ddb87\" returns successfully" Feb 8 23:37:47.222427 kubelet[2446]: I0208 23:37:47.222394 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sdgl7" podStartSLOduration=25.222353055 podCreationTimestamp="2024-02-08 23:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:47.219572944 +0000 UTC m=+39.959212249" watchObservedRunningTime="2024-02-08 23:37:47.222353055 +0000 UTC m=+39.961992260" Feb 8 23:37:47.233115 kubelet[2446]: I0208 23:37:47.233074 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8pbt2" podStartSLOduration=25.233029199 podCreationTimestamp="2024-02-08 23:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:37:47.230625889 +0000 UTC m=+39.970265194" watchObservedRunningTime="2024-02-08 23:37:47.233029199 +0000 UTC m=+39.972668504" Feb 8 23:37:47.339420 systemd[1]: run-containerd-runc-k8s.io-71775e740bb4b0f6f92f509d3972b8117e5be0f1e41190aeeae373d2ad97505a-runc.GrmBhW.mount: Deactivated successfully. Feb 8 23:39:37.184930 systemd[1]: Started sshd@5-10.200.8.36:22-10.200.12.6:50072.service. Feb 8 23:39:37.828642 sshd[3791]: Accepted publickey for core from 10.200.12.6 port 50072 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:37.830364 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:37.835208 systemd-logind[1327]: New session 8 of user core. Feb 8 23:39:37.835367 systemd[1]: Started session-8.scope. Feb 8 23:39:38.515886 sshd[3791]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:38.519158 systemd[1]: sshd@5-10.200.8.36:22-10.200.12.6:50072.service: Deactivated successfully. Feb 8 23:39:38.520146 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:39:38.521287 systemd-logind[1327]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:39:38.522352 systemd-logind[1327]: Removed session 8. Feb 8 23:39:43.622691 systemd[1]: Started sshd@6-10.200.8.36:22-10.200.12.6:50082.service. Feb 8 23:39:44.242721 sshd[3825]: Accepted publickey for core from 10.200.12.6 port 50082 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:44.244454 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:44.250390 systemd-logind[1327]: New session 9 of user core. Feb 8 23:39:44.250994 systemd[1]: Started session-9.scope. Feb 8 23:39:44.773356 sshd[3825]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:44.776609 systemd[1]: sshd@6-10.200.8.36:22-10.200.12.6:50082.service: Deactivated successfully. Feb 8 23:39:44.777584 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:39:44.778331 systemd-logind[1327]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:39:44.779101 systemd-logind[1327]: Removed session 9. Feb 8 23:39:49.836102 systemd[1]: Started sshd@7-10.200.8.36:22-10.200.12.6:55368.service. 
Feb 8 23:39:50.449160 sshd[3838]: Accepted publickey for core from 10.200.12.6 port 55368 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:50.450721 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:50.456016 systemd[1]: Started session-10.scope. Feb 8 23:39:50.456182 systemd-logind[1327]: New session 10 of user core. Feb 8 23:39:50.943852 sshd[3838]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:50.947400 systemd[1]: sshd@7-10.200.8.36:22-10.200.12.6:55368.service: Deactivated successfully. Feb 8 23:39:50.948484 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:39:50.949397 systemd-logind[1327]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:39:50.950195 systemd-logind[1327]: Removed session 10. Feb 8 23:39:56.049229 systemd[1]: Started sshd@8-10.200.8.36:22-10.200.12.6:55376.service. Feb 8 23:39:56.664550 sshd[3855]: Accepted publickey for core from 10.200.12.6 port 55376 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:56.666377 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:56.672089 systemd[1]: Started session-11.scope. Feb 8 23:39:56.672736 systemd-logind[1327]: New session 11 of user core. Feb 8 23:39:57.155390 sshd[3855]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:57.158186 systemd[1]: sshd@8-10.200.8.36:22-10.200.12.6:55376.service: Deactivated successfully. Feb 8 23:39:57.159380 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:39:57.159884 systemd-logind[1327]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:39:57.160778 systemd-logind[1327]: Removed session 11. Feb 8 23:39:57.261375 systemd[1]: Started sshd@9-10.200.8.36:22-10.200.12.6:53246.service. Feb 8 23:39:57.879511 sshd[3867]: Accepted publickey for core from 10.200.12.6 port 53246 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:57.881098 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:57.886083 systemd[1]: Started session-12.scope. Feb 8 23:39:57.886742 systemd-logind[1327]: New session 12 of user core. Feb 8 23:39:59.053884 sshd[3867]: pam_unix(sshd:session): session closed for user core Feb 8 23:39:59.057239 systemd[1]: sshd@9-10.200.8.36:22-10.200.12.6:53246.service: Deactivated successfully. Feb 8 23:39:59.058201 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:39:59.059020 systemd-logind[1327]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:39:59.059883 systemd-logind[1327]: Removed session 12. Feb 8 23:39:59.160417 systemd[1]: Started sshd@10-10.200.8.36:22-10.200.12.6:53256.service. Feb 8 23:39:59.774135 sshd[3878]: Accepted publickey for core from 10.200.12.6 port 53256 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:39:59.775603 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:39:59.780189 systemd-logind[1327]: New session 13 of user core. Feb 8 23:39:59.780838 systemd[1]: Started session-13.scope. Feb 8 23:40:00.266289 sshd[3878]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:00.269393 systemd[1]: sshd@10-10.200.8.36:22-10.200.12.6:53256.service: Deactivated successfully. Feb 8 23:40:00.270391 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:40:00.271159 systemd-logind[1327]: Session 13 logged out. Waiting for processes to exit. 
Feb 8 23:40:00.271920 systemd-logind[1327]: Removed session 13. Feb 8 23:40:05.375709 systemd[1]: Started sshd@11-10.200.8.36:22-10.200.12.6:53258.service. Feb 8 23:40:05.993895 sshd[3890]: Accepted publickey for core from 10.200.12.6 port 53258 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:05.996044 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:06.001006 systemd[1]: Started session-14.scope. Feb 8 23:40:06.001644 systemd-logind[1327]: New session 14 of user core. Feb 8 23:40:06.484859 sshd[3890]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:06.488464 systemd[1]: sshd@11-10.200.8.36:22-10.200.12.6:53258.service: Deactivated successfully. Feb 8 23:40:06.489649 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:40:06.490403 systemd-logind[1327]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:40:06.491310 systemd-logind[1327]: Removed session 14. Feb 8 23:40:11.592145 systemd[1]: Started sshd@12-10.200.8.36:22-10.200.12.6:34324.service. Feb 8 23:40:12.211733 sshd[3903]: Accepted publickey for core from 10.200.12.6 port 34324 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:12.213221 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:12.218452 systemd[1]: Started session-15.scope. Feb 8 23:40:12.219102 systemd-logind[1327]: New session 15 of user core. Feb 8 23:40:12.702804 sshd[3903]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:12.706216 systemd[1]: sshd@12-10.200.8.36:22-10.200.12.6:34324.service: Deactivated successfully. Feb 8 23:40:12.707172 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:40:12.707907 systemd-logind[1327]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:40:12.708799 systemd-logind[1327]: Removed session 15. Feb 8 23:40:17.807979 systemd[1]: Started sshd@13-10.200.8.36:22-10.200.12.6:44558.service. Feb 8 23:40:18.427544 sshd[3915]: Accepted publickey for core from 10.200.12.6 port 44558 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:18.428924 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:18.434204 systemd[1]: Started session-16.scope. Feb 8 23:40:18.434808 systemd-logind[1327]: New session 16 of user core. Feb 8 23:40:18.932759 sshd[3915]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:18.936189 systemd[1]: sshd@13-10.200.8.36:22-10.200.12.6:44558.service: Deactivated successfully. Feb 8 23:40:18.937353 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:40:18.938209 systemd-logind[1327]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:40:18.939007 systemd-logind[1327]: Removed session 16. Feb 8 23:40:19.039105 systemd[1]: Started sshd@14-10.200.8.36:22-10.200.12.6:44564.service. Feb 8 23:40:19.685876 sshd[3927]: Accepted publickey for core from 10.200.12.6 port 44564 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:19.687650 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:19.693220 systemd-logind[1327]: New session 17 of user core. Feb 8 23:40:19.693258 systemd[1]: Started session-17.scope. Feb 8 23:40:20.257202 sshd[3927]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:20.260651 systemd[1]: sshd@14-10.200.8.36:22-10.200.12.6:44564.service: Deactivated successfully. 
Feb 8 23:40:20.261843 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:40:20.262752 systemd-logind[1327]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:40:20.263709 systemd-logind[1327]: Removed session 17. Feb 8 23:40:20.362016 systemd[1]: Started sshd@15-10.200.8.36:22-10.200.12.6:44576.service. Feb 8 23:40:20.985858 sshd[3936]: Accepted publickey for core from 10.200.12.6 port 44576 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:20.987353 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:20.992463 systemd[1]: Started session-18.scope. Feb 8 23:40:20.993266 systemd-logind[1327]: New session 18 of user core. Feb 8 23:40:22.462606 sshd[3936]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:22.466284 systemd[1]: sshd@15-10.200.8.36:22-10.200.12.6:44576.service: Deactivated successfully. Feb 8 23:40:22.467443 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:40:22.468310 systemd-logind[1327]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:40:22.469331 systemd-logind[1327]: Removed session 18. Feb 8 23:40:22.567247 systemd[1]: Started sshd@16-10.200.8.36:22-10.200.12.6:44584.service. Feb 8 23:40:23.185893 sshd[3954]: Accepted publickey for core from 10.200.12.6 port 44584 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:23.187370 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:23.192552 systemd[1]: Started session-19.scope. Feb 8 23:40:23.193723 systemd-logind[1327]: New session 19 of user core. Feb 8 23:40:23.852461 sshd[3954]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:23.855734 systemd[1]: sshd@16-10.200.8.36:22-10.200.12.6:44584.service: Deactivated successfully. Feb 8 23:40:23.857189 systemd-logind[1327]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:40:23.857288 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:40:23.858783 systemd-logind[1327]: Removed session 19. Feb 8 23:40:23.959492 systemd[1]: Started sshd@17-10.200.8.36:22-10.200.12.6:44590.service. Feb 8 23:40:24.585446 sshd[3967]: Accepted publickey for core from 10.200.12.6 port 44590 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:24.586890 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:24.592147 systemd[1]: Started session-20.scope. Feb 8 23:40:24.593219 systemd-logind[1327]: New session 20 of user core. Feb 8 23:40:25.080442 sshd[3967]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:25.084160 systemd[1]: sshd@17-10.200.8.36:22-10.200.12.6:44590.service: Deactivated successfully. Feb 8 23:40:25.084180 systemd-logind[1327]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:40:25.085049 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:40:25.085896 systemd-logind[1327]: Removed session 20. Feb 8 23:40:30.184438 systemd[1]: Started sshd@18-10.200.8.36:22-10.200.12.6:43000.service. Feb 8 23:40:30.799341 sshd[3982]: Accepted publickey for core from 10.200.12.6 port 43000 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:30.800990 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:30.806684 systemd-logind[1327]: New session 21 of user core. Feb 8 23:40:30.807489 systemd[1]: Started session-21.scope. 
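[editor's note] The long run of sshd/systemd-logind entries above and below repeats the same session lifecycle from 10.200.12.6: accepted publickey, pam_unix session opened, session-N.scope started, session closed, service deactivated, session removed. A hedged troubleshooting sketch for tallying such sessions from the journal follows; it assumes journalctl is present and that sshd messages carry _COMM=sshd, and it is not something the node runs during boot.

```go
// Sketch: count accepted SSH sessions per user@source by scanning journal output.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("journalctl", "_COMM=sshd", "-o", "json", "--no-pager")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	accepted := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+)`)
	counts := map[string]int{}
	scanner := bufio.NewScanner(out)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024)
	for scanner.Scan() {
		var entry struct {
			Message string `json:"MESSAGE"`
		}
		// Binary journal payloads fail to unmarshal into a string; skip them.
		if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
			continue
		}
		if m := accepted.FindStringSubmatch(entry.Message); m != nil {
			counts[m[1]+"@"+m[2]]++
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
	for k, v := range counts {
		fmt.Printf("%-30s %d sessions\n", k, v)
	}
}
```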
Feb 8 23:40:31.297930 sshd[3982]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:31.301886 systemd-logind[1327]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:40:31.303513 systemd[1]: sshd@18-10.200.8.36:22-10.200.12.6:43000.service: Deactivated successfully. Feb 8 23:40:31.304665 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:40:31.305752 systemd-logind[1327]: Removed session 21. Feb 8 23:40:36.404183 systemd[1]: Started sshd@19-10.200.8.36:22-10.200.12.6:43010.service. Feb 8 23:40:37.025033 sshd[3994]: Accepted publickey for core from 10.200.12.6 port 43010 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:37.026430 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:37.032797 systemd[1]: Started session-22.scope. Feb 8 23:40:37.033434 systemd-logind[1327]: New session 22 of user core. Feb 8 23:40:37.516151 sshd[3994]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:37.519462 systemd-logind[1327]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:40:37.519881 systemd[1]: sshd@19-10.200.8.36:22-10.200.12.6:43010.service: Deactivated successfully. Feb 8 23:40:37.521054 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:40:37.522631 systemd-logind[1327]: Removed session 22. Feb 8 23:40:42.623662 systemd[1]: Started sshd@20-10.200.8.36:22-10.200.12.6:58476.service. Feb 8 23:40:43.249625 sshd[4009]: Accepted publickey for core from 10.200.12.6 port 58476 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:43.251002 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:43.256407 systemd[1]: Started session-23.scope. Feb 8 23:40:43.257193 systemd-logind[1327]: New session 23 of user core. Feb 8 23:40:43.742900 sshd[4009]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:43.745895 systemd[1]: sshd@20-10.200.8.36:22-10.200.12.6:58476.service: Deactivated successfully. Feb 8 23:40:43.746835 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:40:43.747589 systemd-logind[1327]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:40:43.748370 systemd-logind[1327]: Removed session 23. Feb 8 23:40:43.846903 systemd[1]: Started sshd@21-10.200.8.36:22-10.200.12.6:58480.service. Feb 8 23:40:44.461231 sshd[4022]: Accepted publickey for core from 10.200.12.6 port 58480 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:44.462685 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:44.468159 systemd[1]: Started session-24.scope. Feb 8 23:40:44.469062 systemd-logind[1327]: New session 24 of user core. Feb 8 23:40:46.197592 env[1338]: time="2024-02-08T23:40:46.197544924Z" level=info msg="StopContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" with timeout 30 (s)" Feb 8 23:40:46.198478 env[1338]: time="2024-02-08T23:40:46.198434129Z" level=info msg="Stop container \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" with signal terminated" Feb 8 23:40:46.210217 systemd[1]: run-containerd-runc-k8s.io-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6-runc.wIw1Wc.mount: Deactivated successfully. Feb 8 23:40:46.222659 systemd[1]: cri-containerd-01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c.scope: Deactivated successfully. 
Feb 8 23:40:46.237930 env[1338]: time="2024-02-08T23:40:46.237869658Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:40:46.245142 env[1338]: time="2024-02-08T23:40:46.245088001Z" level=info msg="StopContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" with timeout 2 (s)" Feb 8 23:40:46.245637 env[1338]: time="2024-02-08T23:40:46.245599104Z" level=info msg="Stop container \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" with signal terminated" Feb 8 23:40:46.254201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c-rootfs.mount: Deactivated successfully. Feb 8 23:40:46.257061 systemd-networkd[1490]: lxc_health: Link DOWN Feb 8 23:40:46.257069 systemd-networkd[1490]: lxc_health: Lost carrier Feb 8 23:40:46.276595 systemd[1]: cri-containerd-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6.scope: Deactivated successfully. Feb 8 23:40:46.276794 systemd[1]: cri-containerd-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6.scope: Consumed 7.458s CPU time. Feb 8 23:40:46.298529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6-rootfs.mount: Deactivated successfully. Feb 8 23:40:46.323076 env[1338]: time="2024-02-08T23:40:46.323023755Z" level=info msg="shim disconnected" id=4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6 Feb 8 23:40:46.323076 env[1338]: time="2024-02-08T23:40:46.323074955Z" level=warning msg="cleaning up after shim disconnected" id=4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6 namespace=k8s.io Feb 8 23:40:46.323449 env[1338]: time="2024-02-08T23:40:46.323089355Z" level=info msg="cleaning up dead shim" Feb 8 23:40:46.323449 env[1338]: time="2024-02-08T23:40:46.323337456Z" level=info msg="shim disconnected" id=01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c Feb 8 23:40:46.323449 env[1338]: time="2024-02-08T23:40:46.323379757Z" level=warning msg="cleaning up after shim disconnected" id=01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c namespace=k8s.io Feb 8 23:40:46.323449 env[1338]: time="2024-02-08T23:40:46.323391357Z" level=info msg="cleaning up dead shim" Feb 8 23:40:46.335331 env[1338]: time="2024-02-08T23:40:46.335272926Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n" Feb 8 23:40:46.337109 env[1338]: time="2024-02-08T23:40:46.337084637Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4089 runtime=io.containerd.runc.v2\n" Feb 8 23:40:46.339515 env[1338]: time="2024-02-08T23:40:46.339481051Z" level=info msg="StopContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" returns successfully" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340224855Z" level=info msg="StopPodSandbox for \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\"" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340281255Z" level=info msg="Container to stop \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340294555Z" level=info msg="Container to stop \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340312055Z" level=info msg="Container to stop \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340321955Z" level=info msg="Container to stop \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.341491 env[1338]: time="2024-02-08T23:40:46.340331356Z" level=info msg="Container to stop \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.342470 env[1338]: time="2024-02-08T23:40:46.342438568Z" level=info msg="StopContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" returns successfully" Feb 8 23:40:46.343098 env[1338]: time="2024-02-08T23:40:46.343069171Z" level=info msg="StopPodSandbox for \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\"" Feb 8 23:40:46.343283 env[1338]: time="2024-02-08T23:40:46.343255173Z" level=info msg="Container to stop \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:46.352724 systemd[1]: cri-containerd-49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5.scope: Deactivated successfully. Feb 8 23:40:46.363644 systemd[1]: cri-containerd-cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a.scope: Deactivated successfully. 
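[editor's note] The teardown entries above follow standard CRI semantics: StopContainer with a grace period (the "with signal terminated" message is the SIGTERM phase, with a kill after the 30-second timeout), then StopPodSandbox, after which the shim exits and the sandbox network is torn down. A minimal sketch of the same two calls is below; the container and sandbox IDs are copied from the log and the containerd socket path is assumed.

```go
// Sketch: mirror the StopContainer / StopPodSandbox sequence logged above for
// the cilium-operator container and its sandbox.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// "StopContainer for \"01e78822...\" with timeout 30 (s)"
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c",
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}
	// "StopPodSandbox for \"cae54b4c...\"", followed in the log by TearDown network.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a",
	}); err != nil {
		log.Fatal(err)
	}
}
```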
Feb 8 23:40:46.396682 env[1338]: time="2024-02-08T23:40:46.396620484Z" level=info msg="shim disconnected" id=49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5 Feb 8 23:40:46.396682 env[1338]: time="2024-02-08T23:40:46.396680784Z" level=warning msg="cleaning up after shim disconnected" id=49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5 namespace=k8s.io Feb 8 23:40:46.396952 env[1338]: time="2024-02-08T23:40:46.396692984Z" level=info msg="cleaning up dead shim" Feb 8 23:40:46.396952 env[1338]: time="2024-02-08T23:40:46.396859985Z" level=info msg="shim disconnected" id=cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a Feb 8 23:40:46.396952 env[1338]: time="2024-02-08T23:40:46.396895085Z" level=warning msg="cleaning up after shim disconnected" id=cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a namespace=k8s.io Feb 8 23:40:46.396952 env[1338]: time="2024-02-08T23:40:46.396904585Z" level=info msg="cleaning up dead shim" Feb 8 23:40:46.409359 env[1338]: time="2024-02-08T23:40:46.409319058Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4154 runtime=io.containerd.runc.v2\n" Feb 8 23:40:46.409672 env[1338]: time="2024-02-08T23:40:46.409640559Z" level=info msg="TearDown network for sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" successfully" Feb 8 23:40:46.409774 env[1338]: time="2024-02-08T23:40:46.409671960Z" level=info msg="StopPodSandbox for \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" returns successfully" Feb 8 23:40:46.411165 env[1338]: time="2024-02-08T23:40:46.411050468Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4153 runtime=io.containerd.runc.v2\n" Feb 8 23:40:46.411910 env[1338]: time="2024-02-08T23:40:46.411360269Z" level=info msg="TearDown network for sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" successfully" Feb 8 23:40:46.411910 env[1338]: time="2024-02-08T23:40:46.411389670Z" level=info msg="StopPodSandbox for \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" returns successfully" Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426200 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-config-path\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426270 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-net\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426315 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-kernel\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426344 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hubble-tls\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426372 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hostproc\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.426871 kubelet[2446]: I0208 23:40:46.426416 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-xtables-lock\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426444 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65591a0e-5ae4-4cb1-b827-2927911805da-cilium-config-path\") pod \"65591a0e-5ae4-4cb1-b827-2927911805da\" (UID: \"65591a0e-5ae4-4cb1-b827-2927911805da\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426487 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-run\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426515 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-bpf-maps\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426553 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-lib-modules\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426578 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-etc-cni-netd\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427492 kubelet[2446]: I0208 23:40:46.426607 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/815aab58-6d9b-44a5-a1ae-d621a0146a8e-clustermesh-secrets\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426606 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426646 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-cgroup\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426674 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c28ql\" (UniqueName: \"kubernetes.io/projected/65591a0e-5ae4-4cb1-b827-2927911805da-kube-api-access-c28ql\") pod \"65591a0e-5ae4-4cb1-b827-2927911805da\" (UID: \"65591a0e-5ae4-4cb1-b827-2927911805da\") " Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426721 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvqlv\" (UniqueName: \"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-kube-api-access-wvqlv\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426750 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cni-path\") pod \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\" (UID: \"815aab58-6d9b-44a5-a1ae-d621a0146a8e\") " Feb 8 23:40:46.427718 kubelet[2446]: I0208 23:40:46.426817 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-config-path\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.427938 kubelet[2446]: I0208 23:40:46.426842 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.428421 kubelet[2446]: I0208 23:40:46.428040 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.428421 kubelet[2446]: I0208 23:40:46.428093 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.428786 kubelet[2446]: I0208 23:40:46.428761 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.429416 kubelet[2446]: I0208 23:40:46.429372 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.429416 kubelet[2446]: I0208 23:40:46.429413 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.431873 kubelet[2446]: I0208 23:40:46.431475 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65591a0e-5ae4-4cb1-b827-2927911805da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65591a0e-5ae4-4cb1-b827-2927911805da" (UID: "65591a0e-5ae4-4cb1-b827-2927911805da"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:40:46.431873 kubelet[2446]: I0208 23:40:46.431518 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.431873 kubelet[2446]: I0208 23:40:46.431531 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.431873 kubelet[2446]: I0208 23:40:46.431545 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.433959 kubelet[2446]: I0208 23:40:46.433934 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:46.440252 kubelet[2446]: I0208 23:40:46.440229 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:46.444995 kubelet[2446]: I0208 23:40:46.444968 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65591a0e-5ae4-4cb1-b827-2927911805da-kube-api-access-c28ql" (OuterVolumeSpecName: "kube-api-access-c28ql") pod "65591a0e-5ae4-4cb1-b827-2927911805da" (UID: "65591a0e-5ae4-4cb1-b827-2927911805da"). InnerVolumeSpecName "kube-api-access-c28ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:46.445185 kubelet[2446]: I0208 23:40:46.445164 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815aab58-6d9b-44a5-a1ae-d621a0146a8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:40:46.447484 kubelet[2446]: I0208 23:40:46.447461 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-kube-api-access-wvqlv" (OuterVolumeSpecName: "kube-api-access-wvqlv") pod "815aab58-6d9b-44a5-a1ae-d621a0146a8e" (UID: "815aab58-6d9b-44a5-a1ae-d621a0146a8e"). InnerVolumeSpecName "kube-api-access-wvqlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.527932 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-cgroup\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.527973 2446 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c28ql\" (UniqueName: \"kubernetes.io/projected/65591a0e-5ae4-4cb1-b827-2927911805da-kube-api-access-c28ql\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.527989 2446 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wvqlv\" (UniqueName: \"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-kube-api-access-wvqlv\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.528002 2446 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cni-path\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.528017 2446 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.528030 2446 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hubble-tls\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.528044 2446 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-hostproc\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529170 kubelet[2446]: I0208 23:40:46.528058 2446 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-host-proc-sys-net\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528070 2446 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-xtables-lock\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528084 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65591a0e-5ae4-4cb1-b827-2927911805da-cilium-config-path\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528101 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-cilium-run\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528113 2446 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-bpf-maps\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528147 2446 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-lib-modules\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528162 2446 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/815aab58-6d9b-44a5-a1ae-d621a0146a8e-clustermesh-secrets\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.529696 kubelet[2446]: I0208 23:40:46.528177 2446 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/815aab58-6d9b-44a5-a1ae-d621a0146a8e-etc-cni-netd\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:46.594199 kubelet[2446]: I0208 23:40:46.594172 2446 scope.go:117] "RemoveContainer" containerID="4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6" Feb 8 23:40:46.599184 systemd[1]: Removed slice kubepods-burstable-pod815aab58_6d9b_44a5_a1ae_d621a0146a8e.slice. Feb 8 23:40:46.599322 systemd[1]: kubepods-burstable-pod815aab58_6d9b_44a5_a1ae_d621a0146a8e.slice: Consumed 7.561s CPU time. Feb 8 23:40:46.601574 env[1338]: time="2024-02-08T23:40:46.601202476Z" level=info msg="RemoveContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\"" Feb 8 23:40:46.605930 systemd[1]: Removed slice kubepods-besteffort-pod65591a0e_5ae4_4cb1_b827_2927911805da.slice. 
Feb 8 23:40:46.613728 env[1338]: time="2024-02-08T23:40:46.613690448Z" level=info msg="RemoveContainer for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" returns successfully" Feb 8 23:40:46.613932 kubelet[2446]: I0208 23:40:46.613908 2446 scope.go:117] "RemoveContainer" containerID="58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95" Feb 8 23:40:46.614897 env[1338]: time="2024-02-08T23:40:46.614851855Z" level=info msg="RemoveContainer for \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\"" Feb 8 23:40:46.625161 env[1338]: time="2024-02-08T23:40:46.624232910Z" level=info msg="RemoveContainer for \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\" returns successfully" Feb 8 23:40:46.625268 kubelet[2446]: I0208 23:40:46.624413 2446 scope.go:117] "RemoveContainer" containerID="d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5" Feb 8 23:40:46.626365 env[1338]: time="2024-02-08T23:40:46.626329722Z" level=info msg="RemoveContainer for \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\"" Feb 8 23:40:46.642891 env[1338]: time="2024-02-08T23:40:46.642850018Z" level=info msg="RemoveContainer for \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\" returns successfully" Feb 8 23:40:46.643065 kubelet[2446]: I0208 23:40:46.643044 2446 scope.go:117] "RemoveContainer" containerID="d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca" Feb 8 23:40:46.644322 env[1338]: time="2024-02-08T23:40:46.644247926Z" level=info msg="RemoveContainer for \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\"" Feb 8 23:40:46.654103 env[1338]: time="2024-02-08T23:40:46.654067584Z" level=info msg="RemoveContainer for \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\" returns successfully" Feb 8 23:40:46.654344 kubelet[2446]: I0208 23:40:46.654325 2446 scope.go:117] "RemoveContainer" containerID="38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d" Feb 8 23:40:46.655419 env[1338]: time="2024-02-08T23:40:46.655391691Z" level=info msg="RemoveContainer for \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\"" Feb 8 23:40:46.663741 env[1338]: time="2024-02-08T23:40:46.663706540Z" level=info msg="RemoveContainer for \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\" returns successfully" Feb 8 23:40:46.663891 kubelet[2446]: I0208 23:40:46.663873 2446 scope.go:117] "RemoveContainer" containerID="4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6" Feb 8 23:40:46.664190 env[1338]: time="2024-02-08T23:40:46.664100742Z" level=error msg="ContainerStatus for \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\": not found" Feb 8 23:40:46.664390 kubelet[2446]: E0208 23:40:46.664370 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\": not found" containerID="4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6" Feb 8 23:40:46.664486 kubelet[2446]: I0208 23:40:46.664472 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6"} err="failed to get 
container status \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\": rpc error: code = NotFound desc = an error occurred when try to find container \"4fe322466a555a5c0e9f8f2667c7409f2e533fe75028d31d43599fb0a15c4ce6\": not found" Feb 8 23:40:46.664539 kubelet[2446]: I0208 23:40:46.664490 2446 scope.go:117] "RemoveContainer" containerID="58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95" Feb 8 23:40:46.664737 env[1338]: time="2024-02-08T23:40:46.664689446Z" level=error msg="ContainerStatus for \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\": not found" Feb 8 23:40:46.664865 kubelet[2446]: E0208 23:40:46.664847 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\": not found" containerID="58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95" Feb 8 23:40:46.664940 kubelet[2446]: I0208 23:40:46.664896 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95"} err="failed to get container status \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\": rpc error: code = NotFound desc = an error occurred when try to find container \"58e327262d8cec095ac9fb4385c39bdc4bd7c1d9e3d973bb14d9933514194d95\": not found" Feb 8 23:40:46.664940 kubelet[2446]: I0208 23:40:46.664911 2446 scope.go:117] "RemoveContainer" containerID="d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5" Feb 8 23:40:46.665149 env[1338]: time="2024-02-08T23:40:46.665082048Z" level=error msg="ContainerStatus for \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\": not found" Feb 8 23:40:46.665319 kubelet[2446]: E0208 23:40:46.665293 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\": not found" containerID="d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5" Feb 8 23:40:46.665394 kubelet[2446]: I0208 23:40:46.665323 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5"} err="failed to get container status \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3543a40c44132241e0cdb73e09e47e9fff15468e017d3c309e10cd730e9f7c5\": not found" Feb 8 23:40:46.665394 kubelet[2446]: I0208 23:40:46.665336 2446 scope.go:117] "RemoveContainer" containerID="d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca" Feb 8 23:40:46.665564 env[1338]: time="2024-02-08T23:40:46.665505250Z" level=error msg="ContainerStatus for \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\": not found" Feb 8 23:40:46.665726 kubelet[2446]: E0208 23:40:46.665704 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\": not found" containerID="d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca" Feb 8 23:40:46.665806 kubelet[2446]: I0208 23:40:46.665731 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca"} err="failed to get container status \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"d58dbf6a53f04c99fe4d38db382abf777edf3620c3be2ec767e983e7f34c58ca\": not found" Feb 8 23:40:46.665806 kubelet[2446]: I0208 23:40:46.665753 2446 scope.go:117] "RemoveContainer" containerID="38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d" Feb 8 23:40:46.665953 env[1338]: time="2024-02-08T23:40:46.665903153Z" level=error msg="ContainerStatus for \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\": not found" Feb 8 23:40:46.666146 kubelet[2446]: E0208 23:40:46.666116 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\": not found" containerID="38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d" Feb 8 23:40:46.666260 kubelet[2446]: I0208 23:40:46.666243 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d"} err="failed to get container status \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\": rpc error: code = NotFound desc = an error occurred when try to find container \"38959049bf650532e75d2903a1a1cee01e534451bbbaf74e9f476e3683d9554d\": not found" Feb 8 23:40:46.666260 kubelet[2446]: I0208 23:40:46.666261 2446 scope.go:117] "RemoveContainer" containerID="01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c" Feb 8 23:40:46.667248 env[1338]: time="2024-02-08T23:40:46.667223260Z" level=info msg="RemoveContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\"" Feb 8 23:40:46.676564 env[1338]: time="2024-02-08T23:40:46.676533715Z" level=info msg="RemoveContainer for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" returns successfully" Feb 8 23:40:46.676754 kubelet[2446]: I0208 23:40:46.676736 2446 scope.go:117] "RemoveContainer" containerID="01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c" Feb 8 23:40:46.677004 env[1338]: time="2024-02-08T23:40:46.676959317Z" level=error msg="ContainerStatus for \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\": not found" Feb 8 23:40:46.677210 kubelet[2446]: E0208 23:40:46.677194 2446 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\": not found" containerID="01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c" Feb 8 23:40:46.677290 kubelet[2446]: I0208 23:40:46.677226 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c"} err="failed to get container status \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\": rpc error: code = NotFound desc = an error occurred when try to find container \"01e78822e8823b47bfccc4d433105bad5b4a3d29603d94d7eb7818855e74446c\": not found" Feb 8 23:40:47.203314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a-rootfs.mount: Deactivated successfully. Feb 8 23:40:47.203714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a-shm.mount: Deactivated successfully. Feb 8 23:40:47.203849 systemd[1]: var-lib-kubelet-pods-65591a0e\x2d5ae4\x2d4cb1\x2db827\x2d2927911805da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc28ql.mount: Deactivated successfully. Feb 8 23:40:47.203948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5-rootfs.mount: Deactivated successfully. Feb 8 23:40:47.204048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5-shm.mount: Deactivated successfully. Feb 8 23:40:47.204158 systemd[1]: var-lib-kubelet-pods-815aab58\x2d6d9b\x2d44a5\x2da1ae\x2dd621a0146a8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvqlv.mount: Deactivated successfully. Feb 8 23:40:47.204280 systemd[1]: var-lib-kubelet-pods-815aab58\x2d6d9b\x2d44a5\x2da1ae\x2dd621a0146a8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:40:47.204384 systemd[1]: var-lib-kubelet-pods-815aab58\x2d6d9b\x2d44a5\x2da1ae\x2dd621a0146a8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:40:48.058999 kubelet[2446]: I0208 23:40:48.058961 2446 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="65591a0e-5ae4-4cb1-b827-2927911805da" path="/var/lib/kubelet/pods/65591a0e-5ae4-4cb1-b827-2927911805da/volumes" Feb 8 23:40:48.059544 kubelet[2446]: I0208 23:40:48.059520 2446 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" path="/var/lib/kubelet/pods/815aab58-6d9b-44a5-a1ae-d621a0146a8e/volumes" Feb 8 23:40:48.193020 kubelet[2446]: E0208 23:40:48.192986 2446 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:40:48.246758 sshd[4022]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:48.249842 systemd[1]: sshd@21-10.200.8.36:22-10.200.12.6:58480.service: Deactivated successfully. Feb 8 23:40:48.250751 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:40:48.251449 systemd-logind[1327]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:40:48.252374 systemd-logind[1327]: Removed session 24. 
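The error-level ContainerStatus entries above are an expected race rather than a new fault: the kubelet removes each container, then a follow-up status query for the cached ID comes back as gRPC NotFound ("an error occurred when try to find container ... not found" is containerd's literal message). NotFound after a successful RemoveContainer simply means "already gone". A sketch of that handling, with client setup elided; `rt` is the same assumed CRI client type as in the earlier sketch:

    // Sketch of the expected handling, not kubelet source.
    package criutil

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // containerGone reports whether the runtime no longer knows the ID,
    // swallowing the NotFound seen in the entries above.
    func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
        _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            return true, nil
        }
        return false, err
    }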
Feb 8 23:40:48.351888 systemd[1]: Started sshd@22-10.200.8.36:22-10.200.12.6:35160.service. Feb 8 23:40:48.971056 sshd[4186]: Accepted publickey for core from 10.200.12.6 port 35160 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:48.972515 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:48.977614 systemd[1]: Started session-25.scope. Feb 8 23:40:48.978266 systemd-logind[1327]: New session 25 of user core. Feb 8 23:40:49.801884 kubelet[2446]: I0208 23:40:49.801836 2446 topology_manager.go:215] "Topology Admit Handler" podUID="7da0a698-0e4b-4943-9033-6de571527905" podNamespace="kube-system" podName="cilium-ht6bj" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801912 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="apply-sysctl-overwrites" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801926 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65591a0e-5ae4-4cb1-b827-2927911805da" containerName="cilium-operator" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801936 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="mount-cgroup" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801944 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="mount-bpf-fs" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801952 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="clean-cilium-state" Feb 8 23:40:49.802419 kubelet[2446]: E0208 23:40:49.801960 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="cilium-agent" Feb 8 23:40:49.802419 kubelet[2446]: I0208 23:40:49.801988 2446 memory_manager.go:346] "RemoveStaleState removing state" podUID="815aab58-6d9b-44a5-a1ae-d621a0146a8e" containerName="cilium-agent" Feb 8 23:40:49.802419 kubelet[2446]: I0208 23:40:49.802000 2446 memory_manager.go:346] "RemoveStaleState removing state" podUID="65591a0e-5ae4-4cb1-b827-2927911805da" containerName="cilium-operator" Feb 8 23:40:49.808792 systemd[1]: Created slice kubepods-burstable-pod7da0a698_0e4b_4943_9033_6de571527905.slice. 
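Session 25 opens and the replacement pod cilium-ht6bj is admitted; the RemoveStaleState lines show the CPU and memory managers purging per-container state left by the two pods deleted above, before the new slice kubepods-burstable-pod7da0a698_0e4b_4943_9033_6de571527905.slice is created. The slice name is derivable from the pod's QoS class and UID; a sketch of the observable naming only (the real logic lives in the kubelet's cgroup manager):

    // Sketch: QoS class plus the pod UID with dashes flattened to
    // underscores yields the slice name systemd just created.
    package main

    import (
        "fmt"
        "strings"
    )

    func sliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("burstable", "7da0a698-0e4b-4943-9033-6de571527905"))
        // kubepods-burstable-pod7da0a698_0e4b_4943_9033_6de571527905.slice
    }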
Feb 8 23:40:49.849356 kubelet[2446]: I0208 23:40:49.849298 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-run\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849356 kubelet[2446]: I0208 23:40:49.849364 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-hostproc\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849391 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-bpf-maps\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849430 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-cilium-ipsec-secrets\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849457 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-hubble-tls\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849496 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-cgroup\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849525 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-lib-modules\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849574 kubelet[2446]: I0208 23:40:49.849555 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dpj\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-kube-api-access-59dpj\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849594 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cni-path\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849623 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-xtables-lock\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849664 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-kernel\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849692 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-etc-cni-netd\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849735 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-clustermesh-secrets\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.849825 kubelet[2446]: I0208 23:40:49.849765 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7da0a698-0e4b-4943-9033-6de571527905-cilium-config-path\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.850070 kubelet[2446]: I0208 23:40:49.849795 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-net\") pod \"cilium-ht6bj\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " pod="kube-system/cilium-ht6bj" Feb 8 23:40:49.895423 sshd[4186]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:49.898980 systemd[1]: sshd@22-10.200.8.36:22-10.200.12.6:35160.service: Deactivated successfully. Feb 8 23:40:49.900019 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:40:49.900753 systemd-logind[1327]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:40:49.901698 systemd-logind[1327]: Removed session 25. Feb 8 23:40:50.000862 systemd[1]: Started sshd@23-10.200.8.36:22-10.200.12.6:35164.service. Feb 8 23:40:50.113804 env[1338]: time="2024-02-08T23:40:50.113666338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ht6bj,Uid:7da0a698-0e4b-4943-9033-6de571527905,Namespace:kube-system,Attempt:0,}" Feb 8 23:40:50.158072 env[1338]: time="2024-02-08T23:40:50.158003991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:50.158245 env[1338]: time="2024-02-08T23:40:50.158040491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:50.158245 env[1338]: time="2024-02-08T23:40:50.158053991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:50.158551 env[1338]: time="2024-02-08T23:40:50.158497494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8 pid=4211 runtime=io.containerd.runc.v2 Feb 8 23:40:50.177912 systemd[1]: Started cri-containerd-8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8.scope. Feb 8 23:40:50.203308 env[1338]: time="2024-02-08T23:40:50.203251549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ht6bj,Uid:7da0a698-0e4b-4943-9033-6de571527905,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\"" Feb 8 23:40:50.208016 env[1338]: time="2024-02-08T23:40:50.207258572Z" level=info msg="CreateContainer within sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:40:50.244023 env[1338]: time="2024-02-08T23:40:50.243980181Z" level=info msg="CreateContainer within sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\"" Feb 8 23:40:50.246414 env[1338]: time="2024-02-08T23:40:50.244637885Z" level=info msg="StartContainer for \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\"" Feb 8 23:40:50.261520 systemd[1]: Started cri-containerd-d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240.scope. Feb 8 23:40:50.273421 systemd[1]: cri-containerd-d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240.scope: Deactivated successfully. 
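The sandbox for cilium-ht6bj comes up normally (RunPodSandbox returns ID 8b68594bb754...), the mount-cgroup init container is created and started, and then its scope is reported deactivated almost immediately, which foreshadows the failure in the next entries: the container died before its shim could even write an init PID. A minimal sketch of the logged RunPodSandbox call, assuming the same CRI client as earlier; the real kubelet passes a much fuller PodSandboxConfig (log directory, Linux security context, DNS config, and so on):

    // Sketch of the RunPodSandbox call as logged; illustrative only.
    package criutil

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func runCiliumSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-ht6bj",
                    Uid:       "7da0a698-0e4b-4943-9033-6de571527905",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            return "", err
        }
        return resp.PodSandboxId, nil // 8b68594bb754... in the log above
    }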
Feb 8 23:40:50.335603 env[1338]: time="2024-02-08T23:40:50.335546103Z" level=info msg="shim disconnected" id=d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240 Feb 8 23:40:50.335603 env[1338]: time="2024-02-08T23:40:50.335604204Z" level=warning msg="cleaning up after shim disconnected" id=d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240 namespace=k8s.io Feb 8 23:40:50.335921 env[1338]: time="2024-02-08T23:40:50.335615004Z" level=info msg="cleaning up dead shim" Feb 8 23:40:50.344411 env[1338]: time="2024-02-08T23:40:50.344365054Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4271 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:40:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:40:50.344734 env[1338]: time="2024-02-08T23:40:50.344625955Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 8 23:40:50.345230 env[1338]: time="2024-02-08T23:40:50.345182558Z" level=error msg="Failed to pipe stdout of container \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\"" error="reading from a closed fifo" Feb 8 23:40:50.345319 env[1338]: time="2024-02-08T23:40:50.345272859Z" level=error msg="Failed to pipe stderr of container \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\"" error="reading from a closed fifo" Feb 8 23:40:50.349283 env[1338]: time="2024-02-08T23:40:50.349231681Z" level=error msg="StartContainer for \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:40:50.349549 kubelet[2446]: E0208 23:40:50.349526 2446 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240" Feb 8 23:40:50.349693 kubelet[2446]: E0208 23:40:50.349675 2446 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:40:50.349693 kubelet[2446]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:40:50.349693 kubelet[2446]: rm /hostbin/cilium-mount Feb 8 23:40:50.349836 kubelet[2446]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-59dpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-ht6bj_kube-system(7da0a698-0e4b-4943-9033-6de571527905): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:40:50.349836 kubelet[2446]: E0208 23:40:50.349733 2446 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ht6bj" podUID="7da0a698-0e4b-4943-9033-6de571527905" Feb 8 23:40:50.623537 sshd[4201]: Accepted publickey for core from 10.200.12.6 port 35164 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:50.625971 env[1338]: time="2024-02-08T23:40:50.625929559Z" level=info msg="CreateContainer within sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:40:50.627554 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:50.640081 systemd[1]: Started session-26.scope. Feb 8 23:40:50.641364 systemd-logind[1327]: New session 26 of user core. Feb 8 23:40:50.674548 env[1338]: time="2024-02-08T23:40:50.674512036Z" level=info msg="CreateContainer within sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\"" Feb 8 23:40:50.675238 env[1338]: time="2024-02-08T23:40:50.675206240Z" level=info msg="StartContainer for \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\"" Feb 8 23:40:50.693153 systemd[1]: Started cri-containerd-c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161.scope. 
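This is the actual failure: runc aborts with "write /proc/self/attr/keycreate: invalid argument" while applying the SELinuxOptions from the init container spec (Type:spc_t, Level:s0). Before exec, runc writes the requested label into the calling thread's /proc attr files, and the kernel on this node rejects it with EINVAL, a symptom commonly seen when the node's SELinux state cannot accept the requested context. The "failed to read init pid file" and pipe errors above are downstream symptoms of the same aborted init, not independent faults. One hedged way to reproduce the failing write outside of runc; the full context string is an assumption built around the spec's spc_t type, so adjust for the node, and run as root:

    // Sketch: reproduce the failing syscall that aborted container init.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        label := "system_u:system_r:spc_t:s0" // assumed context for type spc_t
        // runc writes the requested label to the thread's attr files
        // before exec; on this node the kernel rejects it with EINVAL.
        err := os.WriteFile("/proc/thread-self/attr/keycreate", []byte(label), 0)
        fmt.Println("write keycreate:", err)
    }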
Feb 8 23:40:50.707771 systemd[1]: cri-containerd-c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161.scope: Deactivated successfully. Feb 8 23:40:50.729874 env[1338]: time="2024-02-08T23:40:50.729814651Z" level=info msg="shim disconnected" id=c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161 Feb 8 23:40:50.729874 env[1338]: time="2024-02-08T23:40:50.729874752Z" level=warning msg="cleaning up after shim disconnected" id=c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161 namespace=k8s.io Feb 8 23:40:50.730189 env[1338]: time="2024-02-08T23:40:50.729886752Z" level=info msg="cleaning up dead shim" Feb 8 23:40:50.738718 env[1338]: time="2024-02-08T23:40:50.738680302Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4311 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:40:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:40:50.738981 env[1338]: time="2024-02-08T23:40:50.738919303Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 8 23:40:50.739245 env[1338]: time="2024-02-08T23:40:50.739210005Z" level=error msg="Failed to pipe stderr of container \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\"" error="reading from a closed fifo" Feb 8 23:40:50.744313 env[1338]: time="2024-02-08T23:40:50.744270734Z" level=error msg="Failed to pipe stdout of container \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\"" error="reading from a closed fifo" Feb 8 23:40:50.750793 env[1338]: time="2024-02-08T23:40:50.750750271Z" level=error msg="StartContainer for \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:40:50.751028 kubelet[2446]: E0208 23:40:50.751007 2446 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161" Feb 8 23:40:50.751205 kubelet[2446]: E0208 23:40:50.751185 2446 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:40:50.751205 kubelet[2446]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:40:50.751205 kubelet[2446]: rm /hostbin/cilium-mount Feb 8 23:40:50.751205 kubelet[2446]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-59dpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-ht6bj_kube-system(7da0a698-0e4b-4943-9033-6de571527905): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:40:50.751497 kubelet[2446]: E0208 23:40:50.751243 2446 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ht6bj" podUID="7da0a698-0e4b-4943-9033-6de571527905" Feb 8 23:40:51.133566 sshd[4201]: pam_unix(sshd:session): session closed for user core Feb 8 23:40:51.136934 systemd[1]: sshd@23-10.200.8.36:22-10.200.12.6:35164.service: Deactivated successfully. Feb 8 23:40:51.138079 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:40:51.138954 systemd-logind[1327]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:40:51.140014 systemd-logind[1327]: Removed session 26. Feb 8 23:40:51.238548 systemd[1]: Started sshd@24-10.200.8.36:22-10.200.12.6:35166.service. 
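The kubelet immediately retries the init container, visible as CreateContainer with Attempt:1, and the second start fails with the identical keycreate error, so the pod worker records "Error syncing pod, skipping" and backs off; the entries that follow show it removing the failed containers and stopping the sandbox. A sketch of the retry as it appears at the CRI level, with only the attempt counter distinguishing it from the first try; sandboxID, sandboxCfg, and the command are placeholders carried over from the log:

    // Sketch of the retry at the CRI level, not kubelet source.
    package criutil

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func retryMountCgroup(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
        resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 1},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
                Command:  []string{"sh", "-ec", "cp /usr/bin/cilium-mount /hostbin/cilium-mount"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: resp.ContainerId})
        return resp.ContainerId, err // fails again with the keycreate EINVAL
    }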
Feb 8 23:40:51.622013 kubelet[2446]: I0208 23:40:51.621980 2446 scope.go:117] "RemoveContainer" containerID="d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240" Feb 8 23:40:51.622780 env[1338]: time="2024-02-08T23:40:51.622737124Z" level=info msg="StopPodSandbox for \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\"" Feb 8 23:40:51.623164 env[1338]: time="2024-02-08T23:40:51.622804624Z" level=info msg="Container to stop \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:51.623164 env[1338]: time="2024-02-08T23:40:51.622824325Z" level=info msg="Container to stop \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:40:51.625001 env[1338]: time="2024-02-08T23:40:51.624971537Z" level=info msg="RemoveContainer for \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\"" Feb 8 23:40:51.627060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8-shm.mount: Deactivated successfully. Feb 8 23:40:51.636131 env[1338]: time="2024-02-08T23:40:51.636083500Z" level=info msg="RemoveContainer for \"d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240\" returns successfully" Feb 8 23:40:51.639608 systemd[1]: cri-containerd-8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8.scope: Deactivated successfully. Feb 8 23:40:51.676852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8-rootfs.mount: Deactivated successfully. Feb 8 23:40:51.712768 env[1338]: time="2024-02-08T23:40:51.712718434Z" level=info msg="shim disconnected" id=8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8 Feb 8 23:40:51.712984 env[1338]: time="2024-02-08T23:40:51.712953236Z" level=warning msg="cleaning up after shim disconnected" id=8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8 namespace=k8s.io Feb 8 23:40:51.712984 env[1338]: time="2024-02-08T23:40:51.712975736Z" level=info msg="cleaning up dead shim" Feb 8 23:40:51.721591 env[1338]: time="2024-02-08T23:40:51.721554785Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4351 runtime=io.containerd.runc.v2\n" Feb 8 23:40:51.721857 env[1338]: time="2024-02-08T23:40:51.721826586Z" level=info msg="TearDown network for sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" successfully" Feb 8 23:40:51.721949 env[1338]: time="2024-02-08T23:40:51.721855586Z" level=info msg="StopPodSandbox for \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" returns successfully" Feb 8 23:40:51.860083 sshd[4332]: Accepted publickey for core from 10.200.12.6 port 35166 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:40:51.861474 sshd[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862193 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dpj\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-kube-api-access-59dpj\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862241 
2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-cgroup\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862271 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-hostproc\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862297 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-net\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862328 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7da0a698-0e4b-4943-9033-6de571527905-cilium-config-path\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862353 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-etc-cni-netd\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862385 kubelet[2446]: I0208 23:40:51.862380 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-hubble-tls\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862409 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-clustermesh-secrets\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862437 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-cilium-ipsec-secrets\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862460 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-lib-modules\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862481 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-bpf-maps\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862506 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-kernel\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862549 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-run\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862580 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cni-path\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862607 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-xtables-lock\") pod \"7da0a698-0e4b-4943-9033-6de571527905\" (UID: \"7da0a698-0e4b-4943-9033-6de571527905\") " Feb 8 23:40:51.862749 kubelet[2446]: I0208 23:40:51.862681 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.863618 kubelet[2446]: I0208 23:40:51.863593 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.863784 kubelet[2446]: I0208 23:40:51.863766 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-hostproc" (OuterVolumeSpecName: "hostproc") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.863911 kubelet[2446]: I0208 23:40:51.863893 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.866898 kubelet[2446]: I0208 23:40:51.866874 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.867317 kubelet[2446]: I0208 23:40:51.867294 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.867700 kubelet[2446]: I0208 23:40:51.867678 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.867835 kubelet[2446]: I0208 23:40:51.867817 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.867955 kubelet[2446]: I0208 23:40:51.867937 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.868035 systemd[1]: Started session-27.scope. Feb 8 23:40:51.868245 kubelet[2446]: I0208 23:40:51.868226 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cni-path" (OuterVolumeSpecName: "cni-path") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:51.869879 systemd-logind[1327]: New session 27 of user core. Feb 8 23:40:51.875650 kubelet[2446]: I0208 23:40:51.875550 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da0a698-0e4b-4943-9033-6de571527905-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:40:51.876412 systemd[1]: var-lib-kubelet-pods-7da0a698\x2d0e4b\x2d4943\x2d9033\x2d6de571527905-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59dpj.mount: Deactivated successfully. Feb 8 23:40:51.876529 systemd[1]: var-lib-kubelet-pods-7da0a698\x2d0e4b\x2d4943\x2d9033\x2d6de571527905-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:40:51.881851 kubelet[2446]: I0208 23:40:51.881821 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:40:51.882096 kubelet[2446]: I0208 23:40:51.882075 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-kube-api-access-59dpj" (OuterVolumeSpecName: "kube-api-access-59dpj") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "kube-api-access-59dpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:51.886680 systemd[1]: var-lib-kubelet-pods-7da0a698\x2d0e4b\x2d4943\x2d9033\x2d6de571527905-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:40:51.887409 kubelet[2446]: I0208 23:40:51.887213 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:40:51.887409 kubelet[2446]: I0208 23:40:51.887252 2446 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7da0a698-0e4b-4943-9033-6de571527905" (UID: "7da0a698-0e4b-4943-9033-6de571527905"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:51.962995 systemd[1]: var-lib-kubelet-pods-7da0a698\x2d0e4b\x2d4943\x2d9033\x2d6de571527905-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:40:51.963826 kubelet[2446]: I0208 23:40:51.963795 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964046 kubelet[2446]: I0208 23:40:51.963980 2446 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-lib-modules\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964046 kubelet[2446]: I0208 23:40:51.964027 2446 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-bpf-maps\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964049 2446 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964071 2446 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-xtables-lock\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964089 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-run\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964107 2446 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cni-path\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964168 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-cilium-cgroup\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964192 2446 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-59dpj\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-kube-api-access-59dpj\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964214 kubelet[2446]: I0208 23:40:51.964215 2446 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-host-proc-sys-net\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964550 kubelet[2446]: I0208 23:40:51.964233 2446 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-hostproc\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964550 kubelet[2446]: I0208 23:40:51.964251 2446 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7da0a698-0e4b-4943-9033-6de571527905-etc-cni-netd\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964550 kubelet[2446]: I0208 23:40:51.964291 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/7da0a698-0e4b-4943-9033-6de571527905-cilium-config-path\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964550 kubelet[2446]: I0208 23:40:51.964312 2446 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7da0a698-0e4b-4943-9033-6de571527905-hubble-tls\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:51.964550 kubelet[2446]: I0208 23:40:51.964331 2446 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7da0a698-0e4b-4943-9033-6de571527905-clustermesh-secrets\") on node \"ci-3510.3.2-a-baa4ff5fd1\" DevicePath \"\"" Feb 8 23:40:52.068118 systemd[1]: Removed slice kubepods-burstable-pod7da0a698_0e4b_4943_9033_6de571527905.slice. Feb 8 23:40:52.625972 kubelet[2446]: I0208 23:40:52.625940 2446 scope.go:117] "RemoveContainer" containerID="c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161" Feb 8 23:40:52.629398 env[1338]: time="2024-02-08T23:40:52.629345715Z" level=info msg="RemoveContainer for \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\"" Feb 8 23:40:52.639825 env[1338]: time="2024-02-08T23:40:52.639782574Z" level=info msg="RemoveContainer for \"c8a1d4e74e1d4463cf704745917eae6427302049831c157289ae56169c537161\" returns successfully" Feb 8 23:40:52.663024 kubelet[2446]: I0208 23:40:52.662998 2446 topology_manager.go:215] "Topology Admit Handler" podUID="f6cbd4f0-cdee-4788-8900-ebd314367988" podNamespace="kube-system" podName="cilium-q4bwn" Feb 8 23:40:52.663253 kubelet[2446]: E0208 23:40:52.663235 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7da0a698-0e4b-4943-9033-6de571527905" containerName="mount-cgroup" Feb 8 23:40:52.663377 kubelet[2446]: E0208 23:40:52.663365 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7da0a698-0e4b-4943-9033-6de571527905" containerName="mount-cgroup" Feb 8 23:40:52.663501 kubelet[2446]: I0208 23:40:52.663489 2446 memory_manager.go:346] "RemoveStaleState removing state" podUID="7da0a698-0e4b-4943-9033-6de571527905" containerName="mount-cgroup" Feb 8 23:40:52.663647 kubelet[2446]: I0208 23:40:52.663633 2446 memory_manager.go:346] "RemoveStaleState removing state" podUID="7da0a698-0e4b-4943-9033-6de571527905" containerName="mount-cgroup" Feb 8 23:40:52.671029 systemd[1]: Created slice kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice. 
Feb 8 23:40:52.768639 kubelet[2446]: I0208 23:40:52.768567 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-host-proc-sys-kernel\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.768914 kubelet[2446]: I0208 23:40:52.768886 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-xtables-lock\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769035 kubelet[2446]: I0208 23:40:52.768938 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6cbd4f0-cdee-4788-8900-ebd314367988-clustermesh-secrets\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769035 kubelet[2446]: I0208 23:40:52.768966 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-host-proc-sys-net\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769035 kubelet[2446]: I0208 23:40:52.768994 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvsr\" (UniqueName: \"kubernetes.io/projected/f6cbd4f0-cdee-4788-8900-ebd314367988-kube-api-access-6kvsr\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769035 kubelet[2446]: I0208 23:40:52.769023 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-hostproc\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769050 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-bpf-maps\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769079 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-lib-modules\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769110 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-cilium-run\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769167 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-cilium-cgroup\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769196 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-cni-path\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769240 kubelet[2446]: I0208 23:40:52.769222 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6cbd4f0-cdee-4788-8900-ebd314367988-etc-cni-netd\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769507 kubelet[2446]: I0208 23:40:52.769250 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6cbd4f0-cdee-4788-8900-ebd314367988-cilium-config-path\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769507 kubelet[2446]: I0208 23:40:52.769279 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6cbd4f0-cdee-4788-8900-ebd314367988-cilium-ipsec-secrets\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.769507 kubelet[2446]: I0208 23:40:52.769309 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6cbd4f0-cdee-4788-8900-ebd314367988-hubble-tls\") pod \"cilium-q4bwn\" (UID: \"f6cbd4f0-cdee-4788-8900-ebd314367988\") " pod="kube-system/cilium-q4bwn" Feb 8 23:40:52.856137 kubelet[2446]: I0208 23:40:52.856086 2446 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-baa4ff5fd1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-08T23:40:52Z","lastTransitionTime":"2024-02-08T23:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 8 23:40:52.980688 env[1338]: time="2024-02-08T23:40:52.979558791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4bwn,Uid:f6cbd4f0-cdee-4788-8900-ebd314367988,Namespace:kube-system,Attempt:0,}" Feb 8 23:40:53.034052 env[1338]: time="2024-02-08T23:40:53.033975697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:53.034052 env[1338]: time="2024-02-08T23:40:53.034018597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:53.034052 env[1338]: time="2024-02-08T23:40:53.034033397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:53.034533 env[1338]: time="2024-02-08T23:40:53.034479399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9 pid=4388 runtime=io.containerd.runc.v2 Feb 8 23:40:53.051028 systemd[1]: Started cri-containerd-00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9.scope. Feb 8 23:40:53.057584 systemd[1]: run-containerd-runc-k8s.io-00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9-runc.hFeBb6.mount: Deactivated successfully. Feb 8 23:40:53.085244 env[1338]: time="2024-02-08T23:40:53.085109684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q4bwn,Uid:f6cbd4f0-cdee-4788-8900-ebd314367988,Namespace:kube-system,Attempt:0,} returns sandbox id \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\"" Feb 8 23:40:53.088828 env[1338]: time="2024-02-08T23:40:53.088768304Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:40:53.123879 env[1338]: time="2024-02-08T23:40:53.123848701Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666\"" Feb 8 23:40:53.125958 env[1338]: time="2024-02-08T23:40:53.124559705Z" level=info msg="StartContainer for \"76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666\"" Feb 8 23:40:53.141204 systemd[1]: Started cri-containerd-76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666.scope. Feb 8 23:40:53.175611 env[1338]: time="2024-02-08T23:40:53.175561391Z" level=info msg="StartContainer for \"76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666\" returns successfully" Feb 8 23:40:53.183796 systemd[1]: cri-containerd-76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666.scope: Deactivated successfully. 
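The mount-cgroup init container of cilium-q4bwn runs to completion almost immediately: its scope is started at 23:40:53.141204 and deactivated at 23:40:53.183796. A quick sketch that computes the runtime from those two systemd timestamps (values copied verbatim from the entries above; the year is assumed from the containerd timestamps in the same entries):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// "Started"/"Deactivated" scope timestamps from the surrounding log entries.
	const layout = "2006 Jan 2 15:04:05.000000"
	started, err := time.Parse(layout, "2024 Feb 8 23:40:53.141204")
	if err != nil {
		panic(err)
	}
	stopped, err := time.Parse(layout, "2024 Feb 8 23:40:53.183796")
	if err != nil {
		panic(err)
	}
	fmt.Println("mount-cgroup ran for", stopped.Sub(started)) // 42.592ms
}
```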
Feb 8 23:40:53.194078 kubelet[2446]: E0208 23:40:53.194052 2446 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:40:53.231903 env[1338]: time="2024-02-08T23:40:53.230978402Z" level=info msg="shim disconnected" id=76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666 Feb 8 23:40:53.231903 env[1338]: time="2024-02-08T23:40:53.231027103Z" level=warning msg="cleaning up after shim disconnected" id=76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666 namespace=k8s.io Feb 8 23:40:53.231903 env[1338]: time="2024-02-08T23:40:53.231038403Z" level=info msg="cleaning up dead shim" Feb 8 23:40:53.239022 env[1338]: time="2024-02-08T23:40:53.238986847Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4471 runtime=io.containerd.runc.v2\n" Feb 8 23:40:53.448128 kubelet[2446]: W0208 23:40:53.448056 2446 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7da0a698_0e4b_4943_9033_6de571527905.slice/cri-containerd-d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240.scope WatchSource:0}: container "d9d4300c32350beecfb6102c873e464addb8c33a185d6d9752c32165e98b5240" in namespace "k8s.io": not found Feb 8 23:40:53.632552 env[1338]: time="2024-02-08T23:40:53.632505356Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:40:53.663657 env[1338]: time="2024-02-08T23:40:53.663610931Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7\"" Feb 8 23:40:53.665862 env[1338]: time="2024-02-08T23:40:53.664301935Z" level=info msg="StartContainer for \"9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7\"" Feb 8 23:40:53.681305 systemd[1]: Started cri-containerd-9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7.scope. Feb 8 23:40:53.715737 env[1338]: time="2024-02-08T23:40:53.715694423Z" level=info msg="StartContainer for \"9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7\" returns successfully" Feb 8 23:40:53.717006 systemd[1]: cri-containerd-9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7.scope: Deactivated successfully. Feb 8 23:40:53.747558 env[1338]: time="2024-02-08T23:40:53.747508902Z" level=info msg="shim disconnected" id=9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7 Feb 8 23:40:53.747796 env[1338]: time="2024-02-08T23:40:53.747559602Z" level=warning msg="cleaning up after shim disconnected" id=9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7 namespace=k8s.io Feb 8 23:40:53.747796 env[1338]: time="2024-02-08T23:40:53.747571002Z" level=info msg="cleaning up dead shim" Feb 8 23:40:53.754793 env[1338]: time="2024-02-08T23:40:53.754734942Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4530 runtime=io.containerd.runc.v2\n" Feb 8 23:40:54.026110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount297397345.mount: Deactivated successfully. 
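Most of the env[1338] lines in this section are containerd's logfmt-style output. When grepping a boot like this, a rough extractor for the level and msg fields can help; the patterns below cover only the key="value" shape seen here, including backslash-escaped quotes, and are not a general logfmt parser.

```go
package main

import (
	"fmt"
	"regexp"
)

// These patterns cover only the simple containerd logfmt shape seen in this log.
var (
	levelRe = regexp.MustCompile(`level=(\w+)`)
	msgRe   = regexp.MustCompile(`msg="((?:[^"\\]|\\.)*)"`)
)

func main() {
	line := `time="2024-02-08T23:40:53.231038403Z" level=info msg="cleaning up dead shim"`
	if m := levelRe.FindStringSubmatch(line); m != nil {
		fmt.Println("level:", m[1])
	}
	if m := msgRe.FindStringSubmatch(line); m != nil {
		fmt.Println("msg:  ", m[1])
	}
}
```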
Feb 8 23:40:54.059664 kubelet[2446]: I0208 23:40:54.059606 2446 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7da0a698-0e4b-4943-9033-6de571527905" path="/var/lib/kubelet/pods/7da0a698-0e4b-4943-9033-6de571527905/volumes" Feb 8 23:40:54.642438 env[1338]: time="2024-02-08T23:40:54.642302906Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:40:54.679323 env[1338]: time="2024-02-08T23:40:54.679276412Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913\"" Feb 8 23:40:54.679933 env[1338]: time="2024-02-08T23:40:54.679888216Z" level=info msg="StartContainer for \"3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913\"" Feb 8 23:40:54.706894 systemd[1]: Started cri-containerd-3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913.scope. Feb 8 23:40:54.743375 systemd[1]: cri-containerd-3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913.scope: Deactivated successfully. Feb 8 23:40:54.748749 env[1338]: time="2024-02-08T23:40:54.748707600Z" level=info msg="StartContainer for \"3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913\" returns successfully" Feb 8 23:40:54.779942 env[1338]: time="2024-02-08T23:40:54.779887974Z" level=info msg="shim disconnected" id=3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913 Feb 8 23:40:54.779942 env[1338]: time="2024-02-08T23:40:54.779942674Z" level=warning msg="cleaning up after shim disconnected" id=3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913 namespace=k8s.io Feb 8 23:40:54.780274 env[1338]: time="2024-02-08T23:40:54.779956374Z" level=info msg="cleaning up dead shim" Feb 8 23:40:54.793278 env[1338]: time="2024-02-08T23:40:54.793238949Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4587 runtime=io.containerd.runc.v2\n" Feb 8 23:40:55.026628 systemd[1]: run-containerd-runc-k8s.io-3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913-runc.Vy4LoD.mount: Deactivated successfully. Feb 8 23:40:55.027066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913-rootfs.mount: Deactivated successfully. Feb 8 23:40:55.647287 env[1338]: time="2024-02-08T23:40:55.647234099Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:40:55.682635 env[1338]: time="2024-02-08T23:40:55.682591496Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1\"" Feb 8 23:40:55.683349 env[1338]: time="2024-02-08T23:40:55.683308900Z" level=info msg="StartContainer for \"50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1\"" Feb 8 23:40:55.711304 systemd[1]: Started cri-containerd-50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1.scope. 
Feb 8 23:40:55.741474 systemd[1]: cri-containerd-50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1.scope: Deactivated successfully. Feb 8 23:40:55.744752 env[1338]: time="2024-02-08T23:40:55.744612040Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice/cri-containerd-50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1.scope/memory.events\": no such file or directory" Feb 8 23:40:55.748455 env[1338]: time="2024-02-08T23:40:55.748416262Z" level=info msg="StartContainer for \"50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1\" returns successfully" Feb 8 23:40:55.784647 env[1338]: time="2024-02-08T23:40:55.784598163Z" level=info msg="shim disconnected" id=50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1 Feb 8 23:40:55.784647 env[1338]: time="2024-02-08T23:40:55.784645763Z" level=warning msg="cleaning up after shim disconnected" id=50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1 namespace=k8s.io Feb 8 23:40:55.784950 env[1338]: time="2024-02-08T23:40:55.784656963Z" level=info msg="cleaning up dead shim" Feb 8 23:40:55.797162 env[1338]: time="2024-02-08T23:40:55.797100332Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4650 runtime=io.containerd.runc.v2\n" Feb 8 23:40:56.027019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1-rootfs.mount: Deactivated successfully. Feb 8 23:40:56.558574 kubelet[2446]: W0208 23:40:56.558497 2446 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice/cri-containerd-76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666.scope WatchSource:0}: task 76e3a2bd646c225f04efd789603cc7d4e02958e0a4a5c4d07a22892fcd91b666 not found: not found Feb 8 23:40:56.653490 env[1338]: time="2024-02-08T23:40:56.653208170Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:40:56.690645 env[1338]: time="2024-02-08T23:40:56.690545177Z" level=info msg="CreateContainer within sandbox \"00393a85298d0f2af0c39a0f8cb70136de74ede9cf9afa19cd3c0a9883f1f7b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac\"" Feb 8 23:40:56.691525 env[1338]: time="2024-02-08T23:40:56.691485882Z" level=info msg="StartContainer for \"225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac\"" Feb 8 23:40:56.719234 systemd[1]: Started cri-containerd-225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac.scope. Feb 8 23:40:56.752675 env[1338]: time="2024-02-08T23:40:56.752613920Z" level=info msg="StartContainer for \"225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac\" returns successfully" Feb 8 23:40:57.026386 systemd[1]: run-containerd-runc-k8s.io-225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac-runc.rdRqqs.mount: Deactivated successfully. 
Feb 8 23:40:57.131156 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:40:58.391390 systemd[1]: run-containerd-runc-k8s.io-225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac-runc.SPf8sY.mount: Deactivated successfully. Feb 8 23:40:59.666074 kubelet[2446]: W0208 23:40:59.666033 2446 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice/cri-containerd-9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7.scope WatchSource:0}: task 9e89b707ae310828656903f778175099f0718a40e96408eec8c1c7a276cc33c7 not found: not found Feb 8 23:40:59.746785 systemd-networkd[1490]: lxc_health: Link UP Feb 8 23:40:59.770328 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:40:59.767749 systemd-networkd[1490]: lxc_health: Gained carrier Feb 8 23:41:00.553745 systemd[1]: run-containerd-runc-k8s.io-225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac-runc.W45Zzq.mount: Deactivated successfully. Feb 8 23:41:00.963307 systemd-networkd[1490]: lxc_health: Gained IPv6LL Feb 8 23:41:01.093167 kubelet[2446]: I0208 23:41:01.093116 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-q4bwn" podStartSLOduration=9.093067634 podCreationTimestamp="2024-02-08 23:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:57.669523971 +0000 UTC m=+230.409163176" watchObservedRunningTime="2024-02-08 23:41:01.093067634 +0000 UTC m=+233.832706839" Feb 8 23:41:02.777048 kubelet[2446]: W0208 23:41:02.776992 2446 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice/cri-containerd-3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913.scope WatchSource:0}: task 3fa178599e1ccd4edaefa840694243f74726d753e29f0ac95bede813ba57b913 not found: not found Feb 8 23:41:02.797869 systemd[1]: run-containerd-runc-k8s.io-225fd9f9b5deb1e2be9dff975229154100391b14133505297dfcce391d48abac-runc.sMGesL.mount: Deactivated successfully. Feb 8 23:41:05.885890 kubelet[2446]: W0208 23:41:05.885839 2446 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cbd4f0_cdee_4788_8900_ebd314367988.slice/cri-containerd-50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1.scope WatchSource:0}: task 50927b2c35d266098568b8f201ccddd249a9dd94edb0a5aa87e033292db0cdc1 not found: not found Feb 8 23:41:07.215179 sshd[4332]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:07.219364 systemd-logind[1327]: Session 27 logged out. Waiting for processes to exit. Feb 8 23:41:07.219768 systemd[1]: sshd@24-10.200.8.36:22-10.200.12.6:35166.service: Deactivated successfully. Feb 8 23:41:07.220662 systemd[1]: session-27.scope: Deactivated successfully. Feb 8 23:41:07.222143 systemd-logind[1327]: Removed session 27. 
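The podStartSLOduration reported for cilium-q4bwn lines up with the timestamps in the same entry: watchObservedRunningTime (23:41:01.093067634) minus podCreationTimestamp (23:40:52) is exactly 9.093067634 s. A quick check of that arithmetic (both values copied verbatim from the pod_startup_latency_tracker entry; this only verifies the subtraction, not the kubelet tracker's internals):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	created, err := time.Parse(time.RFC3339, "2024-02-08T23:40:52Z")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2024-02-08 23:41:01.093067634 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Printf("podStartSLOduration: %.9f\n", watched.Sub(created).Seconds()) // 9.093067634
}
```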
Feb 8 23:41:08.103281 env[1338]: time="2024-02-08T23:41:08.103226523Z" level=info msg="StopPodSandbox for \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\"" Feb 8 23:41:08.103846 env[1338]: time="2024-02-08T23:41:08.103352924Z" level=info msg="TearDown network for sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" successfully" Feb 8 23:41:08.103846 env[1338]: time="2024-02-08T23:41:08.103407424Z" level=info msg="StopPodSandbox for \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" returns successfully" Feb 8 23:41:08.104029 env[1338]: time="2024-02-08T23:41:08.103991227Z" level=info msg="RemovePodSandbox for \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\"" Feb 8 23:41:08.104115 env[1338]: time="2024-02-08T23:41:08.104043427Z" level=info msg="Forcibly stopping sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\"" Feb 8 23:41:08.104218 env[1338]: time="2024-02-08T23:41:08.104177028Z" level=info msg="TearDown network for sandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" successfully" Feb 8 23:41:08.123069 env[1338]: time="2024-02-08T23:41:08.123025527Z" level=info msg="RemovePodSandbox \"8b68594bb75409b7b21f93fa220aad636c663ba8bd6e284df73287c12396a2e8\" returns successfully" Feb 8 23:41:08.123520 env[1338]: time="2024-02-08T23:41:08.123486929Z" level=info msg="StopPodSandbox for \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\"" Feb 8 23:41:08.123641 env[1338]: time="2024-02-08T23:41:08.123603330Z" level=info msg="TearDown network for sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" successfully" Feb 8 23:41:08.123693 env[1338]: time="2024-02-08T23:41:08.123646030Z" level=info msg="StopPodSandbox for \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" returns successfully" Feb 8 23:41:08.124062 env[1338]: time="2024-02-08T23:41:08.124029232Z" level=info msg="RemovePodSandbox for \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\"" Feb 8 23:41:08.124174 env[1338]: time="2024-02-08T23:41:08.124064932Z" level=info msg="Forcibly stopping sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\"" Feb 8 23:41:08.124236 env[1338]: time="2024-02-08T23:41:08.124164033Z" level=info msg="TearDown network for sandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" successfully" Feb 8 23:41:08.135497 env[1338]: time="2024-02-08T23:41:08.135449292Z" level=info msg="RemovePodSandbox \"cae54b4c77fa4a0bef012fda91ac91a3444a3f74ee97f3dba1f17742a277a29a\" returns successfully" Feb 8 23:41:08.136462 env[1338]: time="2024-02-08T23:41:08.136430597Z" level=info msg="StopPodSandbox for \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\"" Feb 8 23:41:08.136550 env[1338]: time="2024-02-08T23:41:08.136516097Z" level=info msg="TearDown network for sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" successfully" Feb 8 23:41:08.136601 env[1338]: time="2024-02-08T23:41:08.136553397Z" level=info msg="StopPodSandbox for \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" returns successfully" Feb 8 23:41:08.136880 env[1338]: time="2024-02-08T23:41:08.136853499Z" level=info msg="RemovePodSandbox for \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\"" Feb 8 23:41:08.136958 env[1338]: time="2024-02-08T23:41:08.136886499Z" level=info msg="Forcibly stopping sandbox 
\"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\"" Feb 8 23:41:08.137007 env[1338]: time="2024-02-08T23:41:08.136961200Z" level=info msg="TearDown network for sandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" successfully" Feb 8 23:41:08.148884 env[1338]: time="2024-02-08T23:41:08.148854662Z" level=info msg="RemovePodSandbox \"49d15bcb666a39f1e086d2b043a11abdcd1276e45767c1049074eff0e3b823e5\" returns successfully" Feb 8 23:41:11.403757 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.417399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.431531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.446771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.461146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.474648 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.488660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.488968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.489237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.500335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.506130 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.506347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.522447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.522709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.522852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.533795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.544638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.572807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.573004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.573158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.573290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.573417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.573545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 
23:41:11.573672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.585026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.585301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.596535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.596791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.608336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.608594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.625548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.625798 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.625931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.637357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.637605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.648638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.648897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.660140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.677285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.677444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.677576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.677709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.694712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.695047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.695267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.705975 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.706318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.717076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.717391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:11.728395 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:41:11.728688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[identical hv_storvsc message repeated continuously from Feb 8 23:41:11 through Feb 8 23:41:13]
Feb 8 23:41:13.141030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#45 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001