Feb  8 23:12:57.016571 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb  8 23:12:57.016594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
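For orientation, the command line above is just whitespace-separated flags and key=value settings; a minimal Python sketch of parsing it (CMDLINE is abridged from the log line above, not read from a live system):

```python
# Minimal kernel-command-line parser: bare words become boolean flags,
# key=value tokens become settings. CMDLINE is abridged from the log.
CMDLINE = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.autologin")

params = {}
for token in CMDLINE.split():
    key, sep, value = token.partition("=")
    # note: repeated keys (e.g. console= appears twice in the real line)
    # would overwrite here; the kernel itself keeps both occurrences.
    params[key] = value if sep else True

print(params["root"])               # LABEL=ROOT
print(params["flatcar.autologin"])  # True
```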
Feb  8 23:12:57.016606 kernel: BIOS-provided physical RAM map:
Feb  8 23:12:57.016612 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb  8 23:12:57.016617 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb  8 23:12:57.016626 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb  8 23:12:57.016635 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb  8 23:12:57.016642 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb  8 23:12:57.016650 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb  8 23:12:57.016655 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb  8 23:12:57.016661 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
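As a cross-check on the map above, a small Python sketch that totals the usable e820 ranges (the lines are copied verbatim from this log; the [mem start-end] ranges are inclusive):

```python
import re

# Usable lines copied verbatim from the BIOS-e820 map above.
E820 = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable",
    "BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable",
]
RANGE = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (.+)")

usable = 0
for line in E820:
    start, end, kind = RANGE.search(line).groups()
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1  # inclusive range

# Prints 8190.8 MiB -- consistent with the 8387460K total in the
# "Memory:" line further down this log.
print(f"{usable / 2**20:.1f} MiB usable")
```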
Feb  8 23:12:57.016669 kernel: printk: bootconsole [earlyser0] enabled
Feb  8 23:12:57.016675 kernel: NX (Execute Disable) protection: active
Feb  8 23:12:57.016681 kernel: efi: EFI v2.70 by Microsoft
Feb  8 23:12:57.016693 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 
Feb  8 23:12:57.016700 kernel: random: crng init done
Feb  8 23:12:57.016706 kernel: SMBIOS 3.1.0 present.
Feb  8 23:12:57.016715 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb  8 23:12:57.016722 kernel: Hypervisor detected: Microsoft Hyper-V
Feb  8 23:12:57.016728 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb  8 23:12:57.016737 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb  8 23:12:57.016743 kernel: Hyper-V: Nested features: 0x1e0101
Feb  8 23:12:57.016751 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb  8 23:12:57.016760 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb  8 23:12:57.016767 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb  8 23:12:57.016773 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb  8 23:12:57.016781 kernel: tsc: Detected 2593.907 MHz processor
Feb  8 23:12:57.016789 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb  8 23:12:57.016795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb  8 23:12:57.016802 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb  8 23:12:57.016811 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  8 23:12:57.016818 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb  8 23:12:57.016826 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb  8 23:12:57.016835 kernel: Using GB pages for direct mapping
Feb  8 23:12:57.016842 kernel: Secure boot disabled
Feb  8 23:12:57.016852 kernel: ACPI: Early table checksum verification disabled
Feb  8 23:12:57.016858 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb  8 23:12:57.016867 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016874 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016882 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01   00000001 MSFT 05000000)
Feb  8 23:12:57.016896 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb  8 23:12:57.016904 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016914 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016920 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016927 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016937 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016946 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016954 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb  8 23:12:57.016963 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb  8 23:12:57.016970 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb  8 23:12:57.016978 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb  8 23:12:57.016986 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb  8 23:12:57.016993 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb  8 23:12:57.017000 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb  8 23:12:57.017012 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb  8 23:12:57.017018 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb  8 23:12:57.017027 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb  8 23:12:57.017035 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb  8 23:12:57.017042 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb  8 23:12:57.017051 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb  8 23:12:57.017059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb  8 23:12:57.017065 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb  8 23:12:57.017074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb  8 23:12:57.017084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb  8 23:12:57.017091 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb  8 23:12:57.017101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb  8 23:12:57.017108 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb  8 23:12:57.017115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb  8 23:12:57.017125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb  8 23:12:57.017131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb  8 23:12:57.017139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb  8 23:12:57.017148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb  8 23:12:57.017157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb  8 23:12:57.017167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb  8 23:12:57.017174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb  8 23:12:57.017181 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb  8 23:12:57.017191 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb  8 23:12:57.017198 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb  8 23:12:57.017204 kernel: Zone ranges:
Feb  8 23:12:57.017215 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  8 23:12:57.017221 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb  8 23:12:57.017232 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Feb  8 23:12:57.017240 kernel: Movable zone start for each node
Feb  8 23:12:57.017247 kernel: Early memory node ranges
Feb  8 23:12:57.017255 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Feb  8 23:12:57.017264 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb  8 23:12:57.017272 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb  8 23:12:57.017280 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb  8 23:12:57.017288 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb  8 23:12:57.017297 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  8 23:12:57.017306 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb  8 23:12:57.017312 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb  8 23:12:57.017319 kernel: ACPI: PM-Timer IO Port: 0x408
Feb  8 23:12:57.017329 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb  8 23:12:57.017336 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb  8 23:12:57.017342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  8 23:12:57.017349 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  8 23:12:57.017358 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb  8 23:12:57.017365 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb  8 23:12:57.017377 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb  8 23:12:57.017385 kernel: Booting paravirtualized kernel on Hyper-V
Feb  8 23:12:57.017394 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  8 23:12:57.017403 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb  8 23:12:57.017413 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb  8 23:12:57.019184 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb  8 23:12:57.019199 kernel: pcpu-alloc: [0] 0 1 
Feb  8 23:12:57.019212 kernel: Hyper-V: PV spinlocks enabled
Feb  8 23:12:57.019225 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb  8 23:12:57.019243 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2062618
Feb  8 23:12:57.019256 kernel: Policy zone: Normal
Feb  8 23:12:57.019272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  8 23:12:57.019286 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  8 23:12:57.019298 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb  8 23:12:57.019311 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  8 23:12:57.019324 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  8 23:12:57.019337 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb  8 23:12:57.019353 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb  8 23:12:57.019366 kernel: ftrace: allocating 34475 entries in 135 pages
Feb  8 23:12:57.019389 kernel: ftrace: allocated 135 pages with 4 groups
Feb  8 23:12:57.019405 kernel: rcu: Hierarchical RCU implementation.
Feb  8 23:12:57.019428 kernel: rcu:         RCU event tracing is enabled.
Feb  8 23:12:57.019442 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb  8 23:12:57.019456 kernel:         Rude variant of Tasks RCU enabled.
Feb  8 23:12:57.019469 kernel:         Tracing variant of Tasks RCU enabled.
Feb  8 23:12:57.019483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  8 23:12:57.019496 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb  8 23:12:57.019510 kernel: Using NULL legacy PIC
Feb  8 23:12:57.019527 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb  8 23:12:57.019540 kernel: Console: colour dummy device 80x25
Feb  8 23:12:57.019554 kernel: printk: console [tty1] enabled
Feb  8 23:12:57.019567 kernel: printk: console [ttyS0] enabled
Feb  8 23:12:57.019580 kernel: printk: bootconsole [earlyser0] disabled
Feb  8 23:12:57.019596 kernel: ACPI: Core revision 20210730
Feb  8 23:12:57.019610 kernel: Failed to register legacy timer interrupt
Feb  8 23:12:57.019624 kernel: APIC: Switch to symmetric I/O mode setup
Feb  8 23:12:57.019637 kernel: Hyper-V: Using IPI hypercalls
Feb  8 23:12:57.019651 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
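The BogoMIPS figure follows directly from the logged lpj value; a one-liner to reproduce it (HZ=1000 is an assumption here, typical for this kernel family, not stated in the log):

```python
# Delay-loop calibration was skipped and derived from the timer:
# lpj (loops per jiffy) = 2593907, and with HZ=1000 (assumed),
# BogoMIPS = lpj * HZ / 500000 -- exactly 2x the 2593.907 MHz TSC.
lpj, HZ = 2593907, 1000
print(f"{lpj * HZ / 500_000:.2f} BogoMIPS")  # 5187.81, as logged above
```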
Feb  8 23:12:57.019665 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb  8 23:12:57.019679 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb  8 23:12:57.019692 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  8 23:12:57.019706 kernel: Spectre V2 : Mitigation: Retpolines
Feb  8 23:12:57.019719 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb  8 23:12:57.019735 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb  8 23:12:57.019749 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb  8 23:12:57.019762 kernel: RETBleed: Vulnerable
Feb  8 23:12:57.019775 kernel: Speculative Store Bypass: Vulnerable
Feb  8 23:12:57.019789 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb  8 23:12:57.019802 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb  8 23:12:57.019815 kernel: GDS: Unknown: Dependent on hypervisor status
Feb  8 23:12:57.019829 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  8 23:12:57.019843 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  8 23:12:57.019857 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  8 23:12:57.019872 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb  8 23:12:57.019886 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb  8 23:12:57.019899 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb  8 23:12:57.019913 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  8 23:12:57.019926 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Feb  8 23:12:57.019940 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Feb  8 23:12:57.019953 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb  8 23:12:57.019967 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb  8 23:12:57.019980 kernel: Freeing SMP alternatives memory: 32K
Feb  8 23:12:57.019994 kernel: pid_max: default: 32768 minimum: 301
Feb  8 23:12:57.020007 kernel: LSM: Security Framework initializing
Feb  8 23:12:57.020020 kernel: SELinux:  Initializing.
Feb  8 23:12:57.020036 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  8 23:12:57.020049 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  8 23:12:57.020063 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb  8 23:12:57.020077 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb  8 23:12:57.020091 kernel: signal: max sigframe size: 3632
Feb  8 23:12:57.020104 kernel: rcu: Hierarchical SRCU implementation.
Feb  8 23:12:57.020118 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb  8 23:12:57.020132 kernel: smp: Bringing up secondary CPUs ...
Feb  8 23:12:57.020145 kernel: x86: Booting SMP configuration:
Feb  8 23:12:57.020159 kernel: .... node  #0, CPUs:      #1
Feb  8 23:12:57.020175 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb  8 23:12:57.020190 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb  8 23:12:57.020213 kernel: smp: Brought up 1 node, 2 CPUs
Feb  8 23:12:57.020227 kernel: smpboot: Max logical packages: 1
Feb  8 23:12:57.020240 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb  8 23:12:57.020265 kernel: devtmpfs: initialized
Feb  8 23:12:57.020277 kernel: x86/mm: Memory block size: 128MB
Feb  8 23:12:57.020289 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb  8 23:12:57.020305 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  8 23:12:57.020317 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb  8 23:12:57.020330 kernel: pinctrl core: initialized pinctrl subsystem
Feb  8 23:12:57.020343 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  8 23:12:57.020355 kernel: audit: initializing netlink subsys (disabled)
Feb  8 23:12:57.020367 kernel: audit: type=2000 audit(1707433976.023:1): state=initialized audit_enabled=0 res=1
Feb  8 23:12:57.020380 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  8 23:12:57.020392 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  8 23:12:57.020405 kernel: cpuidle: using governor menu
Feb  8 23:12:57.020427 kernel: ACPI: bus type PCI registered
Feb  8 23:12:57.020440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  8 23:12:57.020453 kernel: dca service started, version 1.12.1
Feb  8 23:12:57.020465 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  8 23:12:57.020478 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb  8 23:12:57.020490 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  8 23:12:57.020503 kernel: ACPI: Added _OSI(Module Device)
Feb  8 23:12:57.020515 kernel: ACPI: Added _OSI(Processor Device)
Feb  8 23:12:57.020528 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  8 23:12:57.020543 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  8 23:12:57.020556 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  8 23:12:57.020568 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  8 23:12:57.020580 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  8 23:12:57.020593 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  8 23:12:57.020605 kernel: ACPI: Interpreter enabled
Feb  8 23:12:57.020618 kernel: ACPI: PM: (supports S0 S5)
Feb  8 23:12:57.020630 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  8 23:12:57.020643 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  8 23:12:57.020658 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb  8 23:12:57.020670 kernel: iommu: Default domain type: Translated 
Feb  8 23:12:57.020683 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb  8 23:12:57.020695 kernel: vgaarb: loaded
Feb  8 23:12:57.020707 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  8 23:12:57.020720 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  8 23:12:57.020733 kernel: PTP clock support registered
Feb  8 23:12:57.020745 kernel: Registered efivars operations
Feb  8 23:12:57.020758 kernel: PCI: Using ACPI for IRQ routing
Feb  8 23:12:57.020770 kernel: PCI: System does not support PCI
Feb  8 23:12:57.020785 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb  8 23:12:57.020797 kernel: VFS: Disk quotas dquot_6.6.0
Feb  8 23:12:57.020810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  8 23:12:57.020822 kernel: pnp: PnP ACPI init
Feb  8 23:12:57.020834 kernel: pnp: PnP ACPI: found 3 devices
Feb  8 23:12:57.020847 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  8 23:12:57.020860 kernel: NET: Registered PF_INET protocol family
Feb  8 23:12:57.020872 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  8 23:12:57.020887 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb  8 23:12:57.020900 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  8 23:12:57.020912 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  8 23:12:57.020925 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb  8 23:12:57.020938 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb  8 23:12:57.020950 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  8 23:12:57.020963 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb  8 23:12:57.020975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  8 23:12:57.020987 kernel: NET: Registered PF_XDP protocol family
Feb  8 23:12:57.021002 kernel: PCI: CLS 0 bytes, default 64
Feb  8 23:12:57.021015 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb  8 23:12:57.021028 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb  8 23:12:57.021040 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb  8 23:12:57.021052 kernel: Initialise system trusted keyrings
Feb  8 23:12:57.021065 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb  8 23:12:57.021077 kernel: Key type asymmetric registered
Feb  8 23:12:57.021089 kernel: Asymmetric key parser 'x509' registered
Feb  8 23:12:57.021102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  8 23:12:57.021116 kernel: io scheduler mq-deadline registered
Feb  8 23:12:57.021129 kernel: io scheduler kyber registered
Feb  8 23:12:57.021141 kernel: io scheduler bfq registered
Feb  8 23:12:57.021154 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb  8 23:12:57.021167 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  8 23:12:57.021179 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  8 23:12:57.021191 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb  8 23:12:57.021204 kernel: i8042: PNP: No PS/2 controller found.
Feb  8 23:12:57.021350 kernel: rtc_cmos 00:02: registered as rtc0
Feb  8 23:12:57.021468 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:12:56 UTC (1707433976)
Feb  8 23:12:57.021568 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
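The epoch value in parentheses on the rtc_cmos line can be reproduced from the UTC timestamp (and matches the audit record timestamps elsewhere in this log):

```python
from datetime import datetime, timezone

# Cross-check the rtc_cmos line: 2024-02-08T23:12:56 UTC should equal
# the epoch value the kernel printed in parentheses.
ts = datetime(2024, 2, 8, 23, 12, 56, tzinfo=timezone.utc).timestamp()
print(int(ts))  # 1707433976, matching the log (and the audit(170743...) stamps)
```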
Feb  8 23:12:57.021584 kernel: failed to initialize ptp_kvm
Feb  8 23:12:57.021597 kernel: intel_pstate: CPU model not supported
Feb  8 23:12:57.021610 kernel: efifb: probing for efifb
Feb  8 23:12:57.021622 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb  8 23:12:57.021635 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb  8 23:12:57.021648 kernel: efifb: scrolling: redraw
Feb  8 23:12:57.021663 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb  8 23:12:57.021676 kernel: Console: switching to colour frame buffer device 128x48
Feb  8 23:12:57.021688 kernel: fb0: EFI VGA frame buffer device
Feb  8 23:12:57.021700 kernel: pstore: Registered efi as persistent store backend
Feb  8 23:12:57.021712 kernel: NET: Registered PF_INET6 protocol family
Feb  8 23:12:57.021725 kernel: Segment Routing with IPv6
Feb  8 23:12:57.021737 kernel: In-situ OAM (IOAM) with IPv6
Feb  8 23:12:57.021750 kernel: NET: Registered PF_PACKET protocol family
Feb  8 23:12:57.021762 kernel: Key type dns_resolver registered
Feb  8 23:12:57.021776 kernel: IPI shorthand broadcast: enabled
Feb  8 23:12:57.021789 kernel: sched_clock: Marking stable (753040500, 22260300)->(964126100, -188825300)
Feb  8 23:12:57.021801 kernel: registered taskstats version 1
Feb  8 23:12:57.021814 kernel: Loading compiled-in X.509 certificates
Feb  8 23:12:57.021826 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb  8 23:12:57.021838 kernel: Key type .fscrypt registered
Feb  8 23:12:57.021850 kernel: Key type fscrypt-provisioning registered
Feb  8 23:12:57.021863 kernel: pstore: Using crash dump compression: deflate
Feb  8 23:12:57.021877 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  8 23:12:57.021891 kernel: ima: Allocated hash algorithm: sha1
Feb  8 23:12:57.021903 kernel: ima: No architecture policies found
Feb  8 23:12:57.021915 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb  8 23:12:57.021928 kernel: Write protecting the kernel read-only data: 28672k
Feb  8 23:12:57.021941 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb  8 23:12:57.021953 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb  8 23:12:57.021966 kernel: Run /init as init process
Feb  8 23:12:57.021978 kernel:   with arguments:
Feb  8 23:12:57.021991 kernel:     /init
Feb  8 23:12:57.022005 kernel:   with environment:
Feb  8 23:12:57.022017 kernel:     HOME=/
Feb  8 23:12:57.022029 kernel:     TERM=linux
Feb  8 23:12:57.022042 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  8 23:12:57.022056 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  8 23:12:57.022071 systemd[1]: Detected virtualization microsoft.
Feb  8 23:12:57.022084 systemd[1]: Detected architecture x86-64.
Feb  8 23:12:57.022099 systemd[1]: Running in initrd.
Feb  8 23:12:57.022112 systemd[1]: No hostname configured, using default hostname.
Feb  8 23:12:57.022124 systemd[1]: Hostname set to <localhost>.
Feb  8 23:12:57.022138 systemd[1]: Initializing machine ID from random generator.
Feb  8 23:12:57.022151 systemd[1]: Queued start job for default target initrd.target.
Feb  8 23:12:57.022164 systemd[1]: Started systemd-ask-password-console.path.
Feb  8 23:12:57.022177 systemd[1]: Reached target cryptsetup.target.
Feb  8 23:12:57.022189 systemd[1]: Reached target paths.target.
Feb  8 23:12:57.022202 systemd[1]: Reached target slices.target.
Feb  8 23:12:57.022217 systemd[1]: Reached target swap.target.
Feb  8 23:12:57.022229 systemd[1]: Reached target timers.target.
Feb  8 23:12:57.022244 systemd[1]: Listening on iscsid.socket.
Feb  8 23:12:57.022257 systemd[1]: Listening on iscsiuio.socket.
Feb  8 23:12:57.022270 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  8 23:12:57.022283 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  8 23:12:57.022296 systemd[1]: Listening on systemd-journald.socket.
Feb  8 23:12:57.022312 systemd[1]: Listening on systemd-networkd.socket.
Feb  8 23:12:57.022325 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  8 23:12:57.022338 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  8 23:12:57.022352 systemd[1]: Reached target sockets.target.
Feb  8 23:12:57.022365 systemd[1]: Starting kmod-static-nodes.service...
Feb  8 23:12:57.022378 systemd[1]: Finished network-cleanup.service.
Feb  8 23:12:57.022391 systemd[1]: Starting systemd-fsck-usr.service...
Feb  8 23:12:57.022404 systemd[1]: Starting systemd-journald.service...
Feb  8 23:12:57.022431 systemd[1]: Starting systemd-modules-load.service...
Feb  8 23:12:57.022444 systemd[1]: Starting systemd-resolved.service...
Feb  8 23:12:57.022451 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  8 23:12:57.022458 systemd[1]: Finished kmod-static-nodes.service.
Feb  8 23:12:57.022468 systemd-journald[183]: Journal started
Feb  8 23:12:57.022507 systemd-journald[183]: Runtime Journal (/run/log/journal/071cfb6a96e54b978c54fa200180fb72) is 8.0M, max 159.0M, 151.0M free.
Feb  8 23:12:57.017440 systemd-modules-load[184]: Inserted module 'overlay'
Feb  8 23:12:57.040603 kernel: audit: type=1130 audit(1707433977.024:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.040625 systemd[1]: Started systemd-journald.service.
Feb  8 23:12:57.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.060086 systemd[1]: Finished systemd-fsck-usr.service.
Feb  8 23:12:57.072512 kernel: audit: type=1130 audit(1707433977.048:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.072530 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  8 23:12:57.070466 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  8 23:12:57.080551 kernel: Bridge firewalling registered
Feb  8 23:12:57.080644 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb  8 23:12:57.086591 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  8 23:12:57.093934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  8 23:12:57.137359 kernel: audit: type=1130 audit(1707433977.062:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.137383 kernel: audit: type=1130 audit(1707433977.076:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.137399 kernel: audit: type=1130 audit(1707433977.121:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.111992 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  8 23:12:57.117546 systemd-resolved[185]: Positive Trust Anchors:
Feb  8 23:12:57.155993 kernel: SCSI subsystem initialized
Feb  8 23:12:57.117557 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  8 23:12:57.117603 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  8 23:12:57.121227 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb  8 23:12:57.194580 kernel: audit: type=1130 audit(1707433977.175:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.194631 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  8 23:12:57.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.132834 systemd[1]: Started systemd-resolved.service.
Feb  8 23:12:57.217646 kernel: device-mapper: uevent: version 1.0.3
Feb  8 23:12:57.217673 kernel: audit: type=1130 audit(1707433977.200:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.217691 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  8 23:12:57.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.175309 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  8 23:12:57.200877 systemd[1]: Reached target nss-lookup.target.
Feb  8 23:12:57.223599 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb  8 23:12:57.226676 systemd[1]: Starting dracut-cmdline.service...
Feb  8 23:12:57.229371 systemd[1]: Finished systemd-modules-load.service.
Feb  8 23:12:57.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.249192 kernel: audit: type=1130 audit(1707433977.230:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.247528 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:12:57.256383 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:12:57.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.271906 dracut-cmdline[204]: dracut-dracut-053
Feb  8 23:12:57.274130 kernel: audit: type=1130 audit(1707433977.260:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.275789 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  8 23:12:57.337434 kernel: Loading iSCSI transport class v2.0-870.
Feb  8 23:12:57.350445 kernel: iscsi: registered transport (tcp)
Feb  8 23:12:57.375072 kernel: iscsi: registered transport (qla4xxx)
Feb  8 23:12:57.375111 kernel: QLogic iSCSI HBA Driver
Feb  8 23:12:57.403642 systemd[1]: Finished dracut-cmdline.service.
Feb  8 23:12:57.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.406718 systemd[1]: Starting dracut-pre-udev.service...
Feb  8 23:12:57.458434 kernel: raid6: avx512x4 gen() 18326 MB/s
Feb  8 23:12:57.478429 kernel: raid6: avx512x4 xor()  8494 MB/s
Feb  8 23:12:57.498425 kernel: raid6: avx512x2 gen() 18206 MB/s
Feb  8 23:12:57.519431 kernel: raid6: avx512x2 xor() 29835 MB/s
Feb  8 23:12:57.539425 kernel: raid6: avx512x1 gen() 18199 MB/s
Feb  8 23:12:57.559425 kernel: raid6: avx512x1 xor() 26953 MB/s
Feb  8 23:12:57.580432 kernel: raid6: avx2x4   gen() 18177 MB/s
Feb  8 23:12:57.600428 kernel: raid6: avx2x4   xor()  7964 MB/s
Feb  8 23:12:57.620427 kernel: raid6: avx2x2   gen() 18310 MB/s
Feb  8 23:12:57.641428 kernel: raid6: avx2x2   xor() 22438 MB/s
Feb  8 23:12:57.661425 kernel: raid6: avx2x1   gen() 13923 MB/s
Feb  8 23:12:57.681426 kernel: raid6: avx2x1   xor() 19828 MB/s
Feb  8 23:12:57.702428 kernel: raid6: sse2x4   gen() 11720 MB/s
Feb  8 23:12:57.722425 kernel: raid6: sse2x4   xor()  7304 MB/s
Feb  8 23:12:57.741426 kernel: raid6: sse2x2   gen() 12818 MB/s
Feb  8 23:12:57.761427 kernel: raid6: sse2x2   xor()  7564 MB/s
Feb  8 23:12:57.780429 kernel: raid6: sse2x1   gen() 11582 MB/s
Feb  8 23:12:57.803802 kernel: raid6: sse2x1   xor()  5903 MB/s
Feb  8 23:12:57.803822 kernel: raid6: using algorithm avx512x4 gen() 18326 MB/s
Feb  8 23:12:57.803834 kernel: raid6: .... xor() 8494 MB/s, rmw enabled
Feb  8 23:12:57.807149 kernel: raid6: using avx512x2 recovery algorithm
Feb  8 23:12:57.825438 kernel: xor: automatically using best checksumming function   avx       
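The raid6 lines above are a benchmark pass; the kernel simply keeps the fastest gen() implementation. A toy reduction over the numbers exactly as logged:

```python
# gen() throughput in MB/s, copied from the raid6 benchmark lines above.
results = {
    "avx512x4": 18326, "avx512x2": 18206, "avx512x1": 18199,
    "avx2x4": 18177, "avx2x2": 18310, "avx2x1": 13923,
    "sse2x4": 11720, "sse2x2": 12818, "sse2x1": 11582,
}
best = max(results, key=results.get)
print(best, results[best])  # avx512x4 18326 -- the algorithm the kernel chose
```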
Feb  8 23:12:57.920448 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb  8 23:12:57.928375 systemd[1]: Finished dracut-pre-udev.service.
Feb  8 23:12:57.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.932000 audit: BPF prog-id=7 op=LOAD
Feb  8 23:12:57.932000 audit: BPF prog-id=8 op=LOAD
Feb  8 23:12:57.933199 systemd[1]: Starting systemd-udevd.service...
Feb  8 23:12:57.946652 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb  8 23:12:57.951234 systemd[1]: Started systemd-udevd.service.
Feb  8 23:12:57.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:57.954098 systemd[1]: Starting dracut-pre-trigger.service...
Feb  8 23:12:57.975639 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Feb  8 23:12:58.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:58.001925 systemd[1]: Finished dracut-pre-trigger.service.
Feb  8 23:12:58.007714 systemd[1]: Starting systemd-udev-trigger.service...
Feb  8 23:12:58.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:12:58.039390 systemd[1]: Finished systemd-udev-trigger.service.
Feb  8 23:12:58.084436 kernel: cryptd: max_cpu_qlen set to 1000
Feb  8 23:12:58.123433 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  8 23:12:58.123522 kernel: AES CTR mode by8 optimization enabled
Feb  8 23:12:58.128433 kernel: hv_vmbus: Vmbus version:5.2
Feb  8 23:12:58.164889 kernel: hv_vmbus: registering driver hv_netvsc
Feb  8 23:12:58.164933 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb  8 23:12:58.164952 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb  8 23:12:58.168119 kernel: hv_vmbus: registering driver hv_storvsc
Feb  8 23:12:58.168516 kernel: hv_vmbus: registering driver hid_hyperv
Feb  8 23:12:58.174887 kernel: scsi host0: storvsc_host_t
Feb  8 23:12:58.180511 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Feb  8 23:12:58.193714 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb  8 23:12:58.193743 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb  8 23:12:58.193755 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on 
Feb  8 23:12:58.202972 kernel: scsi host1: storvsc_host_t
Feb  8 23:12:58.207653 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 0
Feb  8 23:12:58.236321 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb  8 23:12:58.236550 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  8 23:12:58.245089 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb  8 23:12:58.245356 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb  8 23:12:58.248933 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb  8 23:12:58.249073 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb  8 23:12:58.249173 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb  8 23:12:58.254248 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb  8 23:12:58.259432 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb  8 23:12:58.264628 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
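The dual size report for sda is plain unit arithmetic over the logged block count:

```python
# 63737856 logical blocks of 512 bytes, as reported for sda above.
blocks, block_size = 63737856, 512
size = blocks * block_size
# Decimal GB vs binary GiB, matching the kernel's "(32.6 GB/30.4 GiB)".
print(f"{size / 1e9:.1f} GB / {size / 2**30:.1f} GiB")
```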
Feb  8 23:12:58.382719 kernel: hv_netvsc 000d3a67-5ef5-000d-3a67-5ef5000d3a67 eth0: VF slot 1 added
Feb  8 23:12:58.392439 kernel: hv_vmbus: registering driver hv_pci
Feb  8 23:12:58.400049 kernel: hv_pci 6a312528-1b28-437e-a1db-47c8946414a6: PCI VMBus probing: Using version 0x10004
Feb  8 23:12:58.400206 kernel: hv_pci 6a312528-1b28-437e-a1db-47c8946414a6: PCI host bridge to bus 1b28:00
Feb  8 23:12:58.409258 kernel: pci_bus 1b28:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb  8 23:12:58.409426 kernel: pci_bus 1b28:00: No busn resource found for root bus, will use [bus 00-ff]
Feb  8 23:12:58.419814 kernel: pci 1b28:00:02.0: [15b3:1016] type 00 class 0x020000
Feb  8 23:12:58.428337 kernel: pci 1b28:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb  8 23:12:58.444471 kernel: pci 1b28:00:02.0: enabling Extended Tags
Feb  8 23:12:58.458598 kernel: pci 1b28:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1b28:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb  8 23:12:58.468979 kernel: pci_bus 1b28:00: busn_res: [bus 00-ff] end is updated to 00
Feb  8 23:12:58.469134 kernel: pci 1b28:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb  8 23:12:58.564440 kernel: mlx5_core 1b28:00:02.0: firmware version: 14.30.1224
Feb  8 23:12:58.716851 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  8 23:12:58.734439 kernel: mlx5_core 1b28:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb  8 23:12:58.740454 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (450)
Feb  8 23:12:58.753933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  8 23:12:58.882104 kernel: mlx5_core 1b28:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb  8 23:12:58.882321 kernel: mlx5_core 1b28:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing
Feb  8 23:12:58.894245 kernel: hv_netvsc 000d3a67-5ef5-000d-3a67-5ef5000d3a67 eth0: VF registering: eth1
Feb  8 23:12:58.894473 kernel: mlx5_core 1b28:00:02.0 eth1: joined to eth0
Feb  8 23:12:58.905050 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  8 23:12:58.918433 kernel: mlx5_core 1b28:00:02.0 enP6952s1: renamed from eth1
Feb  8 23:12:58.933430 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  8 23:12:58.938996 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  8 23:12:58.945622 systemd[1]: Starting disk-uuid.service...
Feb  8 23:12:58.959441 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb  8 23:12:58.967430 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb  8 23:12:59.977219 disk-uuid[565]: The operation has completed successfully.
Feb  8 23:12:59.979728 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb  8 23:13:00.041951 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  8 23:13:00.042053 systemd[1]: Finished disk-uuid.service.
Feb  8 23:13:00.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.058232 systemd[1]: Starting verity-setup.service...
Feb  8 23:13:00.093189 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb  8 23:13:00.372683 systemd[1]: Found device dev-mapper-usr.device.
Feb  8 23:13:00.378163 systemd[1]: Mounting sysusr-usr.mount...
Feb  8 23:13:00.382333 systemd[1]: Finished verity-setup.service.
Feb  8 23:13:00.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.457318 systemd[1]: Mounted sysusr-usr.mount.
Feb  8 23:13:00.460999 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  8 23:13:00.461087 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  8 23:13:00.465156 systemd[1]: Starting ignition-setup.service...
Feb  8 23:13:00.470080 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  8 23:13:00.493165 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:13:00.493208 kernel: BTRFS info (device sda6): using free space tree
Feb  8 23:13:00.493227 kernel: BTRFS info (device sda6): has skinny extents
Feb  8 23:13:00.535284 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  8 23:13:00.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.537000 audit: BPF prog-id=9 op=LOAD
Feb  8 23:13:00.538806 systemd[1]: Starting systemd-networkd.service...
Feb  8 23:13:00.564547 systemd-networkd[803]: lo: Link UP
Feb  8 23:13:00.564555 systemd-networkd[803]: lo: Gained carrier
Feb  8 23:13:00.568656 systemd-networkd[803]: Enumeration completed
Feb  8 23:13:00.568736 systemd[1]: Started systemd-networkd.service.
Feb  8 23:13:00.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.574451 systemd[1]: Reached target network.target.
Feb  8 23:13:00.578476 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  8 23:13:00.578913 systemd[1]: Starting iscsiuio.service...
Feb  8 23:13:00.590013 systemd[1]: Started iscsiuio.service.
Feb  8 23:13:00.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.594427 systemd[1]: Starting iscsid.service...
Feb  8 23:13:00.600412 iscsid[811]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  8 23:13:00.600412 iscsid[811]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  8 23:13:00.600412 iscsid[811]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  8 23:13:00.600412 iscsid[811]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  8 23:13:00.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.625598 iscsid[811]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  8 23:13:00.625598 iscsid[811]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  8 23:13:00.619408 systemd[1]: Started iscsid.service.
Feb  8 23:13:00.623797 systemd[1]: Starting dracut-initqueue.service...
Feb  8 23:13:00.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.633875 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  8 23:13:00.653666 kernel: mlx5_core 1b28:00:02.0 enP6952s1: Link up
Feb  8 23:13:00.642321 systemd[1]: Finished dracut-initqueue.service.
Feb  8 23:13:00.644687 systemd[1]: Reached target remote-fs-pre.target.
Feb  8 23:13:00.646733 systemd[1]: Reached target remote-cryptsetup.target.
Feb  8 23:13:00.653638 systemd[1]: Reached target remote-fs.target.
Feb  8 23:13:00.656631 systemd[1]: Starting dracut-pre-mount.service...
Feb  8 23:13:00.671355 systemd[1]: Finished dracut-pre-mount.service.
Feb  8 23:13:00.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.718480 kernel: hv_netvsc 000d3a67-5ef5-000d-3a67-5ef5000d3a67 eth0: Data path switched to VF: enP6952s1
Feb  8 23:13:00.718680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  8 23:13:00.723030 systemd-networkd[803]: enP6952s1: Link UP
Feb  8 23:13:00.723160 systemd-networkd[803]: eth0: Link UP
Feb  8 23:13:00.724927 systemd-networkd[803]: eth0: Gained carrier
Feb  8 23:13:00.729580 systemd-networkd[803]: enP6952s1: Gained carrier
Feb  8 23:13:00.759570 systemd-networkd[803]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
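A quick sanity check of the lease above with Python's ipaddress module (addresses copied from the log; 168.63.129.16 is Azure's well-known wireserver/DHCP address):

```python
import ipaddress

# The DHCPv4 lease from the log: 10.200.8.39/24, gateway 10.200.8.1.
iface = ipaddress.ip_interface("10.200.8.39/24")
gateway = ipaddress.ip_address("10.200.8.1")
print(iface.network)               # 10.200.8.0/24
print(gateway in iface.network)    # True -- gateway is on-link
```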
Feb  8 23:13:00.842643 systemd[1]: Finished ignition-setup.service.
Feb  8 23:13:00.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:00.846251 systemd[1]: Starting ignition-fetch-offline.service...
Feb  8 23:13:01.854639 systemd-networkd[803]: eth0: Gained IPv6LL
Feb  8 23:13:04.388916 ignition[830]: Ignition 2.14.0
Feb  8 23:13:04.388933 ignition[830]: Stage: fetch-offline
Feb  8 23:13:04.389023 ignition[830]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:04.389073 ignition[830]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:04.495171 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:04.495358 ignition[830]: parsed url from cmdline: ""
Feb  8 23:13:04.496629 systemd[1]: Finished ignition-fetch-offline.service.
Feb  8 23:13:04.514781 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb  8 23:13:04.514821 kernel: audit: type=1130 audit(1707433984.502:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.495361 ignition[830]: no config URL provided
Feb  8 23:13:04.504219 systemd[1]: Starting ignition-fetch.service...
Feb  8 23:13:04.495367 ignition[830]: reading system config file "/usr/lib/ignition/user.ign"
Feb  8 23:13:04.495375 ignition[830]: no config at "/usr/lib/ignition/user.ign"
Feb  8 23:13:04.495380 ignition[830]: failed to fetch config: resource requires networking
Feb  8 23:13:04.495638 ignition[830]: Ignition finished successfully
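Ignition's "parsing config with SHA512" lines are a digest over the raw config bytes; a sketch that reproduces such a digest (the path is taken from the log and exists only on a Flatcar/Ignition host, so this is illustrative):

```python
import hashlib
from pathlib import Path

# Recompute the digest Ignition logs for the base config; compare the
# output against the "parsing config with SHA512: ..." value above.
cfg = Path("/usr/lib/ignition/base.d/base.ign").read_bytes()
print(hashlib.sha512(cfg).hexdigest())
```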
Feb  8 23:13:04.512310 ignition[836]: Ignition 2.14.0
Feb  8 23:13:04.512315 ignition[836]: Stage: fetch
Feb  8 23:13:04.512460 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:04.512486 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:04.517544 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:04.517879 ignition[836]: parsed url from cmdline: ""
Feb  8 23:13:04.517889 ignition[836]: no config URL provided
Feb  8 23:13:04.517992 ignition[836]: reading system config file "/usr/lib/ignition/user.ign"
Feb  8 23:13:04.518004 ignition[836]: no config at "/usr/lib/ignition/user.ign"
Feb  8 23:13:04.518052 ignition[836]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb  8 23:13:04.626456 ignition[836]: GET result: OK
Feb  8 23:13:04.626589 ignition[836]: config has been read from IMDS userdata
Feb  8 23:13:04.626632 ignition[836]: parsing config with SHA512: d05eb042ae478cd57589f241634cdba4df80c1bb2f70a8d349ed9dc7abdf110c5e5f815b62af6477ddc8fbb7395a7dfc901b11b9655e6fd636604050fa7041b0
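The GET above goes to the Azure IMDS endpoint; a minimal sketch of the same request (hypothetical client code, not Ignition's own fetcher; IMDS requires the Metadata header and is reachable only from inside an Azure VM):

```python
import urllib.request

# userData fetch against the link-local IMDS address from the log.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")
req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    userdata = resp.read()  # base64-encoded user data, as IMDS returns it
```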
Feb  8 23:13:04.658545 unknown[836]: fetched base config from "system"
Feb  8 23:13:04.658557 unknown[836]: fetched base config from "system"
Feb  8 23:13:04.659187 ignition[836]: fetch: fetch complete
Feb  8 23:13:04.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.658565 unknown[836]: fetched user config from "azure"
Feb  8 23:13:04.682222 kernel: audit: type=1130 audit(1707433984.664:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.659191 ignition[836]: fetch: fetch passed
Feb  8 23:13:04.662713 systemd[1]: Finished ignition-fetch.service.
Feb  8 23:13:04.659229 ignition[836]: Ignition finished successfully
Feb  8 23:13:04.666076 systemd[1]: Starting ignition-kargs.service...
Feb  8 23:13:04.687906 ignition[842]: Ignition 2.14.0
Feb  8 23:13:04.687912 ignition[842]: Stage: kargs
Feb  8 23:13:04.688007 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:04.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.694546 systemd[1]: Finished ignition-kargs.service.
Feb  8 23:13:04.715779 kernel: audit: type=1130 audit(1707433984.698:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.688028 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:04.704321 systemd[1]: Starting ignition-disks.service...
Feb  8 23:13:04.690821 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:04.693825 ignition[842]: kargs: kargs passed
Feb  8 23:13:04.693868 ignition[842]: Ignition finished successfully
Feb  8 23:13:04.715076 ignition[848]: Ignition 2.14.0
Feb  8 23:13:04.715081 ignition[848]: Stage: disks
Feb  8 23:13:04.715180 ignition[848]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:04.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.727054 systemd[1]: Finished ignition-disks.service.
Feb  8 23:13:04.746504 kernel: audit: type=1130 audit(1707433984.729:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.715208 ignition[848]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:04.730004 systemd[1]: Reached target initrd-root-device.target.
Feb  8 23:13:04.724133 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:04.746481 systemd[1]: Reached target local-fs-pre.target.
Feb  8 23:13:04.725519 ignition[848]: disks: disks passed
Feb  8 23:13:04.750855 systemd[1]: Reached target local-fs.target.
Feb  8 23:13:04.725558 ignition[848]: Ignition finished successfully
Feb  8 23:13:04.762276 systemd[1]: Reached target sysinit.target.
Feb  8 23:13:04.765885 systemd[1]: Reached target basic.target.
Feb  8 23:13:04.770316 systemd[1]: Starting systemd-fsck-root.service...
Feb  8 23:13:04.832526 systemd-fsck[856]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb  8 23:13:04.846081 systemd[1]: Finished systemd-fsck-root.service.
Feb  8 23:13:04.863935 kernel: audit: type=1130 audit(1707433984.849:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:04.859919 systemd[1]: Mounting sysroot.mount...
Feb  8 23:13:04.880431 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  8 23:13:04.880746 systemd[1]: Mounted sysroot.mount.
Feb  8 23:13:04.882480 systemd[1]: Reached target initrd-root-fs.target.
Feb  8 23:13:04.926321 systemd[1]: Mounting sysroot-usr.mount...
Feb  8 23:13:04.929953 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb  8 23:13:04.935597 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  8 23:13:04.935645 systemd[1]: Reached target ignition-diskful.target.
Feb  8 23:13:04.945107 systemd[1]: Mounted sysroot-usr.mount.
Feb  8 23:13:04.997512 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  8 23:13:05.003937 systemd[1]: Starting initrd-setup-root.service...
Feb  8 23:13:05.013198 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (867)
Feb  8 23:13:05.024017 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:13:05.024054 kernel: BTRFS info (device sda6): using free space tree
Feb  8 23:13:05.024067 kernel: BTRFS info (device sda6): has skinny extents
Feb  8 23:13:05.031485 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  8 23:13:05.041466 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory
Feb  8 23:13:05.070737 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory
Feb  8 23:13:05.076521 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory
Feb  8 23:13:05.097608 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  8 23:13:05.582656 systemd[1]: Finished initrd-setup-root.service.
Feb  8 23:13:05.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:05.588610 systemd[1]: Starting ignition-mount.service...
Feb  8 23:13:05.604136 kernel: audit: type=1130 audit(1707433985.587:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:05.604878 systemd[1]: Starting sysroot-boot.service...
Feb  8 23:13:05.607529 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb  8 23:13:05.607640 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb  8 23:13:05.629016 ignition[933]: INFO     : Ignition 2.14.0
Feb  8 23:13:05.631574 ignition[933]: INFO     : Stage: mount
Feb  8 23:13:05.633458 ignition[933]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:05.633458 ignition[933]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:05.636800 systemd[1]: Finished sysroot-boot.service.
Feb  8 23:13:05.658508 kernel: audit: type=1130 audit(1707433985.645:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:05.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:05.658587 ignition[933]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:05.658587 ignition[933]: INFO     : mount: mount passed
Feb  8 23:13:05.658587 ignition[933]: INFO     : Ignition finished successfully
Feb  8 23:13:05.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:05.646311 systemd[1]: Finished ignition-mount.service.
Feb  8 23:13:05.679591 kernel: audit: type=1130 audit(1707433985.664:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:06.533797 coreos-metadata[866]: Feb 08 23:13:06.533 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb  8 23:13:06.567606 coreos-metadata[866]: Feb 08 23:13:06.567 INFO Fetch successful
Feb  8 23:13:06.604023 coreos-metadata[866]: Feb 08 23:13:06.603 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb  8 23:13:06.622944 coreos-metadata[866]: Feb 08 23:13:06.622 INFO Fetch successful
Feb  8 23:13:06.640740 coreos-metadata[866]: Feb 08 23:13:06.640 INFO wrote hostname ci-3510.3.2-a-4203397181 to /sysroot/etc/hostname
Feb  8 23:13:06.647209 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb  8 23:13:06.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:06.653137 systemd[1]: Starting ignition-files.service...
Feb  8 23:13:06.666407 kernel: audit: type=1130 audit(1707433986.652:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:13:06.671485 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  8 23:13:06.682494 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945)
Feb  8 23:13:06.691812 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:13:06.691842 kernel: BTRFS info (device sda6): using free space tree
Feb  8 23:13:06.691860 kernel: BTRFS info (device sda6): has skinny extents
Feb  8 23:13:06.700137 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  8 23:13:06.713885 ignition[964]: INFO     : Ignition 2.14.0
Feb  8 23:13:06.713885 ignition[964]: INFO     : Stage: files
Feb  8 23:13:06.717670 ignition[964]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:13:06.717670 ignition[964]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:13:06.730259 ignition[964]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:13:06.751020 ignition[964]: DEBUG    : files: compiled without relabeling support, skipping
Feb  8 23:13:06.754294 ignition[964]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  8 23:13:06.754294 ignition[964]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  8 23:13:06.826804 ignition[964]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  8 23:13:06.831149 ignition[964]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  8 23:13:06.843065 unknown[964]: wrote ssh authorized keys file for user: core
Feb  8 23:13:06.845730 ignition[964]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  8 23:13:06.868334 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb  8 23:13:06.873665 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb  8 23:13:07.627801 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb  8 23:13:08.028006 ignition[964]: DEBUG    : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb  8 23:13:08.037222 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb  8 23:13:08.037222 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  8 23:13:08.037222 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb  8 23:13:08.414166 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  8 23:13:08.575879 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  8 23:13:08.582136 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb  8 23:13:08.582136 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb  8 23:13:09.160306 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  8 23:13:09.389617 ignition[964]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb  8 23:13:09.397693 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb  8 23:13:09.397693 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubectl"
Feb  8 23:13:09.397693 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb  8 23:13:10.236818 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb  8 23:13:27.937114 ignition[964]: DEBUG    : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb  8 23:13:27.945524 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb  8 23:13:27.945524 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  8 23:13:27.945524 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb  8 23:13:28.788191 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  8 23:13:53.716000 ignition[964]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb  8 23:13:53.725290 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  8 23:13:53.725290 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  8 23:13:53.725290 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb  8 23:13:54.449530 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb  8 23:14:51.754918 ignition[964]: DEBUG    : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb  8 23:14:51.763351 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  8 23:14:51.763351 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  8 23:14:51.763351 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  8 23:14:51.763351 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  8 23:14:51.763351 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb  8 23:14:52.396664 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb  8 23:14:53.072524 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/home/core/install.sh"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  8 23:14:53.078499 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  8 23:14:53.109398 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  8 23:14:53.109398 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  8 23:14:53.109398 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  8 23:14:53.109398 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/etc/systemd/system/waagent.service"
Feb  8 23:14:53.127013 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb  8 23:14:53.135750 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): op(11): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem833441106"
Feb  8 23:14:53.135750 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem833441106": device or resource busy
Feb  8 23:14:53.135750 ignition[964]: ERROR    : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem833441106", trying btrfs: device or resource busy
Feb  8 23:14:53.135750 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): op(12): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem833441106"
Feb  8 23:14:53.165679 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (967)
Feb  8 23:14:53.155371 systemd[1]: mnt-oem833441106.mount: Deactivated successfully.
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem833441106"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): op(13): [started]  unmounting "/mnt/oem833441106"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem833441106"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): [started]  writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): op(15): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3172167485"
Feb  8 23:14:53.168344 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3172167485": device or resource busy
Feb  8 23:14:53.168344 ignition[964]: ERROR    : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3172167485", trying btrfs: device or resource busy
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): op(16): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3172167485"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3172167485"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): op(17): [started]  unmounting "/mnt/oem3172167485"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3172167485"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: op(18): [started]  processing unit "nvidia.service"
Feb  8 23:14:53.168344 ignition[964]: INFO     : files: op(18): [finished] processing unit "nvidia.service"
Feb  8 23:14:53.211182 kernel: audit: type=1130 audit(1707434093.175:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.169287 systemd[1]: Finished ignition-files.service.
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(19): [started]  processing unit "waagent.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(19): [finished] processing unit "waagent.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1a): [started]  processing unit "prepare-cni-plugins.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1a): op(1b): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1c): [started]  processing unit "prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1c): op(1d): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1c): [finished] processing unit "prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1e): [started]  processing unit "prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1e): op(1f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(1e): [finished] processing unit "prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(20): [started]  setting preset to enabled for "prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(20): [finished] setting preset to enabled for "prepare-critools.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(21): [started]  setting preset to enabled for "prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(21): [finished] setting preset to enabled for "prepare-helm.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(22): [started]  setting preset to enabled for "nvidia.service"
Feb  8 23:14:53.258695 ignition[964]: INFO     : files: op(22): [finished] setting preset to enabled for "nvidia.service"
Feb  8 23:14:53.350963 kernel: audit: type=1130 audit(1707434093.260:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.190165 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: op(23): [started]  setting preset to enabled for "waagent.service"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: op(23): [finished] setting preset to enabled for "waagent.service"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: op(24): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: createResultFile: createFiles: op(25): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  8 23:14:53.353465 ignition[964]: INFO     : files: files passed
Feb  8 23:14:53.353465 ignition[964]: INFO     : Ignition finished successfully
Feb  8 23:14:53.441198 kernel: audit: type=1130 audit(1707434093.358:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.441235 kernel: audit: type=1131 audit(1707434093.358:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.441255 kernel: audit: type=1130 audit(1707434093.396:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.441273 kernel: audit: type=1131 audit(1707434093.396:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.200481 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  8 23:14:53.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.448627 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  8 23:14:53.464111 kernel: audit: type=1130 audit(1707434093.448:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.205060 systemd[1]: Starting ignition-quench.service...
Feb  8 23:14:53.218376 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  8 23:14:53.262547 systemd[1]: Reached target ignition-complete.target.
Feb  8 23:14:53.353726 systemd[1]: Starting initrd-parse-etc.service...
Feb  8 23:14:53.356111 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  8 23:14:53.356220 systemd[1]: Finished ignition-quench.service.
Feb  8 23:14:53.392714 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  8 23:14:53.392811 systemd[1]: Finished initrd-parse-etc.service.
Feb  8 23:14:53.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.396913 systemd[1]: Reached target initrd-fs.target.
Feb  8 23:14:53.505444 kernel: audit: type=1131 audit(1707434093.488:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.424314 systemd[1]: Reached target initrd.target.
Feb  8 23:14:53.430625 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  8 23:14:53.431352 systemd[1]: Starting dracut-pre-pivot.service...
Feb  8 23:14:53.446382 systemd[1]: Finished dracut-pre-pivot.service.
Feb  8 23:14:53.465515 systemd[1]: Starting initrd-cleanup.service...
Feb  8 23:14:53.473387 systemd[1]: Stopped target nss-lookup.target.
Feb  8 23:14:53.476263 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  8 23:14:53.480956 systemd[1]: Stopped target timers.target.
Feb  8 23:14:53.484701 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  8 23:14:53.484836 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  8 23:14:53.500742 systemd[1]: Stopped target initrd.target.
Feb  8 23:14:53.505575 systemd[1]: Stopped target basic.target.
Feb  8 23:14:53.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.509186 systemd[1]: Stopped target ignition-complete.target.
Feb  8 23:14:53.566546 kernel: audit: type=1131 audit(1707434093.549:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.514494 systemd[1]: Stopped target ignition-diskful.target.
Feb  8 23:14:53.518523 systemd[1]: Stopped target initrd-root-device.target.
Feb  8 23:14:53.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.522731 systemd[1]: Stopped target remote-fs.target.
Feb  8 23:14:53.587150 kernel: audit: type=1131 audit(1707434093.570:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.526638 systemd[1]: Stopped target remote-fs-pre.target.
Feb  8 23:14:53.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.530819 systemd[1]: Stopped target sysinit.target.
Feb  8 23:14:53.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.534747 systemd[1]: Stopped target local-fs.target.
Feb  8 23:14:53.538576 systemd[1]: Stopped target local-fs-pre.target.
Feb  8 23:14:53.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.618201 iscsid[811]: iscsid shutting down.
Feb  8 23:14:53.542849 systemd[1]: Stopped target swap.target.
Feb  8 23:14:53.622132 ignition[1002]: INFO     : Ignition 2.14.0
Feb  8 23:14:53.622132 ignition[1002]: INFO     : Stage: umount
Feb  8 23:14:53.622132 ignition[1002]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:14:53.622132 ignition[1002]: DEBUG    : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb  8 23:14:53.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.546382 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  8 23:14:53.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.652958 ignition[1002]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb  8 23:14:53.652958 ignition[1002]: INFO     : umount: umount passed
Feb  8 23:14:53.652958 ignition[1002]: INFO     : Ignition finished successfully
Feb  8 23:14:53.546530 systemd[1]: Stopped dracut-pre-mount.service.
Feb  8 23:14:53.561221 systemd[1]: Stopped target cryptsetup.target.
Feb  8 23:14:53.566623 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  8 23:14:53.566757 systemd[1]: Stopped dracut-initqueue.service.
Feb  8 23:14:53.582145 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  8 23:14:53.582291 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  8 23:14:53.587210 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  8 23:14:53.587343 systemd[1]: Stopped ignition-files.service.
Feb  8 23:14:53.591168 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb  8 23:14:53.591288 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb  8 23:14:53.596059 systemd[1]: Stopping ignition-mount.service...
Feb  8 23:14:53.599011 systemd[1]: Stopping iscsid.service...
Feb  8 23:14:53.600798 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  8 23:14:53.600967 systemd[1]: Stopped kmod-static-nodes.service.
Feb  8 23:14:53.604231 systemd[1]: Stopping sysroot-boot.service...
Feb  8 23:14:53.606278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  8 23:14:53.606490 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  8 23:14:53.609025 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  8 23:14:53.609181 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  8 23:14:53.613746 systemd[1]: iscsid.service: Deactivated successfully.
Feb  8 23:14:53.613857 systemd[1]: Stopped iscsid.service.
Feb  8 23:14:53.626927 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  8 23:14:53.627028 systemd[1]: Finished initrd-cleanup.service.
Feb  8 23:14:53.629747 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  8 23:14:53.629813 systemd[1]: Stopped ignition-mount.service.
Feb  8 23:14:53.635199 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  8 23:14:53.635239 systemd[1]: Stopped ignition-disks.service.
Feb  8 23:14:53.642466 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  8 23:14:53.642513 systemd[1]: Stopped ignition-kargs.service.
Feb  8 23:14:53.644470 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb  8 23:14:53.644515 systemd[1]: Stopped ignition-fetch.service.
Feb  8 23:14:53.646483 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  8 23:14:53.646531 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  8 23:14:53.648762 systemd[1]: Stopped target paths.target.
Feb  8 23:14:53.652874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  8 23:14:53.658452 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  8 23:14:53.664562 systemd[1]: Stopped target slices.target.
Feb  8 23:14:53.668144 systemd[1]: Stopped target sockets.target.
Feb  8 23:14:53.675511 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  8 23:14:53.675557 systemd[1]: Closed iscsid.socket.
Feb  8 23:14:53.679240 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  8 23:14:53.682009 systemd[1]: Stopped ignition-setup.service.
Feb  8 23:14:53.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.755004 systemd[1]: Stopping iscsiuio.service...
Feb  8 23:14:53.759605 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  8 23:14:53.762176 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb  8 23:14:53.764389 systemd[1]: Stopped iscsiuio.service.
Feb  8 23:14:53.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.768002 systemd[1]: Stopped target network.target.
Feb  8 23:14:53.771614 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  8 23:14:53.771657 systemd[1]: Closed iscsiuio.socket.
Feb  8 23:14:53.777177 systemd[1]: Stopping systemd-networkd.service...
Feb  8 23:14:53.780923 systemd[1]: Stopping systemd-resolved.service...
Feb  8 23:14:53.784656 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  8 23:14:53.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.784729 systemd[1]: Stopped sysroot-boot.service.
Feb  8 23:14:53.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.786912 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  8 23:14:53.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.786953 systemd[1]: Stopped initrd-setup-root.service.
Feb  8 23:14:53.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.791574 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  8 23:14:53.791667 systemd[1]: Stopped systemd-resolved.service.
Feb  8 23:14:53.803000 audit: BPF prog-id=6 op=UNLOAD
Feb  8 23:14:53.793630 systemd-networkd[803]: eth0: DHCPv6 lease lost
Feb  8 23:14:53.807000 audit: BPF prog-id=9 op=UNLOAD
Feb  8 23:14:53.797976 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  8 23:14:53.798069 systemd[1]: Stopped systemd-networkd.service.
Feb  8 23:14:53.804509 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  8 23:14:53.804542 systemd[1]: Closed systemd-networkd.socket.
Feb  8 23:14:53.808875 systemd[1]: Stopping network-cleanup.service...
Feb  8 23:14:53.821730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  8 23:14:53.823094 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  8 23:14:53.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.828441 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  8 23:14:53.829367 systemd[1]: Stopped systemd-sysctl.service.
Feb  8 23:14:53.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.834671 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  8 23:14:53.836022 systemd[1]: Stopped systemd-modules-load.service.
Feb  8 23:14:53.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.841840 systemd[1]: Stopping systemd-udevd.service...
Feb  8 23:14:53.846660 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  8 23:14:53.850298 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  8 23:14:53.852883 systemd[1]: Stopped systemd-udevd.service.
Feb  8 23:14:53.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.857285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  8 23:14:53.857347 systemd[1]: Closed systemd-udevd-control.socket.
Feb  8 23:14:53.863859 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  8 23:14:53.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.863903 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  8 23:14:53.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.868430 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  8 23:14:53.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.868467 systemd[1]: Stopped dracut-pre-udev.service.
Feb  8 23:14:53.870474 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  8 23:14:53.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.870524 systemd[1]: Stopped dracut-cmdline.service.
Feb  8 23:14:53.874766 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  8 23:14:53.874813 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  8 23:14:53.879923 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  8 23:14:53.883466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  8 23:14:53.883535 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  8 23:14:53.888445 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  8 23:14:53.888540 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  8 23:14:53.921477 kernel: hv_netvsc 000d3a67-5ef5-000d-3a67-5ef5000d3a67 eth0: Data path switched from VF: enP6952s1
Feb  8 23:14:53.941871 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  8 23:14:53.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:14:53.941990 systemd[1]: Stopped network-cleanup.service.
Feb  8 23:14:53.947247 systemd[1]: Reached target initrd-switch-root.target.
Feb  8 23:14:53.952242 systemd[1]: Starting initrd-switch-root.service...
Feb  8 23:14:53.964649 systemd[1]: Switching root.
Feb  8 23:14:53.992830 systemd-journald[183]: Journal stopped
Feb  8 23:15:08.312262 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb  8 23:15:08.312289 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  8 23:15:08.312302 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  8 23:15:08.312316 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  8 23:15:08.312325 kernel: SELinux:  policy capability network_peer_controls=1
Feb  8 23:15:08.312333 kernel: SELinux:  policy capability open_perms=1
Feb  8 23:15:08.312347 kernel: SELinux:  policy capability extended_socket_class=1
Feb  8 23:15:08.312357 kernel: SELinux:  policy capability always_check_network=0
Feb  8 23:15:08.312366 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  8 23:15:08.312374 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  8 23:15:08.312385 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  8 23:15:08.312395 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  8 23:15:08.312405 systemd[1]: Successfully loaded SELinux policy in 335.957ms.
Feb  8 23:15:08.312424 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.477ms.
Feb  8 23:15:08.312441 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  8 23:15:08.312451 systemd[1]: Detected virtualization microsoft.
Feb  8 23:15:08.312462 systemd[1]: Detected architecture x86-64.
Feb  8 23:15:08.312472 systemd[1]: Detected first boot.
Feb  8 23:15:08.312486 systemd[1]: Hostname set to <ci-3510.3.2-a-4203397181>.
Feb  8 23:15:08.312500 systemd[1]: Initializing machine ID from random generator.
Feb  8 23:15:08.312516 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  8 23:15:08.312532 kernel: kauditd_printk_skb: 41 callbacks suppressed
Feb  8 23:15:08.312549 kernel: audit: type=1400 audit(1707434098.894:89): avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb  8 23:15:08.312569 kernel: audit: type=1300 audit(1707434098.894:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:08.312594 kernel: audit: type=1327 audit(1707434098.894:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb  8 23:15:08.312612 kernel: audit: type=1400 audit(1707434098.902:90): avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb  8 23:15:08.312632 kernel: audit: type=1300 audit(1707434098.902:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:08.312649 kernel: audit: type=1307 audit(1707434098.902:90): cwd="/"
Feb  8 23:15:08.312666 kernel: audit: type=1302 audit(1707434098.902:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:08.312683 kernel: audit: type=1302 audit(1707434098.902:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:08.312704 kernel: audit: type=1327 audit(1707434098.902:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb  8 23:15:08.312725 systemd[1]: Populated /etc with preset unit settings.
Feb  8 23:15:08.312742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:15:08.312759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:15:08.312784 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:15:08.312802 kernel: audit: type=1334 audit(1707434107.833:91): prog-id=12 op=LOAD
Feb  8 23:15:08.312821 kernel: audit: type=1334 audit(1707434107.833:92): prog-id=3 op=UNLOAD
Feb  8 23:15:08.312838 kernel: audit: type=1334 audit(1707434107.844:93): prog-id=13 op=LOAD
Feb  8 23:15:08.312862 kernel: audit: type=1334 audit(1707434107.849:94): prog-id=14 op=LOAD
Feb  8 23:15:08.312883 kernel: audit: type=1334 audit(1707434107.849:95): prog-id=4 op=UNLOAD
Feb  8 23:15:08.312903 kernel: audit: type=1334 audit(1707434107.849:96): prog-id=5 op=UNLOAD
Feb  8 23:15:08.312920 kernel: audit: type=1334 audit(1707434107.854:97): prog-id=15 op=LOAD
Feb  8 23:15:08.312938 kernel: audit: type=1334 audit(1707434107.854:98): prog-id=12 op=UNLOAD
Feb  8 23:15:08.312954 kernel: audit: type=1334 audit(1707434107.859:99): prog-id=16 op=LOAD
Feb  8 23:15:08.312970 kernel: audit: type=1334 audit(1707434107.864:100): prog-id=17 op=LOAD
Feb  8 23:15:08.312988 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb  8 23:15:08.313008 systemd[1]: Stopped initrd-switch-root.service.
Feb  8 23:15:08.313033 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  8 23:15:08.313054 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  8 23:15:08.313074 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  8 23:15:08.313094 systemd[1]: Created slice system-getty.slice.
Feb  8 23:15:08.313111 systemd[1]: Created slice system-modprobe.slice.
Feb  8 23:15:08.313131 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  8 23:15:08.313151 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  8 23:15:08.313171 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  8 23:15:08.313194 systemd[1]: Created slice user.slice.
Feb  8 23:15:08.313211 systemd[1]: Started systemd-ask-password-console.path.
Feb  8 23:15:08.313230 systemd[1]: Started systemd-ask-password-wall.path.
Feb  8 23:15:08.313248 systemd[1]: Set up automount boot.automount.
Feb  8 23:15:08.313266 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  8 23:15:08.313287 systemd[1]: Stopped target initrd-switch-root.target.
Feb  8 23:15:08.313305 systemd[1]: Stopped target initrd-fs.target.
Feb  8 23:15:08.313324 systemd[1]: Stopped target initrd-root-fs.target.
Feb  8 23:15:08.313344 systemd[1]: Reached target integritysetup.target.
Feb  8 23:15:08.313358 systemd[1]: Reached target remote-cryptsetup.target.
Feb  8 23:15:08.313372 systemd[1]: Reached target remote-fs.target.
Feb  8 23:15:08.313386 systemd[1]: Reached target slices.target.
Feb  8 23:15:08.313401 systemd[1]: Reached target swap.target.
Feb  8 23:15:08.318358 systemd[1]: Reached target torcx.target.
Feb  8 23:15:08.318386 systemd[1]: Reached target veritysetup.target.
Feb  8 23:15:08.318405 systemd[1]: Listening on systemd-coredump.socket.
Feb  8 23:15:08.318461 systemd[1]: Listening on systemd-initctl.socket.
Feb  8 23:15:08.318472 systemd[1]: Listening on systemd-networkd.socket.
Feb  8 23:15:08.318486 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  8 23:15:08.318499 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  8 23:15:08.318512 systemd[1]: Listening on systemd-userdbd.socket.
Feb  8 23:15:08.318524 systemd[1]: Mounting dev-hugepages.mount...
Feb  8 23:15:08.318536 systemd[1]: Mounting dev-mqueue.mount...
Feb  8 23:15:08.318547 systemd[1]: Mounting media.mount...
Feb  8 23:15:08.318559 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  8 23:15:08.318570 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  8 23:15:08.318582 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  8 23:15:08.318592 systemd[1]: Mounting tmp.mount...
Feb  8 23:15:08.318605 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  8 23:15:08.318620 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  8 23:15:08.318630 systemd[1]: Starting kmod-static-nodes.service...
Feb  8 23:15:08.318643 systemd[1]: Starting modprobe@configfs.service...
Feb  8 23:15:08.318655 systemd[1]: Starting modprobe@dm_mod.service...
Feb  8 23:15:08.318665 systemd[1]: Starting modprobe@drm.service...
Feb  8 23:15:08.318677 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  8 23:15:08.318688 systemd[1]: Starting modprobe@fuse.service...
Feb  8 23:15:08.318701 systemd[1]: Starting modprobe@loop.service...
Feb  8 23:15:08.318711 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  8 23:15:08.318726 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb  8 23:15:08.318739 systemd[1]: Stopped systemd-fsck-root.service.
Feb  8 23:15:08.318749 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb  8 23:15:08.318761 systemd[1]: Stopped systemd-fsck-usr.service.
Feb  8 23:15:08.318773 systemd[1]: Stopped systemd-journald.service.
Feb  8 23:15:08.318784 kernel: loop: module loaded
Feb  8 23:15:08.318795 systemd[1]: Starting systemd-journald.service...
Feb  8 23:15:08.318806 systemd[1]: Starting systemd-modules-load.service...
Feb  8 23:15:08.318819 systemd[1]: Starting systemd-network-generator.service...
Feb  8 23:15:08.318831 systemd[1]: Starting systemd-remount-fs.service...
Feb  8 23:15:08.318843 systemd[1]: Starting systemd-udev-trigger.service...
Feb  8 23:15:08.318856 systemd[1]: verity-setup.service: Deactivated successfully.
Feb  8 23:15:08.318866 systemd[1]: Stopped verity-setup.service.
Feb  8 23:15:08.318878 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  8 23:15:08.318889 systemd[1]: Mounted dev-hugepages.mount.
Feb  8 23:15:08.318901 systemd[1]: Mounted dev-mqueue.mount.
Feb  8 23:15:08.318911 systemd[1]: Mounted media.mount.
Feb  8 23:15:08.318923 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  8 23:15:08.318938 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  8 23:15:08.318948 systemd[1]: Mounted tmp.mount.
Feb  8 23:15:08.318961 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  8 23:15:08.318972 kernel: fuse: init (API version 7.34)
Feb  8 23:15:08.318982 systemd[1]: Finished kmod-static-nodes.service.
Feb  8 23:15:08.318998 systemd-journald[1140]: Journal started
Feb  8 23:15:08.319050 systemd-journald[1140]: Runtime Journal (/run/log/journal/c8a3448ebd414d75af4a2b5251101cc1) is 8.0M, max 159.0M, 151.0M free.
Feb  8 23:14:56.638000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  8 23:14:57.356000 audit[1]: AVC avc:  denied  { integrity } for  pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  8 23:14:57.375000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  8 23:14:57.375000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  8 23:14:57.376000 audit: BPF prog-id=10 op=LOAD
Feb  8 23:14:57.376000 audit: BPF prog-id=10 op=UNLOAD
Feb  8 23:14:57.376000 audit: BPF prog-id=11 op=LOAD
Feb  8 23:14:57.376000 audit: BPF prog-id=11 op=UNLOAD
Feb  8 23:14:58.894000 audit[1035]: AVC avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb  8 23:14:58.894000 audit[1035]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:14:58.894000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb  8 23:14:58.902000 audit[1035]: AVC avc:  denied  { associate } for  pid=1035 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb  8 23:14:58.902000 audit[1035]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:14:58.902000 audit: CWD cwd="/"
Feb  8 23:14:58.902000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:14:58.902000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:14:58.902000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
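[editor's note] The PROCTITLE fields in the audit records above carry the generator's command line hex-encoded, with NUL bytes separating arguments (the kernel truncates the buffer, which is why the last argument ends mid-path). A minimal Python decoder for such a field:

    def decode_proctitle(hex_field: str) -> str:
        """Decode an audit PROCTITLE hex field into a readable command line."""
        raw = bytes.fromhex(hex_field)
        # Arguments are NUL-separated in the raw proctitle buffer.
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

Applied to the field above, it yields "/usr/lib/systemd/system-generators/torcx-generator /run/systemd/generator /run/systemd/generator.early /run/systemd/generator.la" (the trailing argument is cut off in the record itself).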
Feb  8 23:15:07.833000 audit: BPF prog-id=12 op=LOAD
Feb  8 23:15:07.833000 audit: BPF prog-id=3 op=UNLOAD
Feb  8 23:15:07.844000 audit: BPF prog-id=13 op=LOAD
Feb  8 23:15:07.849000 audit: BPF prog-id=14 op=LOAD
Feb  8 23:15:07.849000 audit: BPF prog-id=4 op=UNLOAD
Feb  8 23:15:07.849000 audit: BPF prog-id=5 op=UNLOAD
Feb  8 23:15:07.854000 audit: BPF prog-id=15 op=LOAD
Feb  8 23:15:07.854000 audit: BPF prog-id=12 op=UNLOAD
Feb  8 23:15:07.859000 audit: BPF prog-id=16 op=LOAD
Feb  8 23:15:07.864000 audit: BPF prog-id=17 op=LOAD
Feb  8 23:15:07.864000 audit: BPF prog-id=13 op=UNLOAD
Feb  8 23:15:07.864000 audit: BPF prog-id=14 op=UNLOAD
Feb  8 23:15:07.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:07.890000 audit: BPF prog-id=15 op=UNLOAD
Feb  8 23:15:07.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:07.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.214000 audit: BPF prog-id=18 op=LOAD
Feb  8 23:15:08.214000 audit: BPF prog-id=19 op=LOAD
Feb  8 23:15:08.214000 audit: BPF prog-id=20 op=LOAD
Feb  8 23:15:08.214000 audit: BPF prog-id=16 op=UNLOAD
Feb  8 23:15:08.214000 audit: BPF prog-id=17 op=UNLOAD
Feb  8 23:15:08.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.309000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  8 23:15:08.309000 audit[1140]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffea1edb730 a2=4000 a3=7ffea1edb7cc items=0 ppid=1 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:08.309000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  8 23:15:08.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:07.832276 systemd[1]: Queued start job for default target multi-user.target.
Feb  8 23:14:58.843826 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:15:07.865603 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  8 23:14:58.862477 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb  8 23:14:58.862503 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb  8 23:14:58.862549 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb  8 23:14:58.862561 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb  8 23:14:58.862617 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb  8 23:14:58.862636 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb  8 23:14:58.862877 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb  8 23:14:58.862932 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb  8 23:14:58.862954 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb  8 23:14:58.878620 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb  8 23:14:58.878718 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb  8 23:14:58.878740 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb  8 23:14:58.878763 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb  8 23:14:58.878786 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb  8 23:14:58.878802 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:14:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb  8 23:15:06.575919 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  8 23:15:06.576168 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  8 23:15:06.576292 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  8 23:15:06.576514 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  8 23:15:06.576572 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb  8 23:15:06.576632 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:15:06Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
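[editor's note] As the final record above shows, torcx seals its applied state into /run/metadata/torcx as simple KEY="value" lines (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, and so on) for later units to consume. A minimal Python sketch for reading that environment-file format back; the quoting handled here is only the double-quote style visible in the record, not a full shell parser:

    def parse_env_file(path: str) -> dict[str, str]:
        """Parse KEY="value" lines such as those in /run/metadata/torcx."""
        state: dict[str, str] = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                state[key] = value.strip().strip('"')
        return state

    # e.g. parse_env_file("/run/metadata/torcx")["TORCX_BINDIR"] -> "/run/torcx/bin"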
Feb  8 23:15:08.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.332106 systemd[1]: Started systemd-journald.service.
Feb  8 23:15:08.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.332725 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  8 23:15:08.332871 systemd[1]: Finished modprobe@configfs.service.
Feb  8 23:15:08.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.335107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  8 23:15:08.335248 systemd[1]: Finished modprobe@dm_mod.service.
Feb  8 23:15:08.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.337592 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  8 23:15:08.337729 systemd[1]: Finished modprobe@drm.service.
Feb  8 23:15:08.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.339811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  8 23:15:08.339947 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  8 23:15:08.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.342408 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  8 23:15:08.342621 systemd[1]: Finished modprobe@fuse.service.
Feb  8 23:15:08.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.344952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  8 23:15:08.345109 systemd[1]: Finished modprobe@loop.service.
Feb  8 23:15:08.347362 systemd[1]: Finished systemd-network-generator.service.
Feb  8 23:15:08.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.349857 systemd[1]: Finished systemd-remount-fs.service.
Feb  8 23:15:08.352331 systemd[1]: Reached target network-pre.target.
Feb  8 23:15:08.355537 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  8 23:15:08.359217 systemd[1]: Mounting sys-kernel-config.mount...
Feb  8 23:15:08.363523 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  8 23:15:08.366296 systemd[1]: Starting systemd-hwdb-update.service...
Feb  8 23:15:08.370032 systemd[1]: Starting systemd-journal-flush.service...
Feb  8 23:15:08.372559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  8 23:15:08.373988 systemd[1]: Starting systemd-random-seed.service...
Feb  8 23:15:08.376344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  8 23:15:08.377908 systemd[1]: Starting systemd-sysusers.service...
Feb  8 23:15:08.383716 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  8 23:15:08.394035 systemd[1]: Finished systemd-modules-load.service.
Feb  8 23:15:08.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.396635 systemd[1]: Mounted sys-kernel-config.mount.
Feb  8 23:15:08.400356 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:15:08.424340 systemd[1]: Finished systemd-udev-trigger.service.
Feb  8 23:15:08.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.427392 systemd[1]: Finished systemd-random-seed.service.
Feb  8 23:15:08.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.430156 systemd[1]: Reached target first-boot-complete.target.
Feb  8 23:15:08.433765 systemd[1]: Starting systemd-udev-settle.service...
Feb  8 23:15:08.443123 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb  8 23:15:08.445388 systemd-journald[1140]: Time spent on flushing to /var/log/journal/c8a3448ebd414d75af4a2b5251101cc1 is 23.026ms for 1203 entries.
Feb  8 23:15:08.445388 systemd-journald[1140]: System Journal (/var/log/journal/c8a3448ebd414d75af4a2b5251101cc1) is 8.0M, max 2.6G, 2.6G free.
Feb  8 23:15:08.524797 systemd-journald[1140]: Received client request to flush runtime journal.
Feb  8 23:15:08.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:08.471521 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:15:08.525694 systemd[1]: Finished systemd-journal-flush.service.
Feb  8 23:15:08.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
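[editor's note] The journald statistics above put the runtime-to-persistent flush at 23.026 ms for 1203 entries, i.e. roughly 19 µs per entry; a one-line sanity check of that figure:

    print(f"{23.026 / 1203 * 1000:.1f} µs per entry")  # -> 19.1 µs per entry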
Feb  8 23:15:08.944527 systemd[1]: Finished systemd-sysusers.service.
Feb  8 23:15:08.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:09.692749 systemd[1]: Finished systemd-hwdb-update.service.
Feb  8 23:15:09.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:09.695000 audit: BPF prog-id=21 op=LOAD
Feb  8 23:15:09.695000 audit: BPF prog-id=22 op=LOAD
Feb  8 23:15:09.696000 audit: BPF prog-id=7 op=UNLOAD
Feb  8 23:15:09.696000 audit: BPF prog-id=8 op=UNLOAD
Feb  8 23:15:09.697149 systemd[1]: Starting systemd-udevd.service...
Feb  8 23:15:09.714331 systemd-udevd[1161]: Using default interface naming scheme 'v252'.
Feb  8 23:15:10.000519 systemd[1]: Started systemd-udevd.service.
Feb  8 23:15:10.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:10.004000 audit: BPF prog-id=23 op=LOAD
Feb  8 23:15:10.005942 systemd[1]: Starting systemd-networkd.service...
Feb  8 23:15:10.036379 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb  8 23:15:10.123565 kernel: mousedev: PS/2 mouse device common for all mice
Feb  8 23:15:10.147000 audit: BPF prog-id=24 op=LOAD
Feb  8 23:15:10.147000 audit: BPF prog-id=25 op=LOAD
Feb  8 23:15:10.147000 audit: BPF prog-id=26 op=LOAD
Feb  8 23:15:10.148927 systemd[1]: Starting systemd-userdbd.service...
Feb  8 23:15:10.144000 audit[1180]: AVC avc:  denied  { confidentiality } for  pid=1180 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  8 23:15:10.163432 kernel: hv_vmbus: registering driver hv_balloon
Feb  8 23:15:10.168494 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb  8 23:15:10.184557 kernel: hv_utils: Registering HyperV Utility Driver
Feb  8 23:15:10.184637 kernel: hv_vmbus: registering driver hv_utils
Feb  8 23:15:10.198432 kernel: hv_vmbus: registering driver hyperv_fb
Feb  8 23:15:10.216792 kernel: hv_utils: Heartbeat IC version 3.0
Feb  8 23:15:10.216883 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb  8 23:15:10.216920 kernel: hv_utils: Shutdown IC version 3.2
Feb  8 23:15:10.216951 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb  8 23:15:10.216986 kernel: hv_utils: TimeSync IC version 4.0
Feb  8 23:15:10.217564 kernel: Console: switching to colour dummy device 80x25
Feb  8 23:15:11.037074 kernel: Console: switching to colour frame buffer device 128x48
Feb  8 23:15:11.040098 systemd[1]: Started systemd-userdbd.service.
Feb  8 23:15:10.144000 audit[1180]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561fa6e6e090 a1=f884 a2=7f02cf69dbc5 a3=5 items=12 ppid=1161 pid=1180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:10.144000 audit: CWD cwd="/"
Feb  8 23:15:10.144000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=1 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=2 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:11.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:10.144000 audit: PATH item=3 name=(null) inode=15210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=4 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=5 name=(null) inode=15211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=6 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=7 name=(null) inode=15212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=8 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=9 name=(null) inode=15213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=10 name=(null) inode=15209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PATH item=11 name=(null) inode=15214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:15:10.144000 audit: PROCTITLE proctitle="(udev-worker)"
Feb  8 23:15:11.185071 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1170)
Feb  8 23:15:11.235159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  8 23:15:11.278071 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb  8 23:15:11.312420 systemd[1]: Finished systemd-udev-settle.service.
Feb  8 23:15:11.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:11.315959 systemd[1]: Starting lvm2-activation-early.service...
Feb  8 23:15:11.381871 systemd-networkd[1167]: lo: Link UP
Feb  8 23:15:11.381882 systemd-networkd[1167]: lo: Gained carrier
Feb  8 23:15:11.382469 systemd-networkd[1167]: Enumeration completed
Feb  8 23:15:11.382610 systemd[1]: Started systemd-networkd.service.
Feb  8 23:15:11.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:11.386686 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  8 23:15:11.452635 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  8 23:15:11.507075 kernel: mlx5_core 1b28:00:02.0 enP6952s1: Link up
Feb  8 23:15:11.545084 kernel: hv_netvsc 000d3a67-5ef5-000d-3a67-5ef5000d3a67 eth0: Data path switched to VF: enP6952s1
Feb  8 23:15:11.546323 systemd-networkd[1167]: enP6952s1: Link UP
Feb  8 23:15:11.546646 systemd-networkd[1167]: eth0: Link UP
Feb  8 23:15:11.546746 systemd-networkd[1167]: eth0: Gained carrier
Feb  8 23:15:11.551319 systemd-networkd[1167]: enP6952s1: Gained carrier
Feb  8 23:15:11.588183 systemd-networkd[1167]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
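[editor's note] The lease above is handed out by 168.63.129.16, the fixed virtual public IP Azure uses for platform services (DHCP, DNS, instance health probes) on every virtual network. A quick standard-library check that the acquired address and gateway from the line above share the advertised /24:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.39/24")  # address/prefix from the lease
    gateway = ipaddress.ip_address("10.200.8.1")      # gateway from the lease

    assert gateway in iface.network
    print(iface.network)  # 10.200.8.0/24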
Feb  8 23:15:11.748852 lvm[1238]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  8 23:15:11.777095 systemd[1]: Finished lvm2-activation-early.service.
Feb  8 23:15:11.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:11.779921 systemd[1]: Reached target cryptsetup.target.
Feb  8 23:15:11.783450 systemd[1]: Starting lvm2-activation.service...
Feb  8 23:15:11.789358 lvm[1240]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  8 23:15:11.815992 systemd[1]: Finished lvm2-activation.service.
Feb  8 23:15:11.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:11.818847 systemd[1]: Reached target local-fs-pre.target.
Feb  8 23:15:11.821336 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  8 23:15:11.821377 systemd[1]: Reached target local-fs.target.
Feb  8 23:15:11.823650 systemd[1]: Reached target machines.target.
Feb  8 23:15:11.826931 systemd[1]: Starting ldconfig.service...
Feb  8 23:15:11.829228 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  8 23:15:11.829333 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:15:11.830462 systemd[1]: Starting systemd-boot-update.service...
Feb  8 23:15:11.833973 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  8 23:15:11.837848 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  8 23:15:11.839994 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  8 23:15:11.840102 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  8 23:15:11.841243 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  8 23:15:11.868315 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1242 (bootctl)
Feb  8 23:15:11.869787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  8 23:15:12.050735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  8 23:15:12.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:12.104346 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  8 23:15:12.104993 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  8 23:15:12.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:12.204796 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  8 23:15:12.254602 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  8 23:15:12.306678 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
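[editor's note] The three "Duplicate line" warnings above are harmless: systemd-tmpfiles merges fragments from /usr/lib/tmpfiles.d, /run/tmpfiles.d and /etc/tmpfiles.d, and when two lines claim the same path the first one parsed wins and the rest are ignored. A rough Python scan for paths declared more than once (a simplified reading of the tmpfiles.d format, not systemd's exact parser):

    from collections import defaultdict
    from pathlib import Path

    def duplicate_tmpfiles_paths(dirs=("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d")):
        seen = defaultdict(list)
        for d in map(Path, dirs):
            if not d.is_dir():
                continue
            for conf in sorted(d.glob("*.conf")):
                for line in conf.read_text().splitlines():
                    fields = line.split()
                    # A tmpfiles.d line is: type path [mode user group age argument]
                    if len(fields) >= 2 and not fields[0].startswith("#"):
                        seen[fields[1]].append(conf.name)
        return {path: confs for path, confs in seen.items() if len(confs) > 1}

    print(duplicate_tmpfiles_paths())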
Feb  8 23:15:12.844290 systemd-networkd[1167]: eth0: Gained IPv6LL
Feb  8 23:15:12.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:12.850405 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  8 23:15:12.853757 systemd-fsck[1250]: fsck.fat 4.2 (2021-01-31)
Feb  8 23:15:12.853757 systemd-fsck[1250]: /dev/sda1: 789 files, 115332/258078 clusters
Feb  8 23:15:12.855204 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  8 23:15:12.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:12.859471 systemd[1]: Mounting boot.mount...
Feb  8 23:15:12.867677 systemd[1]: Mounted boot.mount.
Feb  8 23:15:12.882595 systemd[1]: Finished systemd-boot-update.service.
Feb  8 23:15:12.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.706499 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  8 23:15:14.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.710682 systemd[1]: Starting audit-rules.service...
Feb  8 23:15:14.712518 kernel: kauditd_printk_skb: 78 callbacks suppressed
Feb  8 23:15:14.712581 kernel: audit: type=1130 audit(1707434114.708:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.727574 systemd[1]: Starting clean-ca-certificates.service...
Feb  8 23:15:14.731021 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  8 23:15:14.735771 systemd[1]: Starting systemd-resolved.service...
Feb  8 23:15:14.733000 audit: BPF prog-id=27 op=LOAD
Feb  8 23:15:14.742098 systemd[1]: Starting systemd-timesyncd.service...
Feb  8 23:15:14.749205 kernel: audit: type=1334 audit(1707434114.733:163): prog-id=27 op=LOAD
Feb  8 23:15:14.749268 kernel: audit: type=1334 audit(1707434114.738:164): prog-id=28 op=LOAD
Feb  8 23:15:14.738000 audit: BPF prog-id=28 op=LOAD
Feb  8 23:15:14.750790 systemd[1]: Starting systemd-update-utmp.service...
Feb  8 23:15:14.770000 audit[1262]: SYSTEM_BOOT pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.784381 kernel: audit: type=1127 audit(1707434114.770:165): pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.785923 systemd[1]: Finished systemd-update-utmp.service.
Feb  8 23:15:14.800080 kernel: audit: type=1130 audit(1707434114.787:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.821806 systemd[1]: Finished clean-ca-certificates.service.
Feb  8 23:15:14.824769 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  8 23:15:14.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.837256 kernel: audit: type=1130 audit(1707434114.823:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.870740 systemd[1]: Started systemd-timesyncd.service.
Feb  8 23:15:14.873770 systemd[1]: Reached target time-set.target.
Feb  8 23:15:14.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.886069 kernel: audit: type=1130 audit(1707434114.872:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:14.993676 systemd-resolved[1260]: Positive Trust Anchors:
Feb  8 23:15:14.993689 systemd-resolved[1260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  8 23:15:14.993726 systemd-resolved[1260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
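[editor's note] The positive trust anchor logged above is the root zone's DS record for the 2017 KSK (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256), the starting point resolved uses for DNSSEC validation; the negative anchors list private-range and special-use zones it will never try to validate. A small Python sketch splitting such a DS record into named fields (names follow RFC 4034, not resolved's internals):

    def parse_ds(record: str) -> dict:
        """Split an 'owner IN DS key_tag algorithm digest_type digest' record."""
        owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split(maxsplit=6)
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": int(algorithm),      # 8 = RSA/SHA-256
            "digest_type": int(digest_type),  # 2 = SHA-256
            "digest": digest,
        }

    print(parse_ds(". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))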
Feb  8 23:15:15.049534 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  8 23:15:15.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:15.065066 kernel: audit: type=1130 audit(1707434115.051:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:15:15.066000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  8 23:15:15.067945 systemd[1]: Finished audit-rules.service.
Feb  8 23:15:15.068243 augenrules[1277]: No rules
Feb  8 23:15:15.066000 audit[1277]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc57367080 a2=420 a3=0 items=0 ppid=1256 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:15.093102 kernel: audit: type=1305 audit(1707434115.066:170): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  8 23:15:15.093167 kernel: audit: type=1300 audit(1707434115.066:170): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc57367080 a2=420 a3=0 items=0 ppid=1256 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:15:15.066000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  8 23:15:15.146240 systemd-resolved[1260]: Using system hostname 'ci-3510.3.2-a-4203397181'.
Feb  8 23:15:15.147806 systemd[1]: Started systemd-resolved.service.
Feb  8 23:15:15.150746 systemd[1]: Reached target network.target.
Feb  8 23:15:15.153031 systemd[1]: Reached target network-online.target.
Feb  8 23:15:15.155394 systemd[1]: Reached target nss-lookup.target.
Feb  8 23:15:15.319159 systemd-timesyncd[1261]: Contacted time server 77.68.25.145:123 (0.flatcar.pool.ntp.org).
Feb  8 23:15:15.319254 systemd-timesyncd[1261]: Initial clock synchronization to Thu 2024-02-08 23:15:15.320479 UTC.
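[editor's note] The initial synchronization above is a one-shot NTP exchange with the pool server named on the previous line, after which the system clock steps onto the server's time. For illustration only, a bare-bones SNTP client in Python that performs the same style of query (no retries, no delay compensation, nothing like timesyncd's full logic):

    import socket
    import struct

    NTP_EPOCH_OFFSET = 2208988800  # seconds from the NTP epoch (1900) to the Unix epoch (1970)

    def sntp_time(server="0.flatcar.pool.ntp.org", port=123, timeout=5.0) -> float:
        packet = bytearray(48)
        packet[0] = 0x23  # LI=0, version=4, mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])  # Transmit Timestamp field
        return secs - NTP_EPOCH_OFFSET + frac / 2**32   # Unix time as a float

    print(sntp_time())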
Feb  8 23:15:19.712524 ldconfig[1241]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  8 23:15:19.728688 systemd[1]: Finished ldconfig.service.
Feb  8 23:15:19.732596 systemd[1]: Starting systemd-update-done.service...
Feb  8 23:15:19.755338 systemd[1]: Finished systemd-update-done.service.
Feb  8 23:15:19.757775 systemd[1]: Reached target sysinit.target.
Feb  8 23:15:19.759888 systemd[1]: Started motdgen.path.
Feb  8 23:15:19.761636 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  8 23:15:19.764597 systemd[1]: Started logrotate.timer.
Feb  8 23:15:19.766416 systemd[1]: Started mdadm.timer.
Feb  8 23:15:19.768067 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  8 23:15:19.770312 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  8 23:15:19.770351 systemd[1]: Reached target paths.target.
Feb  8 23:15:19.772162 systemd[1]: Reached target timers.target.
Feb  8 23:15:19.774294 systemd[1]: Listening on dbus.socket.
Feb  8 23:15:19.777093 systemd[1]: Starting docker.socket...
Feb  8 23:15:19.781516 systemd[1]: Listening on sshd.socket.
Feb  8 23:15:19.783585 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:15:19.784001 systemd[1]: Listening on docker.socket.
Feb  8 23:15:19.786121 systemd[1]: Reached target sockets.target.
Feb  8 23:15:19.788070 systemd[1]: Reached target basic.target.
Feb  8 23:15:19.789981 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  8 23:15:19.790011 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  8 23:15:19.790991 systemd[1]: Starting containerd.service...
Feb  8 23:15:19.794150 systemd[1]: Starting dbus.service...
Feb  8 23:15:19.797071 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  8 23:15:19.800500 systemd[1]: Starting extend-filesystems.service...
Feb  8 23:15:19.803155 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  8 23:15:19.804407 systemd[1]: Starting motdgen.service...
Feb  8 23:15:19.807764 systemd[1]: Started nvidia.service.
Feb  8 23:15:19.811024 systemd[1]: Starting prepare-cni-plugins.service...
Feb  8 23:15:19.815135 systemd[1]: Starting prepare-critools.service...
Feb  8 23:15:19.818087 systemd[1]: Starting prepare-helm.service...
Feb  8 23:15:19.820955 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  8 23:15:19.824818 systemd[1]: Starting sshd-keygen.service...
Feb  8 23:15:19.830196 systemd[1]: Starting systemd-logind.service...
Feb  8 23:15:19.832504 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:15:19.832585 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  8 23:15:19.833107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb  8 23:15:19.833883 systemd[1]: Starting update-engine.service...
Feb  8 23:15:19.838045 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  8 23:15:19.849421 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  8 23:15:19.849627 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  8 23:15:19.871298 jq[1287]: false
Feb  8 23:15:19.871580 jq[1304]: true
Feb  8 23:15:19.873394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  8 23:15:19.873605 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  8 23:15:19.884924 systemd[1]: motdgen.service: Deactivated successfully.
Feb  8 23:15:19.885155 systemd[1]: Finished motdgen.service.
Feb  8 23:15:19.890911 jq[1311]: true
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda1
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda2
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda3
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found usr
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda4
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda6
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda7
Feb  8 23:15:19.902436 extend-filesystems[1288]: Found sda9
Feb  8 23:15:19.902436 extend-filesystems[1288]: Checking size of /dev/sda9
Feb  8 23:15:19.945086 env[1313]: time="2024-02-08T23:15:19.943826624Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  8 23:15:19.949284 systemd-logind[1301]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb  8 23:15:19.957344 systemd-logind[1301]: New seat seat0.
Feb  8 23:15:19.972120 tar[1307]: ./
Feb  8 23:15:19.972120 tar[1307]: ./loopback
Feb  8 23:15:19.976224 tar[1308]: crictl
Feb  8 23:15:19.977363 tar[1309]: linux-amd64/helm
Feb  8 23:15:20.026131 env[1313]: time="2024-02-08T23:15:20.026078726Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  8 23:15:20.028133 env[1313]: time="2024-02-08T23:15:20.026249439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.030008 extend-filesystems[1288]: Old size kept for /dev/sda9
Feb  8 23:15:20.032504 extend-filesystems[1288]: Found sr0
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037333638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037366940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037622758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037643760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037661461Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037675962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037756868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.037982484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.038169698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:15:20.043219 env[1313]: time="2024-02-08T23:15:20.038191999Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  8 23:15:20.033890 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  8 23:15:20.043544 env[1313]: time="2024-02-08T23:15:20.038252604Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  8 23:15:20.043544 env[1313]: time="2024-02-08T23:15:20.038267305Z" level=info msg="metadata content store policy set" policy=shared
Feb  8 23:15:20.034067 systemd[1]: Finished extend-filesystems.service.
Feb  8 23:15:20.064457 env[1313]: time="2024-02-08T23:15:20.064417289Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  8 23:15:20.064554 env[1313]: time="2024-02-08T23:15:20.064469093Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  8 23:15:20.064554 env[1313]: time="2024-02-08T23:15:20.064485694Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  8 23:15:20.064554 env[1313]: time="2024-02-08T23:15:20.064523397Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064554 env[1313]: time="2024-02-08T23:15:20.064542998Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064562100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064581201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064600302Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064618804Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064636905Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064654806Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.064709 env[1313]: time="2024-02-08T23:15:20.064673708Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  8 23:15:20.064947 env[1313]: time="2024-02-08T23:15:20.064801017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  8 23:15:20.064947 env[1313]: time="2024-02-08T23:15:20.064898524Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  8 23:15:20.065304 env[1313]: time="2024-02-08T23:15:20.065281652Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  8 23:15:20.065370 env[1313]: time="2024-02-08T23:15:20.065324255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.065370 env[1313]: time="2024-02-08T23:15:20.065345856Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  8 23:15:20.065448 env[1313]: time="2024-02-08T23:15:20.065404860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.065448 env[1313]: time="2024-02-08T23:15:20.065425162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.065448 env[1313]: time="2024-02-08T23:15:20.065443663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065460164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065477966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065494267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065510368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065526269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065550371Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065683881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065701482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065717583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065733784Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065753486Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065770887Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065795589Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  8 23:15:20.066680 env[1313]: time="2024-02-08T23:15:20.065859493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.066128113Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.066196218Z" level=info msg="Connect containerd service"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.066240021Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.066867566Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067080181Z" level=info msg="Start subscribing containerd event"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067124384Z" level=info msg="Start recovering state"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067189189Z" level=info msg="Start event monitor"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067204690Z" level=info msg="Start snapshots syncer"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067217391Z" level=info msg="Start cni network conf syncer for default"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067227592Z" level=info msg="Start streaming server"
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067540414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  8 23:15:20.067583 env[1313]: time="2024-02-08T23:15:20.067587918Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  8 23:15:20.109464 env[1313]: time="2024-02-08T23:15:20.067650222Z" level=info msg="containerd successfully booted in 0.126676s"
Feb  8 23:15:20.109512 tar[1307]: ./bandwidth
Feb  8 23:15:20.067729 systemd[1]: Started containerd.service.
Feb  8 23:15:20.117258 dbus-daemon[1286]: [system] SELinux support is enabled
Feb  8 23:15:20.117429 systemd[1]: Started dbus.service.
Feb  8 23:15:20.121958 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  8 23:15:20.121994 systemd[1]: Reached target system-config.target.
Feb  8 23:15:20.124538 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  8 23:15:20.124564 systemd[1]: Reached target user-config.target.
Feb  8 23:15:20.126475 dbus-daemon[1286]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb  8 23:15:20.126891 systemd[1]: Started systemd-logind.service.
Feb  8 23:15:20.190874 bash[1332]: Updated "/home/core/.ssh/authorized_keys"
Feb  8 23:15:20.191235 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  8 23:15:20.226442 systemd[1]: nvidia.service: Deactivated successfully.
Feb  8 23:15:20.238671 tar[1307]: ./ptp
Feb  8 23:15:20.392307 tar[1307]: ./vlan
Feb  8 23:15:20.540371 tar[1307]: ./host-device
Feb  8 23:15:20.682765 tar[1307]: ./tuning
Feb  8 23:15:20.762743 tar[1307]: ./vrf
Feb  8 23:15:20.807726 tar[1309]: linux-amd64/LICENSE
Feb  8 23:15:20.807962 tar[1309]: linux-amd64/README.md
Feb  8 23:15:20.817487 systemd[1]: Finished prepare-helm.service.
Feb  8 23:15:20.840801 tar[1307]: ./sbr
Feb  8 23:15:20.857356 update_engine[1302]: I0208 23:15:20.856576  1302 main.cc:92] Flatcar Update Engine starting
Feb  8 23:15:20.915849 tar[1307]: ./tap
Feb  8 23:15:20.919109 systemd[1]: Started update-engine.service.
Feb  8 23:15:20.924325 systemd[1]: Started locksmithd.service.
Feb  8 23:15:20.932535 update_engine[1302]: I0208 23:15:20.932411  1302 update_check_scheduler.cc:74] Next update check in 2m27s
Feb  8 23:15:20.970434 tar[1307]: ./dhcp
Feb  8 23:15:21.092247 tar[1307]: ./static
Feb  8 23:15:21.146939 tar[1307]: ./firewall
Feb  8 23:15:21.226531 tar[1307]: ./macvlan
Feb  8 23:15:21.267588 systemd[1]: Finished prepare-critools.service.
Feb  8 23:15:21.282193 tar[1307]: ./dummy
Feb  8 23:15:21.329267 tar[1307]: ./bridge
Feb  8 23:15:21.377108 tar[1307]: ./ipvlan
Feb  8 23:15:21.421919 tar[1307]: ./portmap
Feb  8 23:15:21.464230 tar[1307]: ./host-local
Feb  8 23:15:21.555556 systemd[1]: Finished prepare-cni-plugins.service.
Feb  8 23:15:21.695374 sshd_keygen[1310]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  8 23:15:21.714838 systemd[1]: Finished sshd-keygen.service.
Feb  8 23:15:21.718870 systemd[1]: Starting issuegen.service...
Feb  8 23:15:21.722392 systemd[1]: Started waagent.service.
Feb  8 23:15:21.729140 systemd[1]: issuegen.service: Deactivated successfully.
Feb  8 23:15:21.729308 systemd[1]: Finished issuegen.service.
Feb  8 23:15:21.732856 systemd[1]: Starting systemd-user-sessions.service...
Feb  8 23:15:21.739850 systemd[1]: Finished systemd-user-sessions.service.
Feb  8 23:15:21.743515 systemd[1]: Started getty@tty1.service.
Feb  8 23:15:21.746922 systemd[1]: Started serial-getty@ttyS0.service.
Feb  8 23:15:21.749343 systemd[1]: Reached target getty.target.
Feb  8 23:15:21.751292 systemd[1]: Reached target multi-user.target.
Feb  8 23:15:21.754590 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  8 23:15:21.764401 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  8 23:15:21.764560 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  8 23:15:21.767451 systemd[1]: Startup finished in 1.149s (firmware) + 30.461s (loader) + 905ms (kernel) + 1min 59.238s (initrd) + 25.009s (userspace) = 2min 56.764s.
Feb  8 23:15:22.244681 login[1407]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Feb  8 23:15:22.245377 login[1406]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb  8 23:15:22.340424 systemd[1]: Created slice user-500.slice.
Feb  8 23:15:22.341990 systemd[1]: Starting user-runtime-dir@500.service...
Feb  8 23:15:22.346974 systemd-logind[1301]: New session 2 of user core.
Feb  8 23:15:22.352746 systemd[1]: Finished user-runtime-dir@500.service.
Feb  8 23:15:22.354411 systemd[1]: Starting user@500.service...
Feb  8 23:15:22.357592 (systemd)[1413]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:15:22.603074 systemd[1413]: Queued start job for default target default.target.
Feb  8 23:15:22.603772 systemd[1413]: Reached target paths.target.
Feb  8 23:15:22.603800 systemd[1413]: Reached target sockets.target.
Feb  8 23:15:22.603827 systemd[1413]: Reached target timers.target.
Feb  8 23:15:22.603841 systemd[1413]: Reached target basic.target.
Feb  8 23:15:22.603955 systemd[1]: Started user@500.service.
Feb  8 23:15:22.605173 systemd[1]: Started session-2.scope.
Feb  8 23:15:22.605711 systemd[1413]: Reached target default.target.
Feb  8 23:15:22.605894 systemd[1413]: Startup finished in 242ms.
Feb  8 23:15:22.976207 locksmithd[1389]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  8 23:15:23.246952 login[1407]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb  8 23:15:23.252887 systemd[1]: Started session-1.scope.
Feb  8 23:15:23.253676 systemd-logind[1301]: New session 1 of user core.
Feb  8 23:15:28.905250 waagent[1401]: 2024-02-08T23:15:28.905139Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb  8 23:15:28.919270 waagent[1401]: 2024-02-08T23:15:28.906456Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb  8 23:15:28.919270 waagent[1401]: 2024-02-08T23:15:28.907118Z INFO Daemon Daemon Python: 3.9.16
Feb  8 23:15:28.919270 waagent[1401]: 2024-02-08T23:15:28.908446Z INFO Daemon Daemon Run daemon
Feb  8 23:15:28.919270 waagent[1401]: 2024-02-08T23:15:28.909483Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb  8 23:15:28.923802 waagent[1401]: 2024-02-08T23:15:28.923685Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb  8 23:15:28.931704 waagent[1401]: 2024-02-08T23:15:28.931605Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb  8 23:15:28.936745 waagent[1401]: 2024-02-08T23:15:28.936685Z INFO Daemon Daemon cloud-init is enabled: False
Feb  8 23:15:28.939524 waagent[1401]: 2024-02-08T23:15:28.939465Z INFO Daemon Daemon Using waagent for provisioning
Feb  8 23:15:28.942725 waagent[1401]: 2024-02-08T23:15:28.942663Z INFO Daemon Daemon Activate resource disk
Feb  8 23:15:28.945295 waagent[1401]: 2024-02-08T23:15:28.945237Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb  8 23:15:28.955540 waagent[1401]: 2024-02-08T23:15:28.955481Z INFO Daemon Daemon Found device: None
Feb  8 23:15:28.958126 waagent[1401]: 2024-02-08T23:15:28.958068Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb  8 23:15:28.962275 waagent[1401]: 2024-02-08T23:15:28.962215Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb  8 23:15:28.968368 waagent[1401]: 2024-02-08T23:15:28.968308Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb  8 23:15:28.971611 waagent[1401]: 2024-02-08T23:15:28.971551Z INFO Daemon Daemon Running default provisioning handler
Feb  8 23:15:28.981420 waagent[1401]: 2024-02-08T23:15:28.981304Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb  8 23:15:28.988354 waagent[1401]: 2024-02-08T23:15:28.988255Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb  8 23:15:28.996615 waagent[1401]: 2024-02-08T23:15:28.988617Z INFO Daemon Daemon cloud-init is enabled: False
Feb  8 23:15:28.996615 waagent[1401]: 2024-02-08T23:15:28.989460Z INFO Daemon Daemon Copying ovf-env.xml
Feb  8 23:15:29.075363 waagent[1401]: 2024-02-08T23:15:29.075194Z INFO Daemon Daemon Successfully mounted dvd
Feb  8 23:15:29.229377 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb  8 23:15:29.234025 waagent[1401]: 2024-02-08T23:15:29.233904Z INFO Daemon Daemon Detect protocol endpoint
Feb  8 23:15:29.249675 waagent[1401]: 2024-02-08T23:15:29.234404Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb  8 23:15:29.249675 waagent[1401]: 2024-02-08T23:15:29.235645Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb  8 23:15:29.249675 waagent[1401]: 2024-02-08T23:15:29.236647Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb  8 23:15:29.249675 waagent[1401]: 2024-02-08T23:15:29.237856Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb  8 23:15:29.249675 waagent[1401]: 2024-02-08T23:15:29.238692Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb  8 23:15:29.383878 waagent[1401]: 2024-02-08T23:15:29.383806Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb  8 23:15:29.392961 waagent[1401]: 2024-02-08T23:15:29.384707Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb  8 23:15:29.392961 waagent[1401]: 2024-02-08T23:15:29.385892Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb  8 23:15:29.893753 waagent[1401]: 2024-02-08T23:15:29.893612Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb  8 23:15:29.906442 waagent[1401]: 2024-02-08T23:15:29.906368Z INFO Daemon Daemon Forcing an update of the goal state..
Feb  8 23:15:29.909734 waagent[1401]: 2024-02-08T23:15:29.909662Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb  8 23:15:29.995794 waagent[1401]: 2024-02-08T23:15:29.995678Z INFO Daemon Daemon Found private key matching thumbprint 0253D575FAF0B6170285ED3B700DFEE372FBE2CF
Feb  8 23:15:30.007326 waagent[1401]: 2024-02-08T23:15:29.996173Z INFO Daemon Daemon Certificate with thumbprint 66DB4E9D95F178AB4A4D75286F3DB593794DDC69 has no matching private key.
Feb  8 23:15:30.007326 waagent[1401]: 2024-02-08T23:15:29.997373Z INFO Daemon Daemon Fetch goal state completed
Feb  8 23:15:30.043201 waagent[1401]: 2024-02-08T23:15:30.043112Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 54c92a13-83ba-4d7d-8c6b-851bf223f1b3 New eTag: 7918511854374688389]
Feb  8 23:15:30.051184 waagent[1401]: 2024-02-08T23:15:30.044182Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb  8 23:15:30.057806 waagent[1401]: 2024-02-08T23:15:30.057745Z INFO Daemon Daemon Starting provisioning
Feb  8 23:15:30.066623 waagent[1401]: 2024-02-08T23:15:30.058070Z INFO Daemon Daemon Handle ovf-env.xml.
Feb  8 23:15:30.066623 waagent[1401]: 2024-02-08T23:15:30.058620Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-4203397181]
Feb  8 23:15:30.081387 waagent[1401]: 2024-02-08T23:15:30.081253Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-4203397181]
Feb  8 23:15:30.089225 waagent[1401]: 2024-02-08T23:15:30.082029Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb  8 23:15:30.089225 waagent[1401]: 2024-02-08T23:15:30.083065Z INFO Daemon Daemon Primary interface is [eth0]
Feb  8 23:15:30.097532 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb  8 23:15:30.097780 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb  8 23:15:30.097853 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb  8 23:15:30.098231 systemd[1]: Stopping systemd-networkd.service...
Feb  8 23:15:30.102103 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Feb  8 23:15:30.103639 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  8 23:15:30.103794 systemd[1]: Stopped systemd-networkd.service.
Feb  8 23:15:30.106234 systemd[1]: Starting systemd-networkd.service...
Feb  8 23:15:30.136744 systemd-networkd[1456]: enP6952s1: Link UP
Feb  8 23:15:30.136752 systemd-networkd[1456]: enP6952s1: Gained carrier
Feb  8 23:15:30.138197 systemd-networkd[1456]: eth0: Link UP
Feb  8 23:15:30.138206 systemd-networkd[1456]: eth0: Gained carrier
Feb  8 23:15:30.138633 systemd-networkd[1456]: lo: Link UP
Feb  8 23:15:30.138641 systemd-networkd[1456]: lo: Gained carrier
Feb  8 23:15:30.138944 systemd-networkd[1456]: eth0: Gained IPv6LL
Feb  8 23:15:30.139233 systemd-networkd[1456]: Enumeration completed
Feb  8 23:15:30.139328 systemd[1]: Started systemd-networkd.service.
Feb  8 23:15:30.141248 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  8 23:15:30.143722 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  8 23:15:30.146828 waagent[1401]: 2024-02-08T23:15:30.146176Z INFO Daemon Daemon Create user account if not exists
Feb  8 23:15:30.147661 waagent[1401]: 2024-02-08T23:15:30.147570Z INFO Daemon Daemon User core already exists, skip useradd
Feb  8 23:15:30.149011 waagent[1401]: 2024-02-08T23:15:30.148946Z INFO Daemon Daemon Configure sudoer
Feb  8 23:15:30.206254 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb  8 23:15:30.209360 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  8 23:15:30.281649 waagent[1401]: 2024-02-08T23:15:30.281500Z INFO Daemon Daemon Configure sshd
Feb  8 23:15:30.286179 waagent[1401]: 2024-02-08T23:15:30.282019Z INFO Daemon Daemon Deploy ssh public key.
Feb  8 23:15:31.717039 waagent[1401]: 2024-02-08T23:15:31.716937Z INFO Daemon Daemon Provisioning complete
Feb  8 23:15:31.736994 waagent[1401]: 2024-02-08T23:15:31.736914Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb  8 23:15:31.740434 waagent[1401]: 2024-02-08T23:15:31.740367Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb  8 23:15:31.746004 waagent[1401]: 2024-02-08T23:15:31.745937Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb  8 23:15:32.010024 waagent[1465]: 2024-02-08T23:15:32.009858Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb  8 23:15:32.010739 waagent[1465]: 2024-02-08T23:15:32.010670Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:32.010885 waagent[1465]: 2024-02-08T23:15:32.010830Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:32.021385 waagent[1465]: 2024-02-08T23:15:32.021310Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb  8 23:15:32.021543 waagent[1465]: 2024-02-08T23:15:32.021490Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb  8 23:15:32.082573 waagent[1465]: 2024-02-08T23:15:32.082442Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0253D575FAF0B6170285ED3B700DFEE372FBE2CF
Feb  8 23:15:32.082805 waagent[1465]: 2024-02-08T23:15:32.082743Z INFO ExtHandler ExtHandler Certificate with thumbprint 66DB4E9D95F178AB4A4D75286F3DB593794DDC69 has no matching private key.
Feb  8 23:15:32.083043 waagent[1465]: 2024-02-08T23:15:32.082992Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb  8 23:15:32.096660 waagent[1465]: 2024-02-08T23:15:32.096597Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 93fbf46c-e58c-46b0-9e62-79432e08b1a7 New eTag: 7918511854374688389]
Feb  8 23:15:32.097214 waagent[1465]: 2024-02-08T23:15:32.097158Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb  8 23:15:32.177780 waagent[1465]: 2024-02-08T23:15:32.177632Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb  8 23:15:32.218131 waagent[1465]: 2024-02-08T23:15:32.218028Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1465
Feb  8 23:15:32.221565 waagent[1465]: 2024-02-08T23:15:32.221499Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb  8 23:15:32.222704 waagent[1465]: 2024-02-08T23:15:32.222648Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb  8 23:15:32.303577 waagent[1465]: 2024-02-08T23:15:32.303507Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb  8 23:15:32.304240 waagent[1465]: 2024-02-08T23:15:32.304168Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb  8 23:15:32.312068 waagent[1465]: 2024-02-08T23:15:32.311996Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb  8 23:15:32.312506 waagent[1465]: 2024-02-08T23:15:32.312449Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb  8 23:15:32.313560 waagent[1465]: 2024-02-08T23:15:32.313495Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb  8 23:15:32.314804 waagent[1465]: 2024-02-08T23:15:32.314746Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb  8 23:15:32.315364 waagent[1465]: 2024-02-08T23:15:32.315310Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:32.315519 waagent[1465]: 2024-02-08T23:15:32.315472Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:32.316032 waagent[1465]: 2024-02-08T23:15:32.315975Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb  8 23:15:32.316349 waagent[1465]: 2024-02-08T23:15:32.316291Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb  8 23:15:32.316349 waagent[1465]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Feb  8 23:15:32.316349 waagent[1465]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Feb  8 23:15:32.316349 waagent[1465]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Feb  8 23:15:32.316349 waagent[1465]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:32.316349 waagent[1465]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:32.316349 waagent[1465]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:32.319394 waagent[1465]: 2024-02-08T23:15:32.319194Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb  8 23:15:32.320211 waagent[1465]: 2024-02-08T23:15:32.320149Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb  8 23:15:32.320365 waagent[1465]: 2024-02-08T23:15:32.320312Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:32.320466 waagent[1465]: 2024-02-08T23:15:32.320416Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb  8 23:15:32.321128 waagent[1465]: 2024-02-08T23:15:32.321073Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:32.321523 waagent[1465]: 2024-02-08T23:15:32.321463Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb  8 23:15:32.322345 waagent[1465]: 2024-02-08T23:15:32.322270Z INFO EnvHandler ExtHandler Configure routes
Feb  8 23:15:32.322526 waagent[1465]: 2024-02-08T23:15:32.322476Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb  8 23:15:32.322785 waagent[1465]: 2024-02-08T23:15:32.322723Z INFO EnvHandler ExtHandler Gateway:None
Feb  8 23:15:32.323377 waagent[1465]: 2024-02-08T23:15:32.323324Z INFO EnvHandler ExtHandler Routes:None
Feb  8 23:15:32.323846 waagent[1465]: 2024-02-08T23:15:32.323799Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb  8 23:15:32.336423 waagent[1465]: 2024-02-08T23:15:32.336381Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb  8 23:15:32.337072 waagent[1465]: 2024-02-08T23:15:32.337016Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb  8 23:15:32.337865 waagent[1465]: 2024-02-08T23:15:32.337821Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb  8 23:15:32.364538 waagent[1465]: 2024-02-08T23:15:32.364443Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1456'
Feb  8 23:15:32.374410 waagent[1465]: 2024-02-08T23:15:32.374361Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb  8 23:15:32.477333 waagent[1465]: 2024-02-08T23:15:32.477217Z INFO MonitorHandler ExtHandler Network interfaces:
Feb  8 23:15:32.477333 waagent[1465]: Executing ['ip', '-a', '-o', 'link']:
Feb  8 23:15:32.477333 waagent[1465]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb  8 23:15:32.477333 waagent[1465]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:67:5e:f5 brd ff:ff:ff:ff:ff:ff
Feb  8 23:15:32.477333 waagent[1465]: 3: enP6952s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:67:5e:f5 brd ff:ff:ff:ff:ff:ff\    altname enP6952p0s2
Feb  8 23:15:32.477333 waagent[1465]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb  8 23:15:32.477333 waagent[1465]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Feb  8 23:15:32.477333 waagent[1465]: 2: eth0    inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Feb  8 23:15:32.477333 waagent[1465]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb  8 23:15:32.477333 waagent[1465]: 1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
Feb  8 23:15:32.477333 waagent[1465]: 2: eth0    inet6 fe80::20d:3aff:fe67:5ef5/64 scope link \       valid_lft forever preferred_lft forever
Feb  8 23:15:32.681237 waagent[1465]: 2024-02-08T23:15:32.681126Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb  8 23:15:32.749406 waagent[1401]: 2024-02-08T23:15:32.749267Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb  8 23:15:32.754871 waagent[1401]: 2024-02-08T23:15:32.754811Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb  8 23:15:33.759708 waagent[1502]: 2024-02-08T23:15:33.759603Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb  8 23:15:33.760393 waagent[1502]: 2024-02-08T23:15:33.760329Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb  8 23:15:33.760536 waagent[1502]: 2024-02-08T23:15:33.760482Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb  8 23:15:33.769962 waagent[1502]: 2024-02-08T23:15:33.769866Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb  8 23:15:33.770349 waagent[1502]: 2024-02-08T23:15:33.770292Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:33.770522 waagent[1502]: 2024-02-08T23:15:33.770472Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:33.781926 waagent[1502]: 2024-02-08T23:15:33.781856Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb  8 23:15:33.791043 waagent[1502]: 2024-02-08T23:15:33.790985Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb  8 23:15:33.791924 waagent[1502]: 2024-02-08T23:15:33.791866Z INFO ExtHandler
Feb  8 23:15:33.792086 waagent[1502]: 2024-02-08T23:15:33.792020Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8b90b828-d7ca-47fc-a12a-2361892f0927 eTag: 7918511854374688389 source: Fabric]
Feb  8 23:15:33.792777 waagent[1502]: 2024-02-08T23:15:33.792722Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb  8 23:15:33.793802 waagent[1502]: 2024-02-08T23:15:33.793743Z INFO ExtHandler
Feb  8 23:15:33.793931 waagent[1502]: 2024-02-08T23:15:33.793882Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb  8 23:15:33.800380 waagent[1502]: 2024-02-08T23:15:33.800330Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb  8 23:15:33.800786 waagent[1502]: 2024-02-08T23:15:33.800739Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb  8 23:15:33.820705 waagent[1502]: 2024-02-08T23:15:33.820644Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb  8 23:15:33.885141 waagent[1502]: 2024-02-08T23:15:33.884997Z INFO ExtHandler Downloaded certificate {'thumbprint': '66DB4E9D95F178AB4A4D75286F3DB593794DDC69', 'hasPrivateKey': False}
Feb  8 23:15:33.886110 waagent[1502]: 2024-02-08T23:15:33.886012Z INFO ExtHandler Downloaded certificate {'thumbprint': '0253D575FAF0B6170285ED3B700DFEE372FBE2CF', 'hasPrivateKey': True}
Feb  8 23:15:33.887100 waagent[1502]: 2024-02-08T23:15:33.887023Z INFO ExtHandler Fetch goal state completed
Feb  8 23:15:33.910330 waagent[1502]: 2024-02-08T23:15:33.910259Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1502
Feb  8 23:15:33.913528 waagent[1502]: 2024-02-08T23:15:33.913469Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb  8 23:15:33.914916 waagent[1502]: 2024-02-08T23:15:33.914860Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb  8 23:15:33.919790 waagent[1502]: 2024-02-08T23:15:33.919736Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb  8 23:15:33.920151 waagent[1502]: 2024-02-08T23:15:33.920097Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb  8 23:15:33.927914 waagent[1502]: 2024-02-08T23:15:33.927861Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb  8 23:15:33.928369 waagent[1502]: 2024-02-08T23:15:33.928315Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb  8 23:15:33.934207 waagent[1502]: 2024-02-08T23:15:33.934117Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb  8 23:15:33.938690 waagent[1502]: 2024-02-08T23:15:33.938631Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb  8 23:15:33.940083 waagent[1502]: 2024-02-08T23:15:33.939997Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb  8 23:15:33.940485 waagent[1502]: 2024-02-08T23:15:33.940431Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:33.940632 waagent[1502]: 2024-02-08T23:15:33.940585Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:33.941192 waagent[1502]: 2024-02-08T23:15:33.941133Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb  8 23:15:33.941601 waagent[1502]: 2024-02-08T23:15:33.941547Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb  8 23:15:33.942257 waagent[1502]: 2024-02-08T23:15:33.942189Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb  8 23:15:33.942664 waagent[1502]: 2024-02-08T23:15:33.942611Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb  8 23:15:33.942740 waagent[1502]: 2024-02-08T23:15:33.942678Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb  8 23:15:33.942740 waagent[1502]: Iface        Destination        Gateway         Flags        RefCnt        Use        Metric        Mask                MTU        Window        IRTT
Feb  8 23:15:33.942740 waagent[1502]: eth0        00000000        0108C80A        0003        0        0        1024        00000000        0        0        0
Feb  8 23:15:33.942740 waagent[1502]: eth0        0008C80A        00000000        0001        0        0        1024        00FFFFFF        0        0        0
Feb  8 23:15:33.942740 waagent[1502]: eth0        0108C80A        00000000        0005        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:33.942740 waagent[1502]: eth0        10813FA8        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:33.942740 waagent[1502]: eth0        FEA9FEA9        0108C80A        0007        0        0        1024        FFFFFFFF        0        0        0
Feb  8 23:15:33.945299 waagent[1502]: 2024-02-08T23:15:33.945209Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb  8 23:15:33.945856 waagent[1502]: 2024-02-08T23:15:33.945801Z INFO EnvHandler ExtHandler Configure routes
Feb  8 23:15:33.945972 waagent[1502]: 2024-02-08T23:15:33.945909Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb  8 23:15:33.946328 waagent[1502]: 2024-02-08T23:15:33.946266Z INFO EnvHandler ExtHandler Gateway:None
Feb  8 23:15:33.946493 waagent[1502]: 2024-02-08T23:15:33.946434Z INFO EnvHandler ExtHandler Routes:None
Feb  8 23:15:33.951027 waagent[1502]: 2024-02-08T23:15:33.950966Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb  8 23:15:33.951239 waagent[1502]: 2024-02-08T23:15:33.951167Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb  8 23:15:33.958239 waagent[1502]: 2024-02-08T23:15:33.957996Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb  8 23:15:33.971248 waagent[1502]: 2024-02-08T23:15:33.971179Z INFO MonitorHandler ExtHandler Network interfaces:
Feb  8 23:15:33.971248 waagent[1502]: Executing ['ip', '-a', '-o', 'link']:
Feb  8 23:15:33.971248 waagent[1502]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb  8 23:15:33.971248 waagent[1502]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:67:5e:f5 brd ff:ff:ff:ff:ff:ff
Feb  8 23:15:33.971248 waagent[1502]: 3: enP6952s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\    link/ether 00:0d:3a:67:5e:f5 brd ff:ff:ff:ff:ff:ff\    altname enP6952p0s2
Feb  8 23:15:33.971248 waagent[1502]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb  8 23:15:33.971248 waagent[1502]: 1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
Feb  8 23:15:33.971248 waagent[1502]: 2: eth0    inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\       valid_lft forever preferred_lft forever
Feb  8 23:15:33.971248 waagent[1502]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb  8 23:15:33.971248 waagent[1502]: 1: lo    inet6 ::1/128 scope host \       valid_lft forever preferred_lft forever
Feb  8 23:15:33.971248 waagent[1502]: 2: eth0    inet6 fe80::20d:3aff:fe67:5ef5/64 scope link \       valid_lft forever preferred_lft forever
Feb  8 23:15:33.973498 waagent[1502]: 2024-02-08T23:15:33.973438Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb  8 23:15:33.977032 waagent[1502]: 2024-02-08T23:15:33.976964Z INFO ExtHandler ExtHandler Downloading manifest
Feb  8 23:15:34.045806 waagent[1502]: 2024-02-08T23:15:34.045724Z INFO ExtHandler ExtHandler
Feb  8 23:15:34.046393 waagent[1502]: 2024-02-08T23:15:34.046315Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2269e415-86c3-42a1-b3ee-8f3e4ca78ff0 correlation 57b003ae-51d4-41f8-9592-d248d600a6cc created: 2024-02-08T23:12:15.538353Z]
Feb  8 23:15:34.047941 waagent[1502]: 2024-02-08T23:15:34.047883Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb  8 23:15:34.052649 waagent[1502]: 2024-02-08T23:15:34.052563Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms]
Feb  8 23:15:34.075265 waagent[1502]: 2024-02-08T23:15:34.075193Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb  8 23:15:34.075265 waagent[1502]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.075265 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.075265 waagent[1502]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.075265 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.075265 waagent[1502]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.075265 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.075265 waagent[1502]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb  8 23:15:34.075265 waagent[1502]:        9     3239 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb  8 23:15:34.075265 waagent[1502]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb  8 23:15:34.083089 waagent[1502]: 2024-02-08T23:15:34.083003Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb  8 23:15:34.083089 waagent[1502]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.083089 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.083089 waagent[1502]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.083089 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.083089 waagent[1502]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb  8 23:15:34.083089 waagent[1502]:     pkts      bytes target     prot opt in     out     source               destination
Feb  8 23:15:34.083089 waagent[1502]:        0        0 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        tcp dpt:53
Feb  8 23:15:34.083089 waagent[1502]:       12     3395 ACCEPT     tcp  --  *      *       0.0.0.0/0            168.63.129.16        owner UID match 0
Feb  8 23:15:34.083089 waagent[1502]:        0        0 DROP       tcp  --  *      *       0.0.0.0/0            168.63.129.16        ctstate INVALID,NEW
Feb  8 23:15:34.083552 waagent[1502]: 2024-02-08T23:15:34.083273Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb  8 23:15:34.087812 waagent[1502]: 2024-02-08T23:15:34.087525Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb  8 23:15:34.094604 waagent[1502]: 2024-02-08T23:15:34.094529Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 30326CE3-530A-4830-8EF2-BE77A1485A2D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb  8 23:15:59.105548 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb  8 23:16:06.615274 update_engine[1302]: I0208 23:16:06.615196  1302 update_attempter.cc:509] Updating boot flags...
Feb  8 23:16:23.100795 systemd[1]: Created slice system-sshd.slice.
Feb  8 23:16:23.102378 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.12.6:55118.service.
Feb  8 23:16:23.969201 sshd[1616]: Accepted publickey for core from 10.200.12.6 port 55118 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:16:23.970856 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:16:23.976546 systemd[1]: Started session-3.scope.
Feb  8 23:16:23.976997 systemd-logind[1301]: New session 3 of user core.
Feb  8 23:16:24.510075 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.12.6:55134.service.
Feb  8 23:16:25.134938 sshd[1621]: Accepted publickey for core from 10.200.12.6 port 55134 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:16:25.136559 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:16:25.141465 systemd[1]: Started session-4.scope.
Feb  8 23:16:25.141904 systemd-logind[1301]: New session 4 of user core.
Feb  8 23:16:25.574988 sshd[1621]: pam_unix(sshd:session): session closed for user core
Feb  8 23:16:25.577972 systemd[1]: sshd@1-10.200.8.39:22-10.200.12.6:55134.service: Deactivated successfully.
Feb  8 23:16:25.578810 systemd[1]: session-4.scope: Deactivated successfully.
Feb  8 23:16:25.579442 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit.
Feb  8 23:16:25.580205 systemd-logind[1301]: Removed session 4.
Feb  8 23:16:25.678023 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.12.6:55144.service.
Feb  8 23:16:26.292476 sshd[1627]: Accepted publickey for core from 10.200.12.6 port 55144 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:16:26.294131 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:16:26.299758 systemd[1]: Started session-5.scope.
Feb  8 23:16:26.300299 systemd-logind[1301]: New session 5 of user core.
Feb  8 23:16:26.724780 sshd[1627]: pam_unix(sshd:session): session closed for user core
Feb  8 23:16:26.728122 systemd[1]: sshd@2-10.200.8.39:22-10.200.12.6:55144.service: Deactivated successfully.
Feb  8 23:16:26.729281 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit.
Feb  8 23:16:26.729362 systemd[1]: session-5.scope: Deactivated successfully.
Feb  8 23:16:26.730407 systemd-logind[1301]: Removed session 5.
Feb  8 23:16:26.831419 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.12.6:55148.service.
Feb  8 23:16:27.451631 sshd[1633]: Accepted publickey for core from 10.200.12.6 port 55148 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:16:27.454427 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:16:27.459855 systemd[1]: Started session-6.scope.
Feb  8 23:16:27.460454 systemd-logind[1301]: New session 6 of user core.
Feb  8 23:16:27.889943 sshd[1633]: pam_unix(sshd:session): session closed for user core
Feb  8 23:16:27.893071 systemd[1]: sshd@3-10.200.8.39:22-10.200.12.6:55148.service: Deactivated successfully.
Feb  8 23:16:27.894076 systemd[1]: session-6.scope: Deactivated successfully.
Feb  8 23:16:27.894832 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit.
Feb  8 23:16:27.895740 systemd-logind[1301]: Removed session 6.
Feb  8 23:16:27.993166 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.12.6:54874.service.
Feb  8 23:16:28.609964 sshd[1639]: Accepted publickey for core from 10.200.12.6 port 54874 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:16:28.611602 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:16:28.616186 systemd[1]: Started session-7.scope.
Feb  8 23:16:28.616790 systemd-logind[1301]: New session 7 of user core.
Feb  8 23:16:29.227140 sudo[1642]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  8 23:16:29.227479 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  8 23:16:30.005092 systemd[1]: Starting docker.service...
Feb  8 23:16:30.058371 env[1657]: time="2024-02-08T23:16:30.058313285Z" level=info msg="Starting up"
Feb  8 23:16:30.059554 env[1657]: time="2024-02-08T23:16:30.059524227Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  8 23:16:30.059554 env[1657]: time="2024-02-08T23:16:30.059543628Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  8 23:16:30.059720 env[1657]: time="2024-02-08T23:16:30.059567628Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  8 23:16:30.059720 env[1657]: time="2024-02-08T23:16:30.059580029Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  8 23:16:30.061615 env[1657]: time="2024-02-08T23:16:30.061584499Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  8 23:16:30.061615 env[1657]: time="2024-02-08T23:16:30.061602699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  8 23:16:30.061755 env[1657]: time="2024-02-08T23:16:30.061619400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  8 23:16:30.061755 env[1657]: time="2024-02-08T23:16:30.061630700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  8 23:16:30.070344 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3136938847-merged.mount: Deactivated successfully.
Feb  8 23:16:30.224029 env[1657]: time="2024-02-08T23:16:30.223982366Z" level=info msg="Loading containers: start."
Feb  8 23:16:30.387084 kernel: Initializing XFRM netlink socket
Feb  8 23:16:30.413428 env[1657]: time="2024-02-08T23:16:30.413397375Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb  8 23:16:30.528719 systemd-networkd[1456]: docker0: Link UP
Feb  8 23:16:30.551363 env[1657]: time="2024-02-08T23:16:30.551332489Z" level=info msg="Loading containers: done."
Feb  8 23:16:30.567336 env[1657]: time="2024-02-08T23:16:30.567298346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb  8 23:16:30.567560 env[1657]: time="2024-02-08T23:16:30.567535754Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb  8 23:16:30.567662 env[1657]: time="2024-02-08T23:16:30.567643458Z" level=info msg="Daemon has completed initialization"
Feb  8 23:16:30.601247 systemd[1]: Started docker.service.
Feb  8 23:16:30.611733 env[1657]: time="2024-02-08T23:16:30.611681894Z" level=info msg="API listen on /run/docker.sock"
Feb  8 23:16:30.628307 systemd[1]: Reloading.
Feb  8 23:16:30.701588 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2024-02-08T23:16:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:16:30.704569 /usr/lib/systemd/system-generators/torcx-generator[1786]: time="2024-02-08T23:16:30Z" level=info msg="torcx already run"
Feb  8 23:16:30.795789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:16:30.795810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:16:30.811708 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:16:30.899042 systemd[1]: Started kubelet.service.
Feb  8 23:16:30.971550 kubelet[1848]: E0208 23:16:30.971232    1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  8 23:16:30.973271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  8 23:16:30.973427 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  8 23:16:35.201044 env[1313]: time="2024-02-08T23:16:35.200989554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb  8 23:16:35.917466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180611226.mount: Deactivated successfully.
Feb  8 23:16:38.122571 env[1313]: time="2024-02-08T23:16:38.122500294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:38.129847 env[1313]: time="2024-02-08T23:16:38.129725298Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:38.134202 env[1313]: time="2024-02-08T23:16:38.134168723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:38.142489 env[1313]: time="2024-02-08T23:16:38.142459057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:38.143123 env[1313]: time="2024-02-08T23:16:38.143092975Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb  8 23:16:38.152927 env[1313]: time="2024-02-08T23:16:38.152901752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb  8 23:16:40.437937 env[1313]: time="2024-02-08T23:16:40.437822180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:40.445414 env[1313]: time="2024-02-08T23:16:40.445374182Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:40.451463 env[1313]: time="2024-02-08T23:16:40.451426944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:40.455339 env[1313]: time="2024-02-08T23:16:40.455306048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:40.456296 env[1313]: time="2024-02-08T23:16:40.456265174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb  8 23:16:40.466403 env[1313]: time="2024-02-08T23:16:40.466379145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb  8 23:16:40.840235 env[1313]: time="2024-02-08T23:16:40.840003556Z" level=error msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-scheduler:v1.28.6\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\": dial tcp: lookup prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com: no such host"
Feb  8 23:16:40.866527 env[1313]: time="2024-02-08T23:16:40.866483665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb  8 23:16:41.024934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb  8 23:16:41.025217 systemd[1]: Stopped kubelet.service.
Feb  8 23:16:41.026969 systemd[1]: Started kubelet.service.
Feb  8 23:16:41.076110 kubelet[1881]: E0208 23:16:41.076044    1881 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  8 23:16:41.079217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  8 23:16:41.079375 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  8 23:16:42.236946 env[1313]: time="2024-02-08T23:16:42.236889997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:42.246772 env[1313]: time="2024-02-08T23:16:42.246725748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:42.254964 env[1313]: time="2024-02-08T23:16:42.254926256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:42.259470 env[1313]: time="2024-02-08T23:16:42.259433871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:42.260104 env[1313]: time="2024-02-08T23:16:42.260073787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb  8 23:16:42.270026 env[1313]: time="2024-02-08T23:16:42.269999140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb  8 23:16:43.304276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762591131.mount: Deactivated successfully.
Feb  8 23:16:43.873160 env[1313]: time="2024-02-08T23:16:43.873099501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:43.880695 env[1313]: time="2024-02-08T23:16:43.880648988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:43.887215 env[1313]: time="2024-02-08T23:16:43.887177050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:43.893564 env[1313]: time="2024-02-08T23:16:43.893526708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:43.893959 env[1313]: time="2024-02-08T23:16:43.893927718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb  8 23:16:43.903815 env[1313]: time="2024-02-08T23:16:43.903787763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb  8 23:16:44.369992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806109842.mount: Deactivated successfully.
Feb  8 23:16:44.396388 env[1313]: time="2024-02-08T23:16:44.396342548Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:44.403440 env[1313]: time="2024-02-08T23:16:44.403345618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:44.410257 env[1313]: time="2024-02-08T23:16:44.410219984Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:44.415555 env[1313]: time="2024-02-08T23:16:44.415462811Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:44.416387 env[1313]: time="2024-02-08T23:16:44.416354833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb  8 23:16:44.425725 env[1313]: time="2024-02-08T23:16:44.425699259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb  8 23:16:44.908107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020974013.mount: Deactivated successfully.
Feb  8 23:16:49.550468 env[1313]: time="2024-02-08T23:16:49.550406505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:49.555724 env[1313]: time="2024-02-08T23:16:49.555659718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:49.559487 env[1313]: time="2024-02-08T23:16:49.559430598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:49.562906 env[1313]: time="2024-02-08T23:16:49.562854372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:49.563708 env[1313]: time="2024-02-08T23:16:49.563663689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\""
Feb  8 23:16:49.573472 env[1313]: time="2024-02-08T23:16:49.573432398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb  8 23:16:50.224868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613525584.mount: Deactivated successfully.
Feb  8 23:16:51.040499 env[1313]: time="2024-02-08T23:16:51.040442753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:51.050174 env[1313]: time="2024-02-08T23:16:51.050127451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:51.054120 env[1313]: time="2024-02-08T23:16:51.054081532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:51.058433 env[1313]: time="2024-02-08T23:16:51.058390720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:51.058883 env[1313]: time="2024-02-08T23:16:51.058848329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb  8 23:16:51.091885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb  8 23:16:51.092203 systemd[1]: Stopped kubelet.service.
Feb  8 23:16:51.093996 systemd[1]: Started kubelet.service.
Feb  8 23:16:51.145762 kubelet[1912]: E0208 23:16:51.145706    1912 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  8 23:16:51.148343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  8 23:16:51.148516 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  8 23:16:53.505255 systemd[1]: Stopped kubelet.service.
Feb  8 23:16:53.520669 systemd[1]: Reloading.
Feb  8 23:16:53.615499 /usr/lib/systemd/system-generators/torcx-generator[1996]: time="2024-02-08T23:16:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:16:53.615540 /usr/lib/systemd/system-generators/torcx-generator[1996]: time="2024-02-08T23:16:53Z" level=info msg="torcx already run"
Feb  8 23:16:53.694105 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:16:53.694126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:16:53.710208 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:16:53.805233 systemd[1]: Started kubelet.service.
Feb  8 23:16:53.859389 kubelet[2058]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:16:53.859733 kubelet[2058]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb  8 23:16:53.859778 kubelet[2058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:16:53.859909 kubelet[2058]: I0208 23:16:53.859875    2058 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  8 23:16:54.165734 kubelet[2058]: I0208 23:16:54.165244    2058 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb  8 23:16:54.165734 kubelet[2058]: I0208 23:16:54.165274    2058 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  8 23:16:54.165734 kubelet[2058]: I0208 23:16:54.165539    2058 server.go:895] "Client rotation is on, will bootstrap in background"
Feb  8 23:16:54.169954 kubelet[2058]: E0208 23:16:54.169928    2058 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.170150 kubelet[2058]: I0208 23:16:54.170131    2058 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  8 23:16:54.175358 kubelet[2058]: I0208 23:16:54.175334    2058 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  8 23:16:54.175591 kubelet[2058]: I0208 23:16:54.175569    2058 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  8 23:16:54.175760 kubelet[2058]: I0208 23:16:54.175734    2058 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb  8 23:16:54.175892 kubelet[2058]: I0208 23:16:54.175771    2058 topology_manager.go:138] "Creating topology manager with none policy"
Feb  8 23:16:54.175892 kubelet[2058]: I0208 23:16:54.175784    2058 container_manager_linux.go:301] "Creating device plugin manager"
Feb  8 23:16:54.175986 kubelet[2058]: I0208 23:16:54.175903    2058 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:16:54.176027 kubelet[2058]: I0208 23:16:54.176010    2058 kubelet.go:393] "Attempting to sync node with API server"
Feb  8 23:16:54.176080 kubelet[2058]: I0208 23:16:54.176030    2058 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  8 23:16:54.176126 kubelet[2058]: I0208 23:16:54.176082    2058 kubelet.go:309] "Adding apiserver pod source"
Feb  8 23:16:54.176126 kubelet[2058]: I0208 23:16:54.176106    2058 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  8 23:16:54.177654 kubelet[2058]: W0208 23:16:54.177539    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4203397181&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.177754 kubelet[2058]: E0208 23:16:54.177607    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4203397181&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.178106 kubelet[2058]: I0208 23:16:54.178085    2058 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  8 23:16:54.178705 kubelet[2058]: W0208 23:16:54.178684    2058 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  8 23:16:54.181851 kubelet[2058]: W0208 23:16:54.181809    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.181945 kubelet[2058]: E0208 23:16:54.181859    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.182272 kubelet[2058]: I0208 23:16:54.182253    2058 server.go:1232] "Started kubelet"
Feb  8 23:16:54.188419 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  8 23:16:54.188494 kubelet[2058]: I0208 23:16:54.185907    2058 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb  8 23:16:54.188494 kubelet[2058]: I0208 23:16:54.186503    2058 server.go:462] "Adding debug handlers to kubelet server"
Feb  8 23:16:54.188494 kubelet[2058]: I0208 23:16:54.187546    2058 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb  8 23:16:54.188494 kubelet[2058]: I0208 23:16:54.187713    2058 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb  8 23:16:54.188494 kubelet[2058]: E0208 23:16:54.188246    2058 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-4203397181.17b2065f15ae0b62", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-4203397181", UID:"ci-3510.3.2-a-4203397181", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4203397181"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 16, 54, 182226786, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 16, 54, 182226786, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-4203397181"}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping)
Feb  8 23:16:54.189170 kubelet[2058]: E0208 23:16:54.189148    2058 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  8 23:16:54.189255 kubelet[2058]: E0208 23:16:54.189176    2058 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  8 23:16:54.189328 kubelet[2058]: I0208 23:16:54.189317    2058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  8 23:16:54.191972 kubelet[2058]: I0208 23:16:54.191312    2058 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb  8 23:16:54.191972 kubelet[2058]: I0208 23:16:54.191405    2058 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  8 23:16:54.191972 kubelet[2058]: I0208 23:16:54.191462    2058 reconciler_new.go:29] "Reconciler: start to sync state"
Feb  8 23:16:54.191972 kubelet[2058]: W0208 23:16:54.191783    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.191972 kubelet[2058]: E0208 23:16:54.191829    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.192468 kubelet[2058]: E0208 23:16:54.192360    2058 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms"
Feb  8 23:16:54.214220 kubelet[2058]: I0208 23:16:54.214189    2058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb  8 23:16:54.215112 kubelet[2058]: I0208 23:16:54.215088    2058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb  8 23:16:54.215112 kubelet[2058]: I0208 23:16:54.215117    2058 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb  8 23:16:54.215268 kubelet[2058]: I0208 23:16:54.215139    2058 kubelet.go:2303] "Starting kubelet main sync loop"
Feb  8 23:16:54.215268 kubelet[2058]: E0208 23:16:54.215193    2058 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb  8 23:16:54.220214 kubelet[2058]: W0208 23:16:54.220162    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.220375 kubelet[2058]: E0208 23:16:54.220362    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:54.245881 kubelet[2058]: I0208 23:16:54.245841    2058 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  8 23:16:54.246100 kubelet[2058]: I0208 23:16:54.245945    2058 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  8 23:16:54.246100 kubelet[2058]: I0208 23:16:54.245970    2058 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:16:54.264989 kubelet[2058]: I0208 23:16:54.264951    2058 policy_none.go:49] "None policy: Start"
Feb  8 23:16:54.266002 kubelet[2058]: I0208 23:16:54.265968    2058 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  8 23:16:54.266002 kubelet[2058]: I0208 23:16:54.266001    2058 state_mem.go:35] "Initializing new in-memory state store"
Feb  8 23:16:54.278459 systemd[1]: Created slice kubepods.slice.
Feb  8 23:16:54.282605 systemd[1]: Created slice kubepods-burstable.slice.
Feb  8 23:16:54.285552 systemd[1]: Created slice kubepods-besteffort.slice.
Feb  8 23:16:54.292110 kubelet[2058]: I0208 23:16:54.292086    2058 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  8 23:16:54.292347 kubelet[2058]: I0208 23:16:54.292329    2058 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  8 23:16:54.294379 kubelet[2058]: E0208 23:16:54.293986    2058 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-4203397181\" not found"
Feb  8 23:16:54.296239 kubelet[2058]: I0208 23:16:54.296221    2058 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.296594 kubelet[2058]: E0208 23:16:54.296578    2058 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.315903 kubelet[2058]: I0208 23:16:54.315863    2058 topology_manager.go:215] "Topology Admit Handler" podUID="548d17ac7794c04911c1c531f10eb33e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.317480 kubelet[2058]: I0208 23:16:54.317455    2058 topology_manager.go:215] "Topology Admit Handler" podUID="0d3dea2670384361cc1b7e29a5df4fc9" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.318772 kubelet[2058]: I0208 23:16:54.318754    2058 topology_manager.go:215] "Topology Admit Handler" podUID="4de81e4b3fc317224c181e6184112201" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.324372 systemd[1]: Created slice kubepods-burstable-pod548d17ac7794c04911c1c531f10eb33e.slice.
Feb  8 23:16:54.333528 systemd[1]: Created slice kubepods-burstable-pod0d3dea2670384361cc1b7e29a5df4fc9.slice.
Feb  8 23:16:54.342174 systemd[1]: Created slice kubepods-burstable-pod4de81e4b3fc317224c181e6184112201.slice.
Feb  8 23:16:54.392930 kubelet[2058]: E0208 23:16:54.392886    2058 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms"
Feb  8 23:16:54.493818 kubelet[2058]: I0208 23:16:54.493273    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.493818 kubelet[2058]: I0208 23:16:54.493353    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.493818 kubelet[2058]: I0208 23:16:54.493392    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.493818 kubelet[2058]: I0208 23:16:54.493422    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.493818 kubelet[2058]: I0208 23:16:54.493457    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.494233 kubelet[2058]: I0208 23:16:54.493490    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4de81e4b3fc317224c181e6184112201-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4203397181\" (UID: \"4de81e4b3fc317224c181e6184112201\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.494233 kubelet[2058]: I0208 23:16:54.493521    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.494233 kubelet[2058]: I0208 23:16:54.493558    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.494233 kubelet[2058]: I0208 23:16:54.493596    2058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.498761 kubelet[2058]: I0208 23:16:54.498731    2058 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.499126 kubelet[2058]: E0208 23:16:54.499106    2058 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.632782 env[1313]: time="2024-02-08T23:16:54.632731352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4203397181,Uid:548d17ac7794c04911c1c531f10eb33e,Namespace:kube-system,Attempt:0,}"
Feb  8 23:16:54.636989 env[1313]: time="2024-02-08T23:16:54.636944433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4203397181,Uid:0d3dea2670384361cc1b7e29a5df4fc9,Namespace:kube-system,Attempt:0,}"
Feb  8 23:16:54.645187 env[1313]: time="2024-02-08T23:16:54.645153489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4203397181,Uid:4de81e4b3fc317224c181e6184112201,Namespace:kube-system,Attempt:0,}"
Feb  8 23:16:54.793398 kubelet[2058]: E0208 23:16:54.793360    2058 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms"
Feb  8 23:16:54.901673 kubelet[2058]: I0208 23:16:54.901640    2058 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:54.902342 kubelet[2058]: E0208 23:16:54.902315    2058 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:55.057525 kubelet[2058]: W0208 23:16:55.057388    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4203397181&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.057525 kubelet[2058]: E0208 23:16:55.057451    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4203397181&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.211822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228582266.mount: Deactivated successfully.
Feb  8 23:16:55.246206 env[1313]: time="2024-02-08T23:16:55.246144110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.250019 env[1313]: time="2024-02-08T23:16:55.249980381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.262824 env[1313]: time="2024-02-08T23:16:55.262780919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.267396 env[1313]: time="2024-02-08T23:16:55.267332503Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.271576 env[1313]: time="2024-02-08T23:16:55.271534482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.277861 env[1313]: time="2024-02-08T23:16:55.277818798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.279896 env[1313]: time="2024-02-08T23:16:55.279858236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.283997 env[1313]: time="2024-02-08T23:16:55.283961012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.296290 env[1313]: time="2024-02-08T23:16:55.296249841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.299601 env[1313]: time="2024-02-08T23:16:55.299564502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.320348 env[1313]: time="2024-02-08T23:16:55.319612575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.342162 env[1313]: time="2024-02-08T23:16:55.342108693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:16:55.399933 env[1313]: time="2024-02-08T23:16:55.399841265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:16:55.400263 env[1313]: time="2024-02-08T23:16:55.400222073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:16:55.400432 env[1313]: time="2024-02-08T23:16:55.400402576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:16:55.401065 env[1313]: time="2024-02-08T23:16:55.400972987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f pid=2100 runtime=io.containerd.runc.v2
Feb  8 23:16:55.401278 env[1313]: time="2024-02-08T23:16:55.401224391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:16:55.401416 env[1313]: time="2024-02-08T23:16:55.401264992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:16:55.401416 env[1313]: time="2024-02-08T23:16:55.401280392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:16:55.402326 env[1313]: time="2024-02-08T23:16:55.402275111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18 pid=2106 runtime=io.containerd.runc.v2
Feb  8 23:16:55.423195 systemd[1]: Started cri-containerd-b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18.scope.
Feb  8 23:16:55.442420 systemd[1]: Started cri-containerd-c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f.scope.
Feb  8 23:16:55.452650 env[1313]: time="2024-02-08T23:16:55.452554845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:16:55.452806 env[1313]: time="2024-02-08T23:16:55.452667647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:16:55.452806 env[1313]: time="2024-02-08T23:16:55.452697548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:16:55.453086 env[1313]: time="2024-02-08T23:16:55.452998953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/405302613ff5e0a4a82765ca3b478226d6077fffb91755a9611aeb35db83e2c7 pid=2152 runtime=io.containerd.runc.v2
Feb  8 23:16:55.488645 systemd[1]: Started cri-containerd-405302613ff5e0a4a82765ca3b478226d6077fffb91755a9611aeb35db83e2c7.scope.
Feb  8 23:16:55.525908 env[1313]: time="2024-02-08T23:16:55.525866407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4203397181,Uid:0d3dea2670384361cc1b7e29a5df4fc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18\""
Feb  8 23:16:55.531388 env[1313]: time="2024-02-08T23:16:55.531350209Z" level=info msg="CreateContainer within sandbox \"b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb  8 23:16:55.540879 env[1313]: time="2024-02-08T23:16:55.540841485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4203397181,Uid:4de81e4b3fc317224c181e6184112201,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f\""
Feb  8 23:16:55.543638 env[1313]: time="2024-02-08T23:16:55.543608437Z" level=info msg="CreateContainer within sandbox \"c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb  8 23:16:55.561164 env[1313]: time="2024-02-08T23:16:55.561129862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4203397181,Uid:548d17ac7794c04911c1c531f10eb33e,Namespace:kube-system,Attempt:0,} returns sandbox id \"405302613ff5e0a4a82765ca3b478226d6077fffb91755a9611aeb35db83e2c7\""
Feb  8 23:16:55.563438 env[1313]: time="2024-02-08T23:16:55.563410505Z" level=info msg="CreateContainer within sandbox \"405302613ff5e0a4a82765ca3b478226d6077fffb91755a9611aeb35db83e2c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb  8 23:16:55.588714 kubelet[2058]: W0208 23:16:55.586257    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.588714 kubelet[2058]: E0208 23:16:55.586320    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.596654 kubelet[2058]: E0208 23:16:55.596521    2058 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s"
Feb  8 23:16:55.682789 kubelet[2058]: W0208 23:16:55.682715    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.682789 kubelet[2058]: E0208 23:16:55.682793    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.704923 kubelet[2058]: I0208 23:16:55.704885    2058 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:55.705267 kubelet[2058]: E0208 23:16:55.705244    2058 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:55.809432 kubelet[2058]: W0208 23:16:55.809367    2058 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:55.809432 kubelet[2058]: E0208 23:16:55.809436    2058 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:56.038587 env[1313]: time="2024-02-08T23:16:56.038523716Z" level=info msg="CreateContainer within sandbox \"b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661\""
Feb  8 23:16:56.039330 env[1313]: time="2024-02-08T23:16:56.039297930Z" level=info msg="StartContainer for \"9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661\""
Feb  8 23:16:56.057046 env[1313]: time="2024-02-08T23:16:56.056430841Z" level=info msg="CreateContainer within sandbox \"c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0\""
Feb  8 23:16:56.057046 env[1313]: time="2024-02-08T23:16:56.056956251Z" level=info msg="StartContainer for \"9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0\""
Feb  8 23:16:56.058537 systemd[1]: Started cri-containerd-9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661.scope.
Feb  8 23:16:56.087431 env[1313]: time="2024-02-08T23:16:56.087375903Z" level=info msg="CreateContainer within sandbox \"405302613ff5e0a4a82765ca3b478226d6077fffb91755a9611aeb35db83e2c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"946bef7c25d3780c7eeb91a60b48d96f132aa9ec38d25f0fac5091c4d86fcd0b\""
Feb  8 23:16:56.091858 env[1313]: time="2024-02-08T23:16:56.091815684Z" level=info msg="StartContainer for \"946bef7c25d3780c7eeb91a60b48d96f132aa9ec38d25f0fac5091c4d86fcd0b\""
Feb  8 23:16:56.095690 systemd[1]: Started cri-containerd-9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0.scope.
Feb  8 23:16:56.136524 env[1313]: time="2024-02-08T23:16:56.136471195Z" level=info msg="StartContainer for \"9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661\" returns successfully"
Feb  8 23:16:56.140824 systemd[1]: Started cri-containerd-946bef7c25d3780c7eeb91a60b48d96f132aa9ec38d25f0fac5091c4d86fcd0b.scope.
Feb  8 23:16:56.194930 kubelet[2058]: E0208 23:16:56.194893    2058 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused
Feb  8 23:16:56.251394 env[1313]: time="2024-02-08T23:16:56.251343481Z" level=info msg="StartContainer for \"9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0\" returns successfully"
Feb  8 23:16:56.253229 env[1313]: time="2024-02-08T23:16:56.253177614Z" level=info msg="StartContainer for \"946bef7c25d3780c7eeb91a60b48d96f132aa9ec38d25f0fac5091c4d86fcd0b\" returns successfully"
Feb  8 23:16:57.307224 kubelet[2058]: I0208 23:16:57.307187    2058 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:58.182125 kubelet[2058]: I0208 23:16:58.182082    2058 apiserver.go:52] "Watching apiserver"
Feb  8 23:16:58.224338 kubelet[2058]: E0208 23:16:58.224300    2058 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-4203397181\" not found" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:58.224533 kubelet[2058]: I0208 23:16:58.224457    2058 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:16:58.291819 kubelet[2058]: I0208 23:16:58.291769    2058 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  8 23:16:58.303308 kubelet[2058]: E0208 23:16:58.303272    2058 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4203397181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:16:58.304135 kubelet[2058]: E0208 23:16:58.304108    2058 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-4203397181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-4203397181"
Feb  8 23:16:59.249324 kubelet[2058]: W0208 23:16:59.249280    2058 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb  8 23:17:00.903001 systemd[1]: Reloading.
Feb  8 23:17:00.995348 /usr/lib/systemd/system-generators/torcx-generator[2351]: time="2024-02-08T23:17:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:17:00.995781 /usr/lib/systemd/system-generators/torcx-generator[2351]: time="2024-02-08T23:17:00Z" level=info msg="torcx already run"
Feb  8 23:17:01.088043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:17:01.088072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:17:01.107883 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:17:01.221741 kubelet[2058]: I0208 23:17:01.221636    2058 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  8 23:17:01.222069 systemd[1]: Stopping kubelet.service...
Feb  8 23:17:01.238509 systemd[1]: kubelet.service: Deactivated successfully.
Feb  8 23:17:01.238740 systemd[1]: Stopped kubelet.service.
Feb  8 23:17:01.240827 systemd[1]: Started kubelet.service.
Feb  8 23:17:01.319386 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:17:01.319386 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb  8 23:17:01.319386 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:17:02.235832 kubelet[2415]: I0208 23:17:01.319431    2415 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  8 23:17:02.235832 kubelet[2415]: I0208 23:17:01.323838    2415 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb  8 23:17:02.235832 kubelet[2415]: I0208 23:17:01.323856    2415 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  8 23:17:02.235832 kubelet[2415]: I0208 23:17:01.324073    2415 server.go:895] "Client rotation is on, will bootstrap in background"
Feb  8 23:17:02.249656 kubelet[2415]: I0208 23:17:02.249599    2415 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb  8 23:17:02.252967 kubelet[2415]: I0208 23:17:02.252944    2415 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  8 23:17:02.259537 kubelet[2415]: I0208 23:17:02.259507    2415 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  8 23:17:02.259748 kubelet[2415]: I0208 23:17:02.259724    2415 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  8 23:17:02.259943 kubelet[2415]: I0208 23:17:02.259923    2415 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb  8 23:17:02.260116 kubelet[2415]: I0208 23:17:02.259955    2415 topology_manager.go:138] "Creating topology manager with none policy"
Feb  8 23:17:02.260116 kubelet[2415]: I0208 23:17:02.259967    2415 container_manager_linux.go:301] "Creating device plugin manager"
Feb  8 23:17:02.260116 kubelet[2415]: I0208 23:17:02.260006    2415 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:17:02.260239 kubelet[2415]: I0208 23:17:02.260128    2415 kubelet.go:393] "Attempting to sync node with API server"
Feb  8 23:17:02.260239 kubelet[2415]: I0208 23:17:02.260151    2415 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  8 23:17:02.260659 kubelet[2415]: I0208 23:17:02.260555    2415 kubelet.go:309] "Adding apiserver pod source"
Feb  8 23:17:02.263096 kubelet[2415]: I0208 23:17:02.263077    2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  8 23:17:02.273455 kubelet[2415]: I0208 23:17:02.273432    2415 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  8 23:17:02.274198 kubelet[2415]: I0208 23:17:02.274182    2415 server.go:1232] "Started kubelet"
Feb  8 23:17:02.275702 kubelet[2415]: I0208 23:17:02.275267    2415 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb  8 23:17:02.275702 kubelet[2415]: I0208 23:17:02.275544    2415 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb  8 23:17:02.275702 kubelet[2415]: I0208 23:17:02.275594    2415 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb  8 23:17:02.279080 kubelet[2415]: I0208 23:17:02.276733    2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  8 23:17:02.279080 kubelet[2415]: I0208 23:17:02.277247    2415 server.go:462] "Adding debug handlers to kubelet server"
Feb  8 23:17:02.279080 kubelet[2415]: E0208 23:17:02.278678    2415 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  8 23:17:02.279080 kubelet[2415]: E0208 23:17:02.278707    2415 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  8 23:17:02.282824 kubelet[2415]: I0208 23:17:02.282210    2415 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb  8 23:17:02.282824 kubelet[2415]: I0208 23:17:02.282311    2415 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  8 23:17:02.282824 kubelet[2415]: I0208 23:17:02.282448    2415 reconciler_new.go:29] "Reconciler: start to sync state"
Feb  8 23:17:02.294285 kubelet[2415]: I0208 23:17:02.293119    2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb  8 23:17:02.294285 kubelet[2415]: I0208 23:17:02.294250    2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb  8 23:17:02.294285 kubelet[2415]: I0208 23:17:02.294281    2415 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb  8 23:17:02.294503 kubelet[2415]: I0208 23:17:02.294300    2415 kubelet.go:2303] "Starting kubelet main sync loop"
Feb  8 23:17:02.294503 kubelet[2415]: E0208 23:17:02.294352    2415 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347613    2415 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347639    2415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347656    2415 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347865    2415 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347924    2415 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.347933    2415 policy_none.go:49] "None policy: Start"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.348524    2415 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  8 23:17:02.348613 kubelet[2415]: I0208 23:17:02.348545    2415 state_mem.go:35] "Initializing new in-memory state store"
Feb  8 23:17:02.349355 kubelet[2415]: I0208 23:17:02.348684    2415 state_mem.go:75] "Updated machine memory state"
Feb  8 23:17:02.352562 kubelet[2415]: I0208 23:17:02.352541    2415 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  8 23:17:02.352778 kubelet[2415]: I0208 23:17:02.352762    2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  8 23:17:02.385192 kubelet[2415]: I0208 23:17:02.385158    2415 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.392629 kubelet[2415]: I0208 23:17:02.392598    2415 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.392797 kubelet[2415]: I0208 23:17:02.392699    2415 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.394533 kubelet[2415]: I0208 23:17:02.394500    2415 topology_manager.go:215] "Topology Admit Handler" podUID="548d17ac7794c04911c1c531f10eb33e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.394649 kubelet[2415]: I0208 23:17:02.394616    2415 topology_manager.go:215] "Topology Admit Handler" podUID="0d3dea2670384361cc1b7e29a5df4fc9" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.394702 kubelet[2415]: I0208 23:17:02.394668    2415 topology_manager.go:215] "Topology Admit Handler" podUID="4de81e4b3fc317224c181e6184112201" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.405470 kubelet[2415]: W0208 23:17:02.404237    2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb  8 23:17:02.405841 kubelet[2415]: W0208 23:17:02.405816    2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb  8 23:17:02.410482 kubelet[2415]: W0208 23:17:02.410206    2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb  8 23:17:02.410482 kubelet[2415]: E0208 23:17:02.410268    2415 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4203397181\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
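All three warnings.go lines, and the "already exists" error, trace back to the node name: "ci-3510.3.2-a-4203397181" contains dots, so the static-pod names derived from it are not valid DNS labels (and those names feed the pod hostname), while the mirror-pod creation fails harmlessly because the mirror pods survived in the API server from the node's previous registration. The dots complaint is plain RFC 1123 label validation, easy to reproduce:

```python
import re

# DNS-1123 label: lowercase alphanumerics and '-', starts/ends alphanumeric, <= 63 chars
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_dns1123_label(name: str) -> bool:
    return len(name) <= 63 and bool(DNS1123_LABEL.match(name))

print(is_dns1123_label("kube-apiserver-ci-3510.3.2-a-4203397181"))  # False: dots disallowed
print(is_dns1123_label("kube-apiserver-ci-3510-3-2-a-4203397181"))  # True
```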
Feb  8 23:17:02.583981 kubelet[2415]: I0208 23:17:02.583933    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584232 kubelet[2415]: I0208 23:17:02.584030    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584232 kubelet[2415]: I0208 23:17:02.584097    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584232 kubelet[2415]: I0208 23:17:02.584135    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584232 kubelet[2415]: I0208 23:17:02.584167    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584232 kubelet[2415]: I0208 23:17:02.584201    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/548d17ac7794c04911c1c531f10eb33e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4203397181\" (UID: \"548d17ac7794c04911c1c531f10eb33e\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584518 kubelet[2415]: I0208 23:17:02.584232    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584518 kubelet[2415]: I0208 23:17:02.584264    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d3dea2670384361cc1b7e29a5df4fc9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4203397181\" (UID: \"0d3dea2670384361cc1b7e29a5df4fc9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181"
Feb  8 23:17:02.584518 kubelet[2415]: I0208 23:17:02.584317    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4de81e4b3fc317224c181e6184112201-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4203397181\" (UID: \"4de81e4b3fc317224c181e6184112201\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4203397181"
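This block records one VerifyControllerAttachedVolume operation per host-path volume of the three static control-plane pods (certs, kubeconfigs, flexvolume-dir). When auditing a captured journal, the volume-to-pod mapping can be pulled out of these structured klog lines with a short regex; a hypothetical extractor run over a trimmed copy of one line from this log:

```python
import re

VOLUME_RE = re.compile(r'started for volume \\"(?P<vol>[^"\\]+)\\".*?pod="(?P<pod>[^"]+)"')

sample = ('Feb  8 23:17:02.584518 kubelet[2415]: I0208 23:17:02.584317    2415 '
          'reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume '
          'started for volume \\"kubeconfig\\" (UniqueName: \\"kubernetes.io/host-path/'
          '4de81e4b3fc317224c181e6184112201-kubeconfig\\") pod '
          '\\"kube-scheduler-ci-3510.3.2-a-4203397181\\" " '
          'pod="kube-system/kube-scheduler-ci-3510.3.2-a-4203397181"')

m = VOLUME_RE.search(sample)
print(m.group("vol"), "->", m.group("pod"))
# kubeconfig -> kube-system/kube-scheduler-ci-3510.3.2-a-4203397181
```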
Feb  8 23:17:03.097024 sudo[2445]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb  8 23:17:03.097303 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb  8 23:17:03.264307 kubelet[2415]: I0208 23:17:03.264250    2415 apiserver.go:52] "Watching apiserver"
Feb  8 23:17:03.283100 kubelet[2415]: I0208 23:17:03.283038    2415 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  8 23:17:03.340180 kubelet[2415]: W0208 23:17:03.340156    2415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb  8 23:17:03.340466 kubelet[2415]: E0208 23:17:03.340451    2415 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4203397181\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181"
Feb  8 23:17:03.360830 kubelet[2415]: I0208 23:17:03.360703    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181" podStartSLOduration=4.360617515 podCreationTimestamp="2024-02-08 23:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:03.358417081 +0000 UTC m=+2.110898299" watchObservedRunningTime="2024-02-08 23:17:03.360617515 +0000 UTC m=+2.113098733"
Feb  8 23:17:03.367679 kubelet[2415]: I0208 23:17:03.367638    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4203397181" podStartSLOduration=1.367609224 podCreationTimestamp="2024-02-08 23:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:03.365913297 +0000 UTC m=+2.118394615" watchObservedRunningTime="2024-02-08 23:17:03.367609224 +0000 UTC m=+2.120090442"
Feb  8 23:17:03.644304 sudo[2445]: pam_unix(sudo:session): session closed for user root
Feb  8 23:17:05.224988 sudo[1642]: pam_unix(sudo:session): session closed for user root
Feb  8 23:17:05.322802 sshd[1639]: pam_unix(sshd:session): session closed for user core
Feb  8 23:17:05.326260 systemd[1]: sshd@4-10.200.8.39:22-10.200.12.6:54874.service: Deactivated successfully.
Feb  8 23:17:05.327387 systemd[1]: session-7.scope: Deactivated successfully.
Feb  8 23:17:05.327628 systemd[1]: session-7.scope: Consumed 3.646s CPU time.
Feb  8 23:17:05.328509 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit.
Feb  8 23:17:05.330842 systemd-logind[1301]: Removed session 7.
Feb  8 23:17:12.247227 kubelet[2415]: I0208 23:17:12.247189    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-4203397181" podStartSLOduration=10.247149213 podCreationTimestamp="2024-02-08 23:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:03.374616233 +0000 UTC m=+2.127097451" watchObservedRunningTime="2024-02-08 23:17:12.247149213 +0000 UTC m=+10.999630531"
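In these tracker lines, firstStartedPulling/lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC" are Go's zero time.Time: no image pull happened because the images were already on disk, and podStartSLOduration reduces to watchObservedRunningTime minus podCreationTimestamp. Checking that against the logged values (truncating nanoseconds to microseconds, since Python's datetime stops there):

```python
from datetime import datetime

def parse_k8s(ts: str) -> datetime:
    # e.g. "2024-02-08 23:17:12.247149213 +0000 UTC"
    date, clock, offset, _tz = ts.split()
    fmt = "%Y-%m-%d %H:%M:%S %z"
    if "." in clock:
        hms, frac = clock.split(".")
        clock, fmt = f"{hms}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f %z"
    return datetime.strptime(f"{date} {clock} {offset}", fmt)

created  = parse_k8s("2024-02-08 23:17:02 +0000 UTC")
observed = parse_k8s("2024-02-08 23:17:12.247149213 +0000 UTC")
print((observed - created).total_seconds())  # 10.247149, matching podStartSLOduration
```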
Feb  8 23:17:15.583598 kubelet[2415]: I0208 23:17:15.583558    2415 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb  8 23:17:15.584168 env[1313]: time="2024-02-08T23:17:15.584077927Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  8 23:17:15.584575 kubelet[2415]: I0208 23:17:15.584328    2415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
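The node has just been handed its pod CIDR: the kubelet pushes 192.168.0.0/24 through CRI, and containerd notes it will wait for a CNI plugin (Cilium, below) to drop a config. Every pod scheduled here gets an address from that /24, which the standard library can sanity-check (the two sample addresses are hypothetical):

```python
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")
print(pod_cidr.num_addresses)                            # 256 addresses for this node
print(ipaddress.ip_address("192.168.0.17") in pod_cidr)  # True: plausible local pod IP
print(ipaddress.ip_address("192.168.1.17") in pod_cidr)  # False: another node's range
```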
Feb  8 23:17:16.197638 kubelet[2415]: I0208 23:17:16.197578    2415 topology_manager.go:215] "Topology Admit Handler" podUID="946a734c-e015-4374-9310-02151612cc1d" podNamespace="kube-system" podName="kube-proxy-shvtn"
Feb  8 23:17:16.205538 systemd[1]: Created slice kubepods-besteffort-pod946a734c_e015_4374_9310_02151612cc1d.slice.
Feb  8 23:17:16.226659 kubelet[2415]: I0208 23:17:16.226617    2415 topology_manager.go:215] "Topology Admit Handler" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" podNamespace="kube-system" podName="cilium-w499b"
Feb  8 23:17:16.234334 systemd[1]: Created slice kubepods-burstable-pode65bf645_6a14_478e_b24a_92e612e26db3.slice.
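Note how the two slice names encode QoS class and pod UID: kube-proxy (no resource requests) lands in kubepods-besteffort-*, the cilium agent in kubepods-burstable-*, and the UID's dashes are escaped to underscores because "-" is the slice hierarchy separator in systemd. Reconstructing the mapping seen in these two lines (an inferred sketch, not the kubelet's actual Go code):

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    # systemd uses "-" for slice hierarchy levels, so the UID's dashes become "_"
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice("besteffort", "946a734c-e015-4374-9310-02151612cc1d"))
# kubepods-besteffort-pod946a734c_e015_4374_9310_02151612cc1d.slice
print(pod_slice("burstable", "e65bf645-6a14-478e-b24a-92e612e26db3"))
# kubepods-burstable-pode65bf645_6a14_478e_b24a_92e612e26db3.slice
```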
Feb  8 23:17:16.272777 kubelet[2415]: I0208 23:17:16.272721    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e65bf645-6a14-478e-b24a-92e612e26db3-clustermesh-secrets\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.272777 kubelet[2415]: I0208 23:17:16.272777    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/946a734c-e015-4374-9310-02151612cc1d-xtables-lock\") pod \"kube-proxy-shvtn\" (UID: \"946a734c-e015-4374-9310-02151612cc1d\") " pod="kube-system/kube-proxy-shvtn"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272820    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cni-path\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272848    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-xtables-lock\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272884    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/946a734c-e015-4374-9310-02151612cc1d-lib-modules\") pod \"kube-proxy-shvtn\" (UID: \"946a734c-e015-4374-9310-02151612cc1d\") " pod="kube-system/kube-proxy-shvtn"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272912    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-bpf-maps\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272935    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-lib-modules\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273067 kubelet[2415]: I0208 23:17:16.272972    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-net\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.272996    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-hostproc\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.273035    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-hubble-tls\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.273069    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npcz\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-kube-api-access-6npcz\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.273096    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/946a734c-e015-4374-9310-02151612cc1d-kube-proxy\") pod \"kube-proxy-shvtn\" (UID: \"946a734c-e015-4374-9310-02151612cc1d\") " pod="kube-system/kube-proxy-shvtn"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.273139    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-config-path\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273329 kubelet[2415]: I0208 23:17:16.273170    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-kernel\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273572 kubelet[2415]: I0208 23:17:16.273243    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbzl\" (UniqueName: \"kubernetes.io/projected/946a734c-e015-4374-9310-02151612cc1d-kube-api-access-fkbzl\") pod \"kube-proxy-shvtn\" (UID: \"946a734c-e015-4374-9310-02151612cc1d\") " pod="kube-system/kube-proxy-shvtn"
Feb  8 23:17:16.273572 kubelet[2415]: I0208 23:17:16.273291    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-cgroup\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273572 kubelet[2415]: I0208 23:17:16.273322    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-run\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.273572 kubelet[2415]: I0208 23:17:16.273367    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-etc-cni-netd\") pod \"cilium-w499b\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") " pod="kube-system/cilium-w499b"
Feb  8 23:17:16.517847 env[1313]: time="2024-02-08T23:17:16.517699749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shvtn,Uid:946a734c-e015-4374-9310-02151612cc1d,Namespace:kube-system,Attempt:0,}"
Feb  8 23:17:16.541095 env[1313]: time="2024-02-08T23:17:16.540711925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w499b,Uid:e65bf645-6a14-478e-b24a-92e612e26db3,Namespace:kube-system,Attempt:0,}"
Feb  8 23:17:16.553874 kubelet[2415]: I0208 23:17:16.553833    2415 topology_manager.go:215] "Topology Admit Handler" podUID="efa2ee26-01ef-4375-8f97-1e8a8e275b08" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-m6jhr"
Feb  8 23:17:16.563736 systemd[1]: Created slice kubepods-besteffort-podefa2ee26_01ef_4375_8f97_1e8a8e275b08.slice.
Feb  8 23:17:16.564681 env[1313]: time="2024-02-08T23:17:16.564604613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:17:16.564761 env[1313]: time="2024-02-08T23:17:16.564685614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:17:16.564761 env[1313]: time="2024-02-08T23:17:16.564715414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:17:16.564964 env[1313]: time="2024-02-08T23:17:16.564880616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98a710ce5554c4d5b3a4a96c0c506ab22c56d8bb5c80698730f26c5c50cd21b9 pid=2494 runtime=io.containerd.runc.v2
Feb  8 23:17:16.577083 kubelet[2415]: I0208 23:17:16.574794    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efa2ee26-01ef-4375-8f97-1e8a8e275b08-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-m6jhr\" (UID: \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\") " pod="kube-system/cilium-operator-6bc8ccdb58-m6jhr"
Feb  8 23:17:16.577083 kubelet[2415]: I0208 23:17:16.574847    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqdh\" (UniqueName: \"kubernetes.io/projected/efa2ee26-01ef-4375-8f97-1e8a8e275b08-kube-api-access-pkqdh\") pod \"cilium-operator-6bc8ccdb58-m6jhr\" (UID: \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\") " pod="kube-system/cilium-operator-6bc8ccdb58-m6jhr"
Feb  8 23:17:16.592562 systemd[1]: Started cri-containerd-98a710ce5554c4d5b3a4a96c0c506ab22c56d8bb5c80698730f26c5c50cd21b9.scope.
Feb  8 23:17:16.610288 env[1313]: time="2024-02-08T23:17:16.610208361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:17:16.610725 env[1313]: time="2024-02-08T23:17:16.610321663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:17:16.610725 env[1313]: time="2024-02-08T23:17:16.610358663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:17:16.610725 env[1313]: time="2024-02-08T23:17:16.610508865Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb pid=2519 runtime=io.containerd.runc.v2
Feb  8 23:17:16.628353 systemd[1]: Started cri-containerd-58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb.scope.
Feb  8 23:17:16.674785 env[1313]: time="2024-02-08T23:17:16.674724537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w499b,Uid:e65bf645-6a14-478e-b24a-92e612e26db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\""
Feb  8 23:17:16.682893 env[1313]: time="2024-02-08T23:17:16.682838835Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  8 23:17:16.691031 env[1313]: time="2024-02-08T23:17:16.690953632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shvtn,Uid:946a734c-e015-4374-9310-02151612cc1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"98a710ce5554c4d5b3a4a96c0c506ab22c56d8bb5c80698730f26c5c50cd21b9\""
Feb  8 23:17:16.693863 env[1313]: time="2024-02-08T23:17:16.693825667Z" level=info msg="CreateContainer within sandbox \"98a710ce5554c4d5b3a4a96c0c506ab22c56d8bb5c80698730f26c5c50cd21b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  8 23:17:16.758286 env[1313]: time="2024-02-08T23:17:16.758227742Z" level=info msg="CreateContainer within sandbox \"98a710ce5554c4d5b3a4a96c0c506ab22c56d8bb5c80698730f26c5c50cd21b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d0a6d1cc80c8026eb88b2fcd4f224f4eb1117ac24d8c1ce404af9b8591d4997\""
Feb  8 23:17:16.759176 env[1313]: time="2024-02-08T23:17:16.759143053Z" level=info msg="StartContainer for \"1d0a6d1cc80c8026eb88b2fcd4f224f4eb1117ac24d8c1ce404af9b8591d4997\""
Feb  8 23:17:16.780685 systemd[1]: Started cri-containerd-1d0a6d1cc80c8026eb88b2fcd4f224f4eb1117ac24d8c1ce404af9b8591d4997.scope.
Feb  8 23:17:16.822593 env[1313]: time="2024-02-08T23:17:16.822546515Z" level=info msg="StartContainer for \"1d0a6d1cc80c8026eb88b2fcd4f224f4eb1117ac24d8c1ce404af9b8591d4997\" returns successfully"
Feb  8 23:17:16.869544 env[1313]: time="2024-02-08T23:17:16.869506080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-m6jhr,Uid:efa2ee26-01ef-4375-8f97-1e8a8e275b08,Namespace:kube-system,Attempt:0,}"
Feb  8 23:17:16.902886 env[1313]: time="2024-02-08T23:17:16.902820581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:17:16.903117 env[1313]: time="2024-02-08T23:17:16.903077884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:17:16.903223 env[1313]: time="2024-02-08T23:17:16.903204885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:17:16.903515 env[1313]: time="2024-02-08T23:17:16.903471688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc pid=2645 runtime=io.containerd.runc.v2
Feb  8 23:17:16.918730 systemd[1]: Started cri-containerd-253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc.scope.
Feb  8 23:17:16.964770 env[1313]: time="2024-02-08T23:17:16.964718725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-m6jhr,Uid:efa2ee26-01ef-4375-8f97-1e8a8e275b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\""
Feb  8 23:17:17.364827 kubelet[2415]: I0208 23:17:17.364795    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-shvtn" podStartSLOduration=1.364742258 podCreationTimestamp="2024-02-08 23:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:17.364368153 +0000 UTC m=+16.116849371" watchObservedRunningTime="2024-02-08 23:17:17.364742258 +0000 UTC m=+16.117223476"
Feb  8 23:17:22.304388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493961423.mount: Deactivated successfully.
Feb  8 23:17:25.031708 env[1313]: time="2024-02-08T23:17:25.031651359Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:25.043974 env[1313]: time="2024-02-08T23:17:25.043925086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:25.052158 env[1313]: time="2024-02-08T23:17:25.052113670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:25.052697 env[1313]: time="2024-02-08T23:17:25.052657476Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb  8 23:17:25.055733 env[1313]: time="2024-02-08T23:17:25.054266792Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  8 23:17:25.055733 env[1313]: time="2024-02-08T23:17:25.055566206Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:17:25.093465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165643504.mount: Deactivated successfully.
Feb  8 23:17:25.100923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3881201074.mount: Deactivated successfully.
Feb  8 23:17:25.111072 env[1313]: time="2024-02-08T23:17:25.111016777Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\""
Feb  8 23:17:25.111780 env[1313]: time="2024-02-08T23:17:25.111749185Z" level=info msg="StartContainer for \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\""
Feb  8 23:17:25.132603 systemd[1]: Started cri-containerd-19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c.scope.
Feb  8 23:17:25.169918 env[1313]: time="2024-02-08T23:17:25.169857084Z" level=info msg="StartContainer for \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\" returns successfully"
Feb  8 23:17:25.176390 systemd[1]: cri-containerd-19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c.scope: Deactivated successfully.
Feb  8 23:17:26.091878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c-rootfs.mount: Deactivated successfully.
Feb  8 23:17:28.889526 env[1313]: time="2024-02-08T23:17:28.889452528Z" level=info msg="shim disconnected" id=19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c
Feb  8 23:17:28.890128 env[1313]: time="2024-02-08T23:17:28.889553829Z" level=warning msg="cleaning up after shim disconnected" id=19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c namespace=k8s.io
Feb  8 23:17:28.890128 env[1313]: time="2024-02-08T23:17:28.889575629Z" level=info msg="cleaning up dead shim"
Feb  8 23:17:28.900706 env[1313]: time="2024-02-08T23:17:28.900665438Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2817 runtime=io.containerd.runc.v2\n"
Feb  8 23:17:29.392807 env[1313]: time="2024-02-08T23:17:29.392759021Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  8 23:17:29.433538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323868574.mount: Deactivated successfully.
Feb  8 23:17:29.437784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627299535.mount: Deactivated successfully.
Feb  8 23:17:29.443913 env[1313]: time="2024-02-08T23:17:29.443860316Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\""
Feb  8 23:17:29.446640 env[1313]: time="2024-02-08T23:17:29.446600343Z" level=info msg="StartContainer for \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\""
Feb  8 23:17:29.491073 systemd[1]: Started cri-containerd-9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b.scope.
Feb  8 23:17:29.562478 env[1313]: time="2024-02-08T23:17:29.562242163Z" level=info msg="StartContainer for \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\" returns successfully"
Feb  8 23:17:29.575192 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  8 23:17:29.575510 systemd[1]: Stopped systemd-sysctl.service.
Feb  8 23:17:29.576228 systemd[1]: Stopping systemd-sysctl.service...
Feb  8 23:17:29.579521 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:17:29.580022 systemd[1]: cri-containerd-9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b.scope: Deactivated successfully.
Feb  8 23:17:29.592474 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:17:29.664281 env[1313]: time="2024-02-08T23:17:29.663498145Z" level=info msg="shim disconnected" id=9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b
Feb  8 23:17:29.664281 env[1313]: time="2024-02-08T23:17:29.663542345Z" level=warning msg="cleaning up after shim disconnected" id=9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b namespace=k8s.io
Feb  8 23:17:29.664281 env[1313]: time="2024-02-08T23:17:29.663553345Z" level=info msg="cleaning up dead shim"
Feb  8 23:17:29.671685 env[1313]: time="2024-02-08T23:17:29.671618223Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n"
Feb  8 23:17:30.400500 env[1313]: time="2024-02-08T23:17:30.400452928Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  8 23:17:30.425887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b-rootfs.mount: Deactivated successfully.
Feb  8 23:17:30.513391 env[1313]: time="2024-02-08T23:17:30.513339006Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\""
Feb  8 23:17:30.514017 env[1313]: time="2024-02-08T23:17:30.513979012Z" level=info msg="StartContainer for \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\""
Feb  8 23:17:30.545561 systemd[1]: Started cri-containerd-755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7.scope.
Feb  8 23:17:30.580286 systemd[1]: cri-containerd-755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7.scope: Deactivated successfully.
Feb  8 23:17:30.582005 env[1313]: time="2024-02-08T23:17:30.581959061Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:30.587888 env[1313]: time="2024-02-08T23:17:30.587852517Z" level=info msg="StartContainer for \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\" returns successfully"
Feb  8 23:17:30.594527 env[1313]: time="2024-02-08T23:17:30.594492380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:30.600511 env[1313]: time="2024-02-08T23:17:30.600472538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:17:30.600978 env[1313]: time="2024-02-08T23:17:30.600938442Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb  8 23:17:30.606984 env[1313]: time="2024-02-08T23:17:30.606952899Z" level=info msg="CreateContainer within sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  8 23:17:31.053773 env[1313]: time="2024-02-08T23:17:31.053720357Z" level=info msg="shim disconnected" id=755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7
Feb  8 23:17:31.053773 env[1313]: time="2024-02-08T23:17:31.053769858Z" level=warning msg="cleaning up after shim disconnected" id=755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7 namespace=k8s.io
Feb  8 23:17:31.054083 env[1313]: time="2024-02-08T23:17:31.053782158Z" level=info msg="cleaning up dead shim"
Feb  8 23:17:31.062039 env[1313]: time="2024-02-08T23:17:31.061989635Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2940 runtime=io.containerd.runc.v2\n"
Feb  8 23:17:31.094769 env[1313]: time="2024-02-08T23:17:31.094714343Z" level=info msg="CreateContainer within sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\""
Feb  8 23:17:31.095451 env[1313]: time="2024-02-08T23:17:31.095348749Z" level=info msg="StartContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\""
Feb  8 23:17:31.114131 systemd[1]: Started cri-containerd-11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323.scope.
Feb  8 23:17:31.148009 env[1313]: time="2024-02-08T23:17:31.147949044Z" level=info msg="StartContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" returns successfully"
Feb  8 23:17:31.395090 env[1313]: time="2024-02-08T23:17:31.392210842Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  8 23:17:31.428117 kubelet[2415]: I0208 23:17:31.424990    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-m6jhr" podStartSLOduration=1.789338343 podCreationTimestamp="2024-02-08 23:17:16 +0000 UTC" firstStartedPulling="2024-02-08 23:17:16.966093342 +0000 UTC m=+15.718574560" lastFinishedPulling="2024-02-08 23:17:30.601699849 +0000 UTC m=+29.354181067" observedRunningTime="2024-02-08 23:17:31.399589611 +0000 UTC m=+30.152070829" watchObservedRunningTime="2024-02-08 23:17:31.42494485 +0000 UTC m=+30.177426068"
Feb  8 23:17:31.427729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7-rootfs.mount: Deactivated successfully.
Feb  8 23:17:31.446142 env[1313]: time="2024-02-08T23:17:31.446081948Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\""
Feb  8 23:17:31.447433 env[1313]: time="2024-02-08T23:17:31.447395361Z" level=info msg="StartContainer for \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\""
Feb  8 23:17:31.473843 systemd[1]: Started cri-containerd-00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25.scope.
Feb  8 23:17:31.539816 systemd[1]: cri-containerd-00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25.scope: Deactivated successfully.
Feb  8 23:17:31.541331 env[1313]: time="2024-02-08T23:17:31.541259144Z" level=info msg="StartContainer for \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\" returns successfully"
Feb  8 23:17:31.561728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25-rootfs.mount: Deactivated successfully.
Feb  8 23:17:31.604965 env[1313]: time="2024-02-08T23:17:31.604906743Z" level=info msg="shim disconnected" id=00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25
Feb  8 23:17:31.604965 env[1313]: time="2024-02-08T23:17:31.604962543Z" level=warning msg="cleaning up after shim disconnected" id=00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25 namespace=k8s.io
Feb  8 23:17:31.604965 env[1313]: time="2024-02-08T23:17:31.604973143Z" level=info msg="cleaning up dead shim"
Feb  8 23:17:31.622163 env[1313]: time="2024-02-08T23:17:31.622110605Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3034 runtime=io.containerd.runc.v2\n"
Feb  8 23:17:32.397299 env[1313]: time="2024-02-08T23:17:32.397247844Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  8 23:17:32.441911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883756100.mount: Deactivated successfully.
Feb  8 23:17:32.454853 env[1313]: time="2024-02-08T23:17:32.454751277Z" level=info msg="CreateContainer within sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\""
Feb  8 23:17:32.455722 env[1313]: time="2024-02-08T23:17:32.455692386Z" level=info msg="StartContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\""
Feb  8 23:17:32.475777 systemd[1]: Started cri-containerd-79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264.scope.
Feb  8 23:17:32.507223 env[1313]: time="2024-02-08T23:17:32.507169263Z" level=info msg="StartContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" returns successfully"
Feb  8 23:17:32.624865 kubelet[2415]: I0208 23:17:32.624839    2415 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  8 23:17:32.661297 kubelet[2415]: I0208 23:17:32.661195    2415 topology_manager.go:215] "Topology Admit Handler" podUID="cb53f95e-7689-4410-9246-0d3a04c3fa60" podNamespace="kube-system" podName="coredns-5dd5756b68-96nfq"
Feb  8 23:17:32.668032 systemd[1]: Created slice kubepods-burstable-podcb53f95e_7689_4410_9246_0d3a04c3fa60.slice.
Feb  8 23:17:32.669232 kubelet[2415]: I0208 23:17:32.669205    2415 topology_manager.go:215] "Topology Admit Handler" podUID="c9ff389d-5da1-4e8c-ad6a-7a40ec32f547" podNamespace="kube-system" podName="coredns-5dd5756b68-22c6m"
Feb  8 23:17:32.678249 systemd[1]: Created slice kubepods-burstable-podc9ff389d_5da1_4e8c_ad6a_7a40ec32f547.slice.
Feb  8 23:17:32.790250 kubelet[2415]: I0208 23:17:32.790215    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb53f95e-7689-4410-9246-0d3a04c3fa60-config-volume\") pod \"coredns-5dd5756b68-96nfq\" (UID: \"cb53f95e-7689-4410-9246-0d3a04c3fa60\") " pod="kube-system/coredns-5dd5756b68-96nfq"
Feb  8 23:17:32.790250 kubelet[2415]: I0208 23:17:32.790258    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9ff389d-5da1-4e8c-ad6a-7a40ec32f547-config-volume\") pod \"coredns-5dd5756b68-22c6m\" (UID: \"c9ff389d-5da1-4e8c-ad6a-7a40ec32f547\") " pod="kube-system/coredns-5dd5756b68-22c6m"
Feb  8 23:17:32.790475 kubelet[2415]: I0208 23:17:32.790300    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8cw\" (UniqueName: \"kubernetes.io/projected/c9ff389d-5da1-4e8c-ad6a-7a40ec32f547-kube-api-access-bm8cw\") pod \"coredns-5dd5756b68-22c6m\" (UID: \"c9ff389d-5da1-4e8c-ad6a-7a40ec32f547\") " pod="kube-system/coredns-5dd5756b68-22c6m"
Feb  8 23:17:32.790475 kubelet[2415]: I0208 23:17:32.790334    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slpc\" (UniqueName: \"kubernetes.io/projected/cb53f95e-7689-4410-9246-0d3a04c3fa60-kube-api-access-6slpc\") pod \"coredns-5dd5756b68-96nfq\" (UID: \"cb53f95e-7689-4410-9246-0d3a04c3fa60\") " pod="kube-system/coredns-5dd5756b68-96nfq"
Feb  8 23:17:32.972874 env[1313]: time="2024-02-08T23:17:32.972747481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-96nfq,Uid:cb53f95e-7689-4410-9246-0d3a04c3fa60,Namespace:kube-system,Attempt:0,}"
Feb  8 23:17:32.984316 env[1313]: time="2024-02-08T23:17:32.984280288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22c6m,Uid:c9ff389d-5da1-4e8c-ad6a-7a40ec32f547,Namespace:kube-system,Attempt:0,}"
Feb  8 23:17:33.411245 kubelet[2415]: I0208 23:17:33.411204    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w499b" podStartSLOduration=9.039833535 podCreationTimestamp="2024-02-08 23:17:16 +0000 UTC" firstStartedPulling="2024-02-08 23:17:16.681972824 +0000 UTC m=+15.434454042" lastFinishedPulling="2024-02-08 23:17:25.053299782 +0000 UTC m=+23.805781000" observedRunningTime="2024-02-08 23:17:33.410297185 +0000 UTC m=+32.162778403" watchObservedRunningTime="2024-02-08 23:17:33.411160493 +0000 UTC m=+32.163641711"
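Unlike the earlier trackers, this entry has real pull timestamps, and the logged values are consistent with the SLO duration being measured net of image pulling: watchObservedRunningTime minus podCreationTimestamp minus the pull window reproduces podStartSLOduration almost exactly. Redoing the arithmetic with the values above (expressed as seconds past 23:17:00):

```python
created        = 16.0            # podCreationTimestamp 23:17:16
pull_started   = 16.681972824    # firstStartedPulling
pull_finished  = 25.053299782    # lastFinishedPulling
watch_observed = 33.411160493    # watchObservedRunningTime

pull = pull_finished - pull_started
print(f"image pull window: {pull:.6f}s")                         # 8.371327
print(f"net SLO duration:  {watch_observed - created - pull:.6f}s")
# 9.039834 ~ the logged podStartSLOduration=9.039833535
```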
Feb  8 23:17:34.960840 systemd-networkd[1456]: cilium_host: Link UP
Feb  8 23:17:34.960955 systemd-networkd[1456]: cilium_net: Link UP
Feb  8 23:17:34.964278 systemd-networkd[1456]: cilium_net: Gained carrier
Feb  8 23:17:34.968843 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb  8 23:17:34.968934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  8 23:17:34.969185 systemd-networkd[1456]: cilium_host: Gained carrier
Feb  8 23:17:35.162880 systemd-networkd[1456]: cilium_vxlan: Link UP
Feb  8 23:17:35.162890 systemd-networkd[1456]: cilium_vxlan: Gained carrier
Feb  8 23:17:35.455125 kernel: NET: Registered PF_ALG protocol family
Feb  8 23:17:35.756213 systemd-networkd[1456]: cilium_net: Gained IPv6LL
Feb  8 23:17:35.884240 systemd-networkd[1456]: cilium_host: Gained IPv6LL
Feb  8 23:17:36.162496 systemd-networkd[1456]: lxc_health: Link UP
Feb  8 23:17:36.180269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  8 23:17:36.181459 systemd-networkd[1456]: lxc_health: Gained carrier
Feb  8 23:17:36.546168 systemd-networkd[1456]: lxcf149f39a24fc: Link UP
Feb  8 23:17:36.553069 kernel: eth0: renamed from tmp1ecd6
Feb  8 23:17:36.565386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf149f39a24fc: link becomes ready
Feb  8 23:17:36.565507 systemd-networkd[1456]: lxcf149f39a24fc: Gained carrier
Feb  8 23:17:36.601623 systemd-networkd[1456]: lxc7099d396dc7c: Link UP
Feb  8 23:17:36.611150 kernel: eth0: renamed from tmp27dd7
Feb  8 23:17:36.622164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7099d396dc7c: link becomes ready
Feb  8 23:17:36.621887 systemd-networkd[1456]: lxc7099d396dc7c: Gained carrier
Feb  8 23:17:36.972202 systemd-networkd[1456]: cilium_vxlan: Gained IPv6LL
Feb  8 23:17:37.612252 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Feb  8 23:17:38.060254 systemd-networkd[1456]: lxc7099d396dc7c: Gained IPv6LL
Feb  8 23:17:38.252199 systemd-networkd[1456]: lxcf149f39a24fc: Gained IPv6LL
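The datapath is now fully up: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health, and one lxc* host-side veth per coredns pod, each gaining an IPv6 link-local address ("IPv6LL"). The kernel's "renamed from tmp1ecd6"/"tmp27dd7" lines also line up with the coredns sandbox IDs in the containerd entries at 23:17:40 below; observationally (inferred from this log, not from CNI source), the pod's eth0 starts life under "tmp" plus the sandbox ID's leading hex digits and is renamed once moved into the pod's network namespace:

```python
# Sandbox IDs copied from the containerd "RunPodSandbox ... returns sandbox id" lines below
sandboxes = {
    "coredns-5dd5756b68-22c6m": "27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e",
    "coredns-5dd5756b68-96nfq": "1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708",
}
for pod, sid in sandboxes.items():
    print(pod, "->", f"tmp{sid[:5]}")
# coredns-5dd5756b68-22c6m -> tmp27dd7
# coredns-5dd5756b68-96nfq -> tmp1ecd6   (matches the kernel rename lines above)
```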
Feb  8 23:17:40.384996 env[1313]: time="2024-02-08T23:17:40.384929440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:17:40.385601 env[1313]: time="2024-02-08T23:17:40.385569145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:17:40.385755 env[1313]: time="2024-02-08T23:17:40.385729247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:17:40.386015 env[1313]: time="2024-02-08T23:17:40.385983249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e pid=3586 runtime=io.containerd.runc.v2
Feb  8 23:17:40.414931 systemd[1]: run-containerd-runc-k8s.io-27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e-runc.ytUqvu.mount: Deactivated successfully.
Feb  8 23:17:40.426168 systemd[1]: Started cri-containerd-27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e.scope.
Feb  8 23:17:40.440762 env[1313]: time="2024-02-08T23:17:40.440677205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:17:40.440971 env[1313]: time="2024-02-08T23:17:40.440940607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:17:40.441140 env[1313]: time="2024-02-08T23:17:40.441112609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:17:40.441584 env[1313]: time="2024-02-08T23:17:40.441528912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708 pid=3620 runtime=io.containerd.runc.v2
Feb  8 23:17:40.463326 systemd[1]: Started cri-containerd-1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708.scope.
Feb  8 23:17:40.533258 env[1313]: time="2024-02-08T23:17:40.533201877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-96nfq,Uid:cb53f95e-7689-4410-9246-0d3a04c3fa60,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708\""
Feb  8 23:17:40.536523 env[1313]: time="2024-02-08T23:17:40.536486804Z" level=info msg="CreateContainer within sandbox \"1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  8 23:17:40.568123 env[1313]: time="2024-02-08T23:17:40.568030367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22c6m,Uid:c9ff389d-5da1-4e8c-ad6a-7a40ec32f547,Namespace:kube-system,Attempt:0,} returns sandbox id \"27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e\""
Feb  8 23:17:40.573578 env[1313]: time="2024-02-08T23:17:40.573546513Z" level=info msg="CreateContainer within sandbox \"27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  8 23:17:40.594547 env[1313]: time="2024-02-08T23:17:40.594488988Z" level=info msg="CreateContainer within sandbox \"1ecd642967d3174dd9b02741de60f693cf028a032ab4dfb663c12d5e2f889708\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36cbd4aafb69266b0f866b534779966ffadd5b3cda3594566ed5522c5ffa3eae\""
Feb  8 23:17:40.596820 env[1313]: time="2024-02-08T23:17:40.596781407Z" level=info msg="StartContainer for \"36cbd4aafb69266b0f866b534779966ffadd5b3cda3594566ed5522c5ffa3eae\""
Feb  8 23:17:40.627072 systemd[1]: Started cri-containerd-36cbd4aafb69266b0f866b534779966ffadd5b3cda3594566ed5522c5ffa3eae.scope.
Feb  8 23:17:40.651796 env[1313]: time="2024-02-08T23:17:40.651458663Z" level=info msg="CreateContainer within sandbox \"27dd77404f97aef7a93602f29efde4cc1093613c6003921f1010b460cff9087e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d817550296948f4110ae280f7c8c43234972c48eba71bb6f609192ef4f2a0e8d\""
Feb  8 23:17:40.652704 env[1313]: time="2024-02-08T23:17:40.652295670Z" level=info msg="StartContainer for \"d817550296948f4110ae280f7c8c43234972c48eba71bb6f609192ef4f2a0e8d\""
Feb  8 23:17:40.683000 systemd[1]: Started cri-containerd-d817550296948f4110ae280f7c8c43234972c48eba71bb6f609192ef4f2a0e8d.scope.
Feb  8 23:17:40.699092 env[1313]: time="2024-02-08T23:17:40.698829658Z" level=info msg="StartContainer for \"36cbd4aafb69266b0f866b534779966ffadd5b3cda3594566ed5522c5ffa3eae\" returns successfully"
Feb  8 23:17:40.756710 env[1313]: time="2024-02-08T23:17:40.756657741Z" level=info msg="StartContainer for \"d817550296948f4110ae280f7c8c43234972c48eba71bb6f609192ef4f2a0e8d\" returns successfully"
Feb  8 23:17:41.444328 kubelet[2415]: I0208 23:17:41.444282    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-22c6m" podStartSLOduration=25.444241131 podCreationTimestamp="2024-02-08 23:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:41.442329515 +0000 UTC m=+40.194810733" watchObservedRunningTime="2024-02-08 23:17:41.444241131 +0000 UTC m=+40.196722449"
Feb  8 23:17:41.474649 kubelet[2415]: I0208 23:17:41.474612    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-96nfq" podStartSLOduration=25.474570581 podCreationTimestamp="2024-02-08 23:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:41.457894643 +0000 UTC m=+40.210375861" watchObservedRunningTime="2024-02-08 23:17:41.474570581 +0000 UTC m=+40.227051899"
Feb  8 23:17:48.622758 update_engine[1302]: I0208 23:17:48.622705  1302 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb  8 23:17:48.622758 update_engine[1302]: I0208 23:17:48.622751  1302 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb  8 23:17:48.623389 update_engine[1302]: I0208 23:17:48.622905  1302 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb  8 23:17:48.623565 update_engine[1302]: I0208 23:17:48.623483  1302 omaha_request_params.cc:62] Current group set to lts
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623675  1302 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623690  1302 update_attempter.cc:643] Scheduling an action processor start.
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623713  1302 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623749  1302 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623829  1302 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb  8 23:17:48.623833 update_engine[1302]: I0208 23:17:48.623837  1302 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?>
Feb  8 23:17:48.623833 update_engine[1302]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Feb  8 23:17:48.623833 update_engine[1302]:     <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os>
Feb  8 23:17:48.623833 update_engine[1302]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{bf70853a-0472-49eb-83ef-80334378bcbc}" oem="azure" oemversion="2.6.0.2-r1" alephversion="3510.3.2" machineid="c8a3448ebd414d75af4a2b5251101cc1" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Feb  8 23:17:48.623833 update_engine[1302]:         <ping active="1"></ping>
Feb  8 23:17:48.623833 update_engine[1302]:         <updatecheck></updatecheck>
Feb  8 23:17:48.623833 update_engine[1302]:         <event eventtype="3" eventresult="2" previousversion="0.0.0.0"></event>
Feb  8 23:17:48.623833 update_engine[1302]:     </app>
Feb  8 23:17:48.623833 update_engine[1302]: </request>
Feb  8 23:17:48.624543 update_engine[1302]: I0208 23:17:48.623843  1302 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb  8 23:17:48.624848 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb  8 23:17:48.625610 update_engine[1302]: I0208 23:17:48.625417  1302 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb  8 23:17:48.625673 update_engine[1302]: I0208 23:17:48.625617  1302 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb  8 23:17:48.641629 update_engine[1302]: E0208 23:17:48.641597  1302 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb  8 23:17:48.641742 update_engine[1302]: I0208 23:17:48.641703  1302 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
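update_engine is posting its Omaha check to the literal hostname "disabled", consistent with updates being switched off via SERVER=disabled in the Flatcar update configuration; the resulting "Could not resolve host: disabled" is the intended dead end rather than a network fault. The request body is ordinary Omaha v3 XML, so the interesting fields parse out with the standard library (a trimmed copy of the logged request):

```python
import xml.etree.ElementTree as ET

request = """<?xml version="1.0" encoding="UTF-8"?>
<request protocol="3.0" version="update_engine-0.4.10" installsource="scheduler" ismachine="1">
    <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os>
    <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts"
         oem="azure" delta_okay="false">
        <ping active="1"></ping>
        <updatecheck></updatecheck>
    </app>
</request>"""

app = ET.fromstring(request).find("app")
print(app.get("appid"), app.get("version"), app.get("track"), app.get("oem"))
# {e96281a6-d1af-4bde-9a0a-97b76e56dc57} 3510.3.2 lts azure
```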
Feb  8 23:17:58.624815 update_engine[1302]: I0208 23:17:58.624755  1302 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb  8 23:17:58.625354 update_engine[1302]: I0208 23:17:58.625082  1302 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb  8 23:17:58.625354 update_engine[1302]: I0208 23:17:58.625324  1302 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb  8 23:17:58.653510 update_engine[1302]: E0208 23:17:58.653478  1302 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb  8 23:17:58.653654 update_engine[1302]: I0208 23:17:58.653587  1302 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb  8 23:18:08.623088 update_engine[1302]: I0208 23:18:08.622956  1302 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb  8 23:18:08.623720 update_engine[1302]: I0208 23:18:08.623395  1302 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb  8 23:18:08.623720 update_engine[1302]: I0208 23:18:08.623679  1302 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb  8 23:18:08.645514 update_engine[1302]: E0208 23:18:08.645478  1302 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb  8 23:18:08.645673 update_engine[1302]: I0208 23:18:08.645596  1302 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
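The fetcher's cadence is visible in the timestamps: attempts at 23:17:48, 23:17:58, 23:18:08 and 23:18:18, a fixed ~10 s apart, and after "retry 3" the OmahaRequestAction gives up and reports the error (below). A toy model of that schedule as observed here (illustrative only; the real libcurl_http_fetcher logic is C++):

```python
MAX_RETRIES = 3      # observed: "retry 1" .. "retry 3", then abort
RETRY_SECONDS = 10   # observed spacing between attempts

def attempt_times(first_s: int):
    """Yield the attempt schedule seen in the log: first try plus MAX_RETRIES retries."""
    for i in range(MAX_RETRIES + 1):
        yield first_s + i * RETRY_SECONDS

for t in attempt_times(17 * 60 + 48):        # 23:17:48 expressed as seconds past 23:00
    m, s = divmod(t, 60)
    print(f"23:{m:02d}:{s:02d}")             # 23:17:48 23:17:58 23:18:08 23:18:18
```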
Feb  8 23:18:18.624450 update_engine[1302]: I0208 23:18:18.624344  1302 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb  8 23:18:18.624877 update_engine[1302]: I0208 23:18:18.624729  1302 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb  8 23:18:18.625028 update_engine[1302]: I0208 23:18:18.625001  1302 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb  8 23:18:18.646641 update_engine[1302]: E0208 23:18:18.646595  1302 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb  8 23:18:18.646834 update_engine[1302]: I0208 23:18:18.646734  1302 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb  8 23:18:18.646834 update_engine[1302]: I0208 23:18:18.646749  1302 omaha_request_action.cc:621] Omaha request response:
Feb  8 23:18:18.646944 update_engine[1302]: E0208 23:18:18.646847  1302 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646867  1302 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646873  1302 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646879  1302 update_attempter.cc:306] Processing Done.
Feb  8 23:18:18.646944 update_engine[1302]: E0208 23:18:18.646896  1302 update_attempter.cc:619] Update failed.
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646901  1302 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646908  1302 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb  8 23:18:18.646944 update_engine[1302]: I0208 23:18:18.646914  1302 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb  8 23:18:18.647352 update_engine[1302]: I0208 23:18:18.647017  1302 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb  8 23:18:18.647352 update_engine[1302]: I0208 23:18:18.647043  1302 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb  8 23:18:18.647352 update_engine[1302]: I0208 23:18:18.647076  1302 omaha_request_action.cc:271] Request: <?xml version="1.0" encoding="UTF-8"?>
Feb  8 23:18:18.647352 update_engine[1302]: <request protocol="3.0" version="update_engine-0.4.10" updaterversion="update_engine-0.4.10" installsource="scheduler" ismachine="1">
Feb  8 23:18:18.647352 update_engine[1302]:     <os version="Chateau" platform="CoreOS" sp="3510.3.2_x86_64"></os>
Feb  8 23:18:18.647352 update_engine[1302]:     <app appid="{e96281a6-d1af-4bde-9a0a-97b76e56dc57}" version="3510.3.2" track="lts" bootid="{bf70853a-0472-49eb-83ef-80334378bcbc}" oem="azure" oemversion="2.6.0.2-r1" alephversion="3510.3.2" machineid="c8a3448ebd414d75af4a2b5251101cc1" machinealias="" lang="en-US" board="amd64-usr" hardware_class="" delta_okay="false" >
Feb  8 23:18:18.647352 update_engine[1302]:         <event eventtype="3" eventresult="0" errorcode="268437456"></event>
Feb  8 23:18:18.647352 update_engine[1302]:     </app>
Feb  8 23:18:18.647352 update_engine[1302]: </request>
Feb  8 23:18:18.647352 update_engine[1302]: I0208 23:18:18.647295  1302 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb  8 23:18:18.647866 update_engine[1302]: I0208 23:18:18.647500  1302 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb  8 23:18:18.647866 update_engine[1302]: I0208 23:18:18.647722  1302 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb  8 23:18:18.647992 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb  8 23:18:18.663286 update_engine[1302]: E0208 23:18:18.663045  1302 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663168  1302 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663177  1302 omaha_request_action.cc:621] Omaha request response:
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663182  1302 action_processor.cc:65] ActionProcessor::ActionComplete: OmahaRequestAction action succeeded.
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663185  1302 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663188  1302 update_attempter.cc:306] Processing Done.
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663193  1302 update_attempter.cc:310] Error event sent.
Feb  8 23:18:18.663286 update_engine[1302]: I0208 23:18:18.663201  1302 update_check_scheduler.cc:74] Next update check in 44m21s
Feb  8 23:18:18.663781 locksmithd[1389]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb  8 23:24:30.493325 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.12.6:57104.service.
Feb  8 23:24:31.113513 sshd[3799]: Accepted publickey for core from 10.200.12.6 port 57104 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:31.115248 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:31.121587 systemd[1]: Started session-8.scope.
Feb  8 23:24:31.122219 systemd-logind[1301]: New session 8 of user core.
Feb  8 23:24:31.727722 sshd[3799]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:31.730785 systemd[1]: sshd@5-10.200.8.39:22-10.200.12.6:57104.service: Deactivated successfully.
Feb  8 23:24:31.731747 systemd[1]: session-8.scope: Deactivated successfully.
Feb  8 23:24:31.732470 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit.
Feb  8 23:24:31.733310 systemd-logind[1301]: Removed session 8.
Feb  8 23:24:36.833926 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.12.6:57108.service.
Feb  8 23:24:37.454042 sshd[3816]: Accepted publickey for core from 10.200.12.6 port 57108 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:37.455473 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:37.460718 systemd-logind[1301]: New session 9 of user core.
Feb  8 23:24:37.461229 systemd[1]: Started session-9.scope.
Feb  8 23:24:37.944621 sshd[3816]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:37.948029 systemd[1]: sshd@6-10.200.8.39:22-10.200.12.6:57108.service: Deactivated successfully.
Feb  8 23:24:37.949134 systemd[1]: session-9.scope: Deactivated successfully.
Feb  8 23:24:37.950127 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit.
Feb  8 23:24:37.951154 systemd-logind[1301]: Removed session 9.
Feb  8 23:24:43.048894 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.12.6:36548.service.
Feb  8 23:24:43.663143 sshd[3830]: Accepted publickey for core from 10.200.12.6 port 36548 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:43.664654 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:43.669853 systemd[1]: Started session-10.scope.
Feb  8 23:24:43.670343 systemd-logind[1301]: New session 10 of user core.
Feb  8 23:24:44.156308 sshd[3830]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:44.159169 systemd[1]: sshd@7-10.200.8.39:22-10.200.12.6:36548.service: Deactivated successfully.
Feb  8 23:24:44.160159 systemd[1]: session-10.scope: Deactivated successfully.
Feb  8 23:24:44.160859 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit.
Feb  8 23:24:44.161653 systemd-logind[1301]: Removed session 10.
Feb  8 23:24:49.263225 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.12.6:49830.service.
Feb  8 23:24:49.927578 sshd[3846]: Accepted publickey for core from 10.200.12.6 port 49830 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:49.928942 sshd[3846]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:49.933466 systemd[1]: Started session-11.scope.
Feb  8 23:24:49.933963 systemd-logind[1301]: New session 11 of user core.
Feb  8 23:24:50.432967 sshd[3846]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:50.435780 systemd[1]: sshd@8-10.200.8.39:22-10.200.12.6:49830.service: Deactivated successfully.
Feb  8 23:24:50.436678 systemd[1]: session-11.scope: Deactivated successfully.
Feb  8 23:24:50.437455 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit.
Feb  8 23:24:50.438241 systemd-logind[1301]: Removed session 11.
Feb  8 23:24:55.539033 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.12.6:49844.service.
Feb  8 23:24:56.165107 sshd[3859]: Accepted publickey for core from 10.200.12.6 port 49844 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:56.166710 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:56.171125 systemd-logind[1301]: New session 12 of user core.
Feb  8 23:24:56.171710 systemd[1]: Started session-12.scope.
Feb  8 23:24:56.657896 sshd[3859]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:56.661279 systemd[1]: sshd@9-10.200.8.39:22-10.200.12.6:49844.service: Deactivated successfully.
Feb  8 23:24:56.662453 systemd[1]: session-12.scope: Deactivated successfully.
Feb  8 23:24:56.663198 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit.
Feb  8 23:24:56.663997 systemd-logind[1301]: Removed session 12.
Feb  8 23:24:56.764809 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.12.6:49850.service.
Feb  8 23:24:57.384121 sshd[3872]: Accepted publickey for core from 10.200.12.6 port 49850 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:57.385793 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:57.390848 systemd[1]: Started session-13.scope.
Feb  8 23:24:57.391467 systemd-logind[1301]: New session 13 of user core.
Feb  8 23:24:58.529579 sshd[3872]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:58.532899 systemd[1]: sshd@10-10.200.8.39:22-10.200.12.6:49850.service: Deactivated successfully.
Feb  8 23:24:58.534004 systemd[1]: session-13.scope: Deactivated successfully.
Feb  8 23:24:58.534849 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit.
Feb  8 23:24:58.535898 systemd-logind[1301]: Removed session 13.
Feb  8 23:24:58.633771 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.12.6:52424.service.
Feb  8 23:24:59.254628 sshd[3883]: Accepted publickey for core from 10.200.12.6 port 52424 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:24:59.256312 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:24:59.261142 systemd-logind[1301]: New session 14 of user core.
Feb  8 23:24:59.261323 systemd[1]: Started session-14.scope.
Feb  8 23:24:59.747313 sshd[3883]: pam_unix(sshd:session): session closed for user core
Feb  8 23:24:59.750494 systemd[1]: sshd@11-10.200.8.39:22-10.200.12.6:52424.service: Deactivated successfully.
Feb  8 23:24:59.751476 systemd[1]: session-14.scope: Deactivated successfully.
Feb  8 23:24:59.752178 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit.
Feb  8 23:24:59.753125 systemd-logind[1301]: Removed session 14.
Feb  8 23:25:04.855871 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.12.6:52434.service.
Feb  8 23:25:05.553613 sshd[3896]: Accepted publickey for core from 10.200.12.6 port 52434 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:05.555218 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:05.560963 systemd-logind[1301]: New session 15 of user core.
Feb  8 23:25:05.561542 systemd[1]: Started session-15.scope.
Feb  8 23:25:06.049374 sshd[3896]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:06.052720 systemd[1]: sshd@12-10.200.8.39:22-10.200.12.6:52434.service: Deactivated successfully.
Feb  8 23:25:06.053720 systemd[1]: session-15.scope: Deactivated successfully.
Feb  8 23:25:06.054407 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit.
Feb  8 23:25:06.055205 systemd-logind[1301]: Removed session 15.
Feb  8 23:25:06.154320 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.12.6:52436.service.
Feb  8 23:25:06.771297 sshd[3908]: Accepted publickey for core from 10.200.12.6 port 52436 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:06.772889 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:06.778457 systemd[1]: Started session-16.scope.
Feb  8 23:25:06.779118 systemd-logind[1301]: New session 16 of user core.
Feb  8 23:25:07.331908 sshd[3908]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:07.335293 systemd[1]: sshd@13-10.200.8.39:22-10.200.12.6:52436.service: Deactivated successfully.
Feb  8 23:25:07.336412 systemd[1]: session-16.scope: Deactivated successfully.
Feb  8 23:25:07.337296 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit.
Feb  8 23:25:07.338299 systemd-logind[1301]: Removed session 16.
Feb  8 23:25:07.435452 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.12.6:50002.service.
Feb  8 23:25:08.052800 sshd[3917]: Accepted publickey for core from 10.200.12.6 port 50002 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:08.054482 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:08.060366 systemd-logind[1301]: New session 17 of user core.
Feb  8 23:25:08.061190 systemd[1]: Started session-17.scope.
Feb  8 23:25:09.478488 sshd[3917]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:09.482085 systemd[1]: sshd@14-10.200.8.39:22-10.200.12.6:50002.service: Deactivated successfully.
Feb  8 23:25:09.483614 systemd[1]: session-17.scope: Deactivated successfully.
Feb  8 23:25:09.483809 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit.
Feb  8 23:25:09.485082 systemd-logind[1301]: Removed session 17.
Feb  8 23:25:09.590066 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.12.6:50016.service.
Feb  8 23:25:10.219771 sshd[3933]: Accepted publickey for core from 10.200.12.6 port 50016 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:10.229780 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:10.236121 systemd-logind[1301]: New session 18 of user core.
Feb  8 23:25:10.237136 systemd[1]: Started session-18.scope.
Feb  8 23:25:10.896242 sshd[3933]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:10.899744 systemd[1]: sshd@15-10.200.8.39:22-10.200.12.6:50016.service: Deactivated successfully.
Feb  8 23:25:10.900950 systemd[1]: session-18.scope: Deactivated successfully.
Feb  8 23:25:10.901839 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit.
Feb  8 23:25:10.902916 systemd-logind[1301]: Removed session 18.
Feb  8 23:25:11.003660 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.12.6:50030.service.
Feb  8 23:25:11.628153 sshd[3943]: Accepted publickey for core from 10.200.12.6 port 50030 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:11.629768 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:11.634881 systemd[1]: Started session-19.scope.
Feb  8 23:25:11.635527 systemd-logind[1301]: New session 19 of user core.
Feb  8 23:25:12.120171 sshd[3943]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:12.123282 systemd[1]: sshd@16-10.200.8.39:22-10.200.12.6:50030.service: Deactivated successfully.
Feb  8 23:25:12.124247 systemd[1]: session-19.scope: Deactivated successfully.
Feb  8 23:25:12.124915 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit.
Feb  8 23:25:12.125764 systemd-logind[1301]: Removed session 19.
Feb  8 23:25:17.224824 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.12.6:53720.service.
Feb  8 23:25:17.844911 sshd[3961]: Accepted publickey for core from 10.200.12.6 port 53720 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:17.846408 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:17.852703 systemd[1]: Started session-20.scope.
Feb  8 23:25:17.854524 systemd-logind[1301]: New session 20 of user core.
Feb  8 23:25:18.336221 sshd[3961]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:18.339656 systemd[1]: sshd@17-10.200.8.39:22-10.200.12.6:53720.service: Deactivated successfully.
Feb  8 23:25:18.340790 systemd[1]: session-20.scope: Deactivated successfully.
Feb  8 23:25:18.341813 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit.
Feb  8 23:25:18.342786 systemd-logind[1301]: Removed session 20.
Feb  8 23:25:23.440450 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.12.6:53730.service.
Feb  8 23:25:24.061003 sshd[3974]: Accepted publickey for core from 10.200.12.6 port 53730 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:24.062541 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:24.069598 systemd[1]: Started session-21.scope.
Feb  8 23:25:24.071124 systemd-logind[1301]: New session 21 of user core.
Feb  8 23:25:24.559912 sshd[3974]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:24.562799 systemd[1]: sshd@18-10.200.8.39:22-10.200.12.6:53730.service: Deactivated successfully.
Feb  8 23:25:24.563809 systemd[1]: session-21.scope: Deactivated successfully.
Feb  8 23:25:24.564677 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit.
Feb  8 23:25:24.565489 systemd-logind[1301]: Removed session 21.
Feb  8 23:25:29.667325 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.12.6:35886.service.
Feb  8 23:25:30.289464 sshd[3987]: Accepted publickey for core from 10.200.12.6 port 35886 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:30.290095 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:30.295567 systemd[1]: Started session-22.scope.
Feb  8 23:25:30.296094 systemd-logind[1301]: New session 22 of user core.
Feb  8 23:25:30.781243 sshd[3987]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:30.784921 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit.
Feb  8 23:25:30.785191 systemd[1]: sshd@19-10.200.8.39:22-10.200.12.6:35886.service: Deactivated successfully.
Feb  8 23:25:30.786391 systemd[1]: session-22.scope: Deactivated successfully.
Feb  8 23:25:30.787509 systemd-logind[1301]: Removed session 22.
Feb  8 23:25:30.886347 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.12.6:35900.service.
Feb  8 23:25:31.506104 sshd[4000]: Accepted publickey for core from 10.200.12.6 port 35900 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:31.508124 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:31.513455 systemd[1]: Started session-23.scope.
Feb  8 23:25:31.514085 systemd-logind[1301]: New session 23 of user core.
Feb  8 23:25:33.135811 systemd[1]: run-containerd-runc-k8s.io-79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264-runc.n1dN7I.mount: Deactivated successfully.
Feb  8 23:25:33.147583 env[1313]: time="2024-02-08T23:25:33.147531087Z" level=info msg="StopContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" with timeout 30 (s)"
Feb  8 23:25:33.148114 env[1313]: time="2024-02-08T23:25:33.148079386Z" level=info msg="Stop container \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" with signal terminated"
Feb  8 23:25:33.170084 env[1313]: time="2024-02-08T23:25:33.169983662Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  8 23:25:33.173488 systemd[1]: cri-containerd-11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323.scope: Deactivated successfully.
Feb  8 23:25:33.187157 env[1313]: time="2024-02-08T23:25:33.186036345Z" level=info msg="StopContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" with timeout 2 (s)"
Feb  8 23:25:33.187504 env[1313]: time="2024-02-08T23:25:33.187476643Z" level=info msg="Stop container \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" with signal terminated"
Feb  8 23:25:33.205773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323-rootfs.mount: Deactivated successfully.
Feb  8 23:25:33.207677 systemd-networkd[1456]: lxc_health: Link DOWN
Feb  8 23:25:33.207683 systemd-networkd[1456]: lxc_health: Lost carrier
Feb  8 23:25:33.230448 systemd[1]: cri-containerd-79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264.scope: Deactivated successfully.
Feb  8 23:25:33.230717 systemd[1]: cri-containerd-79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264.scope: Consumed 8.074s CPU time.
Feb  8 23:25:33.252967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264-rootfs.mount: Deactivated successfully.
Feb  8 23:25:33.296151 env[1313]: time="2024-02-08T23:25:33.295981223Z" level=info msg="shim disconnected" id=11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323
Feb  8 23:25:33.296540 env[1313]: time="2024-02-08T23:25:33.296513423Z" level=warning msg="cleaning up after shim disconnected" id=11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323 namespace=k8s.io
Feb  8 23:25:33.296540 env[1313]: time="2024-02-08T23:25:33.296533223Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:33.296751 env[1313]: time="2024-02-08T23:25:33.296479523Z" level=info msg="shim disconnected" id=79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264
Feb  8 23:25:33.296836 env[1313]: time="2024-02-08T23:25:33.296819422Z" level=warning msg="cleaning up after shim disconnected" id=79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264 namespace=k8s.io
Feb  8 23:25:33.296910 env[1313]: time="2024-02-08T23:25:33.296895222Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:33.306412 env[1313]: time="2024-02-08T23:25:33.306377312Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4071 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:33.308612 env[1313]: time="2024-02-08T23:25:33.308580209Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4072 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:33.316666 env[1313]: time="2024-02-08T23:25:33.316630800Z" level=info msg="StopContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" returns successfully"
Feb  8 23:25:33.317282 env[1313]: time="2024-02-08T23:25:33.317252800Z" level=info msg="StopPodSandbox for \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\""
Feb  8 23:25:33.317382 env[1313]: time="2024-02-08T23:25:33.317332800Z" level=info msg="Container to stop \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.320412 env[1313]: time="2024-02-08T23:25:33.319943097Z" level=info msg="StopContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" returns successfully"
Feb  8 23:25:33.319529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc-shm.mount: Deactivated successfully.
Feb  8 23:25:33.320863 env[1313]: time="2024-02-08T23:25:33.320836696Z" level=info msg="StopPodSandbox for \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\""
Feb  8 23:25:33.321058 env[1313]: time="2024-02-08T23:25:33.321016896Z" level=info msg="Container to stop \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.321168 env[1313]: time="2024-02-08T23:25:33.321147495Z" level=info msg="Container to stop \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.321471 env[1313]: time="2024-02-08T23:25:33.321436395Z" level=info msg="Container to stop \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.321588 env[1313]: time="2024-02-08T23:25:33.321567595Z" level=info msg="Container to stop \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.321691 env[1313]: time="2024-02-08T23:25:33.321670295Z" level=info msg="Container to stop \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:33.329363 systemd[1]: cri-containerd-253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc.scope: Deactivated successfully.
Feb  8 23:25:33.338431 systemd[1]: cri-containerd-58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb.scope: Deactivated successfully.
Feb  8 23:25:33.381744 env[1313]: time="2024-02-08T23:25:33.381690829Z" level=info msg="shim disconnected" id=253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc
Feb  8 23:25:33.382173 env[1313]: time="2024-02-08T23:25:33.382138028Z" level=warning msg="cleaning up after shim disconnected" id=253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc namespace=k8s.io
Feb  8 23:25:33.382310 env[1313]: time="2024-02-08T23:25:33.382292528Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:33.382475 env[1313]: time="2024-02-08T23:25:33.382194528Z" level=info msg="shim disconnected" id=58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb
Feb  8 23:25:33.382566 env[1313]: time="2024-02-08T23:25:33.382551928Z" level=warning msg="cleaning up after shim disconnected" id=58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb namespace=k8s.io
Feb  8 23:25:33.382651 env[1313]: time="2024-02-08T23:25:33.382637928Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:33.395424 env[1313]: time="2024-02-08T23:25:33.395309514Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4138 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:33.396810 env[1313]: time="2024-02-08T23:25:33.396779412Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4137 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:33.397118 env[1313]: time="2024-02-08T23:25:33.397085912Z" level=info msg="TearDown network for sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" successfully"
Feb  8 23:25:33.397194 env[1313]: time="2024-02-08T23:25:33.397119312Z" level=info msg="StopPodSandbox for \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" returns successfully"
Feb  8 23:25:33.397393 env[1313]: time="2024-02-08T23:25:33.397369911Z" level=info msg="TearDown network for sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" successfully"
Feb  8 23:25:33.397504 env[1313]: time="2024-02-08T23:25:33.397484911Z" level=info msg="StopPodSandbox for \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" returns successfully"
Feb  8 23:25:33.424491 kubelet[2415]: I0208 23:25:33.424464    2415 scope.go:117] "RemoveContainer" containerID="79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264"
Feb  8 23:25:33.425985 env[1313]: time="2024-02-08T23:25:33.425950480Z" level=info msg="RemoveContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\""
Feb  8 23:25:33.438597 env[1313]: time="2024-02-08T23:25:33.438561266Z" level=info msg="RemoveContainer for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" returns successfully"
Feb  8 23:25:33.438790 kubelet[2415]: I0208 23:25:33.438767    2415 scope.go:117] "RemoveContainer" containerID="00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25"
Feb  8 23:25:33.439763 env[1313]: time="2024-02-08T23:25:33.439734965Z" level=info msg="RemoveContainer for \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\""
Feb  8 23:25:33.450387 env[1313]: time="2024-02-08T23:25:33.450355153Z" level=info msg="RemoveContainer for \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\" returns successfully"
Feb  8 23:25:33.450549 kubelet[2415]: I0208 23:25:33.450531    2415 scope.go:117] "RemoveContainer" containerID="755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7"
Feb  8 23:25:33.451481 env[1313]: time="2024-02-08T23:25:33.451454452Z" level=info msg="RemoveContainer for \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\""
Feb  8 23:25:33.460161 env[1313]: time="2024-02-08T23:25:33.460133442Z" level=info msg="RemoveContainer for \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\" returns successfully"
Feb  8 23:25:33.460289 kubelet[2415]: I0208 23:25:33.460269    2415 scope.go:117] "RemoveContainer" containerID="9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b"
Feb  8 23:25:33.461296 env[1313]: time="2024-02-08T23:25:33.461268541Z" level=info msg="RemoveContainer for \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\""
Feb  8 23:25:33.469596 env[1313]: time="2024-02-08T23:25:33.469564032Z" level=info msg="RemoveContainer for \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\" returns successfully"
Feb  8 23:25:33.469740 kubelet[2415]: I0208 23:25:33.469716    2415 scope.go:117] "RemoveContainer" containerID="19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c"
Feb  8 23:25:33.470667 env[1313]: time="2024-02-08T23:25:33.470640530Z" level=info msg="RemoveContainer for \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\""
Feb  8 23:25:33.478588 env[1313]: time="2024-02-08T23:25:33.478554522Z" level=info msg="RemoveContainer for \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\" returns successfully"
Feb  8 23:25:33.478761 kubelet[2415]: I0208 23:25:33.478741    2415 scope.go:117] "RemoveContainer" containerID="79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264"
Feb  8 23:25:33.478991 env[1313]: time="2024-02-08T23:25:33.478924821Z" level=error msg="ContainerStatus for \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\": not found"
Feb  8 23:25:33.479152 kubelet[2415]: E0208 23:25:33.479133    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\": not found" containerID="79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264"
Feb  8 23:25:33.479249 kubelet[2415]: I0208 23:25:33.479232    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264"} err="failed to get container status \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\": rpc error: code = NotFound desc = an error occurred when try to find container \"79c0b37855076cff3bd7d64b959117bb7fe2deb00ba84d693869bc5eb71c5264\": not found"
Feb  8 23:25:33.479320 kubelet[2415]: I0208 23:25:33.479254    2415 scope.go:117] "RemoveContainer" containerID="00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25"
Feb  8 23:25:33.479490 env[1313]: time="2024-02-08T23:25:33.479444321Z" level=error msg="ContainerStatus for \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\": not found"
Feb  8 23:25:33.479596 kubelet[2415]: E0208 23:25:33.479577    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\": not found" containerID="00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25"
Feb  8 23:25:33.479665 kubelet[2415]: I0208 23:25:33.479611    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25"} err="failed to get container status \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\": rpc error: code = NotFound desc = an error occurred when try to find container \"00653a6f5299ae68b25699947af1995bfe2f9e37f0e790e35c19984818353b25\": not found"
Feb  8 23:25:33.479665 kubelet[2415]: I0208 23:25:33.479627    2415 scope.go:117] "RemoveContainer" containerID="755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7"
Feb  8 23:25:33.479838 env[1313]: time="2024-02-08T23:25:33.479794220Z" level=error msg="ContainerStatus for \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\": not found"
Feb  8 23:25:33.479967 kubelet[2415]: E0208 23:25:33.479921    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\": not found" containerID="755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7"
Feb  8 23:25:33.479967 kubelet[2415]: I0208 23:25:33.479952    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7"} err="failed to get container status \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"755347f441a241c44ea62282183fa10860144914c173f1b493498d7a5383f5d7\": not found"
Feb  8 23:25:33.479967 kubelet[2415]: I0208 23:25:33.479964    2415 scope.go:117] "RemoveContainer" containerID="9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b"
Feb  8 23:25:33.481398 env[1313]: time="2024-02-08T23:25:33.480428320Z" level=error msg="ContainerStatus for \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\": not found"
Feb  8 23:25:33.481977 kubelet[2415]: E0208 23:25:33.481476    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\": not found" containerID="9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b"
Feb  8 23:25:33.481977 kubelet[2415]: I0208 23:25:33.481544    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b"} err="failed to get container status \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d2b5300fc39ed38210c18ad648a7f4d5323734de77a0007a4fd81c217c13c2b\": not found"
Feb  8 23:25:33.481977 kubelet[2415]: I0208 23:25:33.481559    2415 scope.go:117] "RemoveContainer" containerID="19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c"
Feb  8 23:25:33.482234 env[1313]: time="2024-02-08T23:25:33.481760218Z" level=error msg="ContainerStatus for \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\": not found"
Feb  8 23:25:33.482289 kubelet[2415]: E0208 23:25:33.482247    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\": not found" containerID="19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c"
Feb  8 23:25:33.482289 kubelet[2415]: I0208 23:25:33.482282    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c"} err="failed to get container status \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\": rpc error: code = NotFound desc = an error occurred when try to find container \"19e5ea62c4334273886c55b50a43876266334ad7c9da8159106ecb9be8c2b00c\": not found"
Feb  8 23:25:33.482378 kubelet[2415]: I0208 23:25:33.482311    2415 scope.go:117] "RemoveContainer" containerID="11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323"
Feb  8 23:25:33.483388 env[1313]: time="2024-02-08T23:25:33.483363216Z" level=info msg="RemoveContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\""
Feb  8 23:25:33.491402 env[1313]: time="2024-02-08T23:25:33.491371808Z" level=info msg="RemoveContainer for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" returns successfully"
Feb  8 23:25:33.491538 kubelet[2415]: I0208 23:25:33.491517    2415 scope.go:117] "RemoveContainer" containerID="11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323"
Feb  8 23:25:33.491765 env[1313]: time="2024-02-08T23:25:33.491718507Z" level=error msg="ContainerStatus for \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\": not found"
Feb  8 23:25:33.491876 kubelet[2415]: E0208 23:25:33.491858    2415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\": not found" containerID="11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323"
Feb  8 23:25:33.491961 kubelet[2415]: I0208 23:25:33.491892    2415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323"} err="failed to get container status \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\": rpc error: code = NotFound desc = an error occurred when try to find container \"11e3cfd20d6a9afb2c7ba7819dcdc1308a1f14259d0864e59f52016d8aafa323\": not found"
Feb  8 23:25:33.540588 kubelet[2415]: I0208 23:25:33.540547    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-etc-cni-netd\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540588 kubelet[2415]: I0208 23:25:33.540598    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cni-path\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540631    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-xtables-lock\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540661    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-run\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540689    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-lib-modules\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540715    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-bpf-maps\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540745    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-cgroup\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.540865 kubelet[2415]: I0208 23:25:33.540784    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e65bf645-6a14-478e-b24a-92e612e26db3-clustermesh-secrets\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.540814    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-hostproc\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.540854    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6npcz\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-kube-api-access-6npcz\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.540888    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkqdh\" (UniqueName: \"kubernetes.io/projected/efa2ee26-01ef-4375-8f97-1e8a8e275b08-kube-api-access-pkqdh\") pod \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\" (UID: \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.540965    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-net\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.541000    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-hubble-tls\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543173 kubelet[2415]: I0208 23:25:33.541068    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-config-path\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543592 kubelet[2415]: I0208 23:25:33.541102    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-kernel\") pod \"e65bf645-6a14-478e-b24a-92e612e26db3\" (UID: \"e65bf645-6a14-478e-b24a-92e612e26db3\") "
Feb  8 23:25:33.543592 kubelet[2415]: I0208 23:25:33.541140    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efa2ee26-01ef-4375-8f97-1e8a8e275b08-cilium-config-path\") pod \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\" (UID: \"efa2ee26-01ef-4375-8f97-1e8a8e275b08\") "
Feb  8 23:25:33.543592 kubelet[2415]: I0208 23:25:33.541175    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543592 kubelet[2415]: I0208 23:25:33.541132    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543592 kubelet[2415]: I0208 23:25:33.541234    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cni-path" (OuterVolumeSpecName: "cni-path") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543789 kubelet[2415]: I0208 23:25:33.541280    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543789 kubelet[2415]: I0208 23:25:33.541305    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543789 kubelet[2415]: I0208 23:25:33.541344    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543789 kubelet[2415]: I0208 23:25:33.541370    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.543789 kubelet[2415]: I0208 23:25:33.541396    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.544002 kubelet[2415]: I0208 23:25:33.541441    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-hostproc" (OuterVolumeSpecName: "hostproc") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.544930 kubelet[2415]: I0208 23:25:33.544894    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:33.548082 kubelet[2415]: I0208 23:25:33.548042    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  8 23:25:33.548941 kubelet[2415]: I0208 23:25:33.548900    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efa2ee26-01ef-4375-8f97-1e8a8e275b08-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "efa2ee26-01ef-4375-8f97-1e8a8e275b08" (UID: "efa2ee26-01ef-4375-8f97-1e8a8e275b08"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  8 23:25:33.549165 kubelet[2415]: I0208 23:25:33.549145    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65bf645-6a14-478e-b24a-92e612e26db3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:25:33.549619 kubelet[2415]: I0208 23:25:33.549593    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-kube-api-access-6npcz" (OuterVolumeSpecName: "kube-api-access-6npcz") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "kube-api-access-6npcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:25:33.553235 kubelet[2415]: I0208 23:25:33.552238    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa2ee26-01ef-4375-8f97-1e8a8e275b08-kube-api-access-pkqdh" (OuterVolumeSpecName: "kube-api-access-pkqdh") pod "efa2ee26-01ef-4375-8f97-1e8a8e275b08" (UID: "efa2ee26-01ef-4375-8f97-1e8a8e275b08"). InnerVolumeSpecName "kube-api-access-pkqdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:25:33.553745 kubelet[2415]: I0208 23:25:33.553711    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e65bf645-6a14-478e-b24a-92e612e26db3" (UID: "e65bf645-6a14-478e-b24a-92e612e26db3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:25:33.642259 kubelet[2415]: I0208 23:25:33.642182    2415 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cni-path\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642259 kubelet[2415]: I0208 23:25:33.642271    2415 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-xtables-lock\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642318    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-run\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642359    2415 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-lib-modules\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642399    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-cgroup\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642447    2415 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-bpf-maps\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642463    2415 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e65bf645-6a14-478e-b24a-92e612e26db3-clustermesh-secrets\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642479    2415 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-hostproc\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642498    2415 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6npcz\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-kube-api-access-6npcz\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642592 kubelet[2415]: I0208 23:25:33.642515    2415 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pkqdh\" (UniqueName: \"kubernetes.io/projected/efa2ee26-01ef-4375-8f97-1e8a8e275b08-kube-api-access-pkqdh\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642532    2415 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-net\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642550    2415 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e65bf645-6a14-478e-b24a-92e612e26db3-hubble-tls\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642571    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e65bf645-6a14-478e-b24a-92e612e26db3-cilium-config-path\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642590    2415 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642613    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efa2ee26-01ef-4375-8f97-1e8a8e275b08-cilium-config-path\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.642886 kubelet[2415]: I0208 23:25:33.642634    2415 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e65bf645-6a14-478e-b24a-92e612e26db3-etc-cni-netd\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:33.730915 systemd[1]: Removed slice kubepods-burstable-pode65bf645_6a14_478e_b24a_92e612e26db3.slice.
Feb  8 23:25:33.731105 systemd[1]: kubepods-burstable-pode65bf645_6a14_478e_b24a_92e612e26db3.slice: Consumed 8.188s CPU time.
Feb  8 23:25:33.734769 systemd[1]: Removed slice kubepods-besteffort-podefa2ee26_01ef_4375_8f97_1e8a8e275b08.slice.
Feb  8 23:25:34.132670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc-rootfs.mount: Deactivated successfully.
Feb  8 23:25:34.133068 systemd[1]: var-lib-kubelet-pods-efa2ee26\x2d01ef\x2d4375\x2d8f97\x2d1e8a8e275b08-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkqdh.mount: Deactivated successfully.
Feb  8 23:25:34.133271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb-rootfs.mount: Deactivated successfully.
Feb  8 23:25:34.133433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb-shm.mount: Deactivated successfully.
Feb  8 23:25:34.133593 systemd[1]: var-lib-kubelet-pods-e65bf645\x2d6a14\x2d478e\x2db24a\x2d92e612e26db3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6npcz.mount: Deactivated successfully.
Feb  8 23:25:34.133676 systemd[1]: var-lib-kubelet-pods-e65bf645\x2d6a14\x2d478e\x2db24a\x2d92e612e26db3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:25:34.133760 systemd[1]: var-lib-kubelet-pods-e65bf645\x2d6a14\x2d478e\x2db24a\x2d92e612e26db3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  8 23:25:34.297558 kubelet[2415]: I0208 23:25:34.297518    2415 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" path="/var/lib/kubelet/pods/e65bf645-6a14-478e-b24a-92e612e26db3/volumes"
Feb  8 23:25:34.298351 kubelet[2415]: I0208 23:25:34.298320    2415 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="efa2ee26-01ef-4375-8f97-1e8a8e275b08" path="/var/lib/kubelet/pods/efa2ee26-01ef-4375-8f97-1e8a8e275b08/volumes"
Feb  8 23:25:35.186101 sshd[4000]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:35.189857 systemd[1]: sshd@20-10.200.8.39:22-10.200.12.6:35900.service: Deactivated successfully.
Feb  8 23:25:35.190985 systemd[1]: session-23.scope: Deactivated successfully.
Feb  8 23:25:35.191849 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit.
Feb  8 23:25:35.192780 systemd-logind[1301]: Removed session 23.
Feb  8 23:25:35.292822 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.12.6:35904.service.
Feb  8 23:25:35.953629 sshd[4174]: Accepted publickey for core from 10.200.12.6 port 35904 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:35.954979 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:35.959968 systemd-logind[1301]: New session 24 of user core.
Feb  8 23:25:35.960548 systemd[1]: Started session-24.scope.
Feb  8 23:25:36.850108 kubelet[2415]: I0208 23:25:36.850074    2415 topology_manager.go:215] "Topology Admit Handler" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" podNamespace="kube-system" podName="cilium-7t4qd"
Feb  8 23:25:36.850672 kubelet[2415]: E0208 23:25:36.850637    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="mount-cgroup"
Feb  8 23:25:36.850818 kubelet[2415]: E0208 23:25:36.850804    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="apply-sysctl-overwrites"
Feb  8 23:25:36.850931 kubelet[2415]: E0208 23:25:36.850918    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="mount-bpf-fs"
Feb  8 23:25:36.851039 kubelet[2415]: E0208 23:25:36.851028    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="efa2ee26-01ef-4375-8f97-1e8a8e275b08" containerName="cilium-operator"
Feb  8 23:25:36.851149 kubelet[2415]: E0208 23:25:36.851138    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="clean-cilium-state"
Feb  8 23:25:36.851245 kubelet[2415]: E0208 23:25:36.851234    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="cilium-agent"
Feb  8 23:25:36.851518 kubelet[2415]: I0208 23:25:36.851500    2415 memory_manager.go:346] "RemoveStaleState removing state" podUID="e65bf645-6a14-478e-b24a-92e612e26db3" containerName="cilium-agent"
Feb  8 23:25:36.851647 kubelet[2415]: I0208 23:25:36.851634    2415 memory_manager.go:346] "RemoveStaleState removing state" podUID="efa2ee26-01ef-4375-8f97-1e8a8e275b08" containerName="cilium-operator"
Feb  8 23:25:36.858793 systemd[1]: Created slice kubepods-burstable-pod17f35a04_9ca9_4269_80db_6c942c5edee6.slice.
Feb  8 23:25:36.964998 sshd[4174]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:36.967390 kubelet[2415]: I0208 23:25:36.967353    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-run\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967546 kubelet[2415]: I0208 23:25:36.967417    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-bpf-maps\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967546 kubelet[2415]: I0208 23:25:36.967456    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-lib-modules\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967546 kubelet[2415]: I0208 23:25:36.967488    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-xtables-lock\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967546 kubelet[2415]: I0208 23:25:36.967521    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmn2\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-kube-api-access-jwmn2\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967553    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-cgroup\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967592    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-etc-cni-netd\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967633    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-clustermesh-secrets\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967673    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-kernel\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967711    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cni-path\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.967773 kubelet[2415]: I0208 23:25:36.967747    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-ipsec-secrets\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.968106 kubelet[2415]: I0208 23:25:36.967787    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-config-path\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.968106 kubelet[2415]: I0208 23:25:36.967824    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-net\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.968106 kubelet[2415]: I0208 23:25:36.967857    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-hubble-tls\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.968106 kubelet[2415]: I0208 23:25:36.967893    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-hostproc\") pod \"cilium-7t4qd\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") " pod="kube-system/cilium-7t4qd"
Feb  8 23:25:36.970578 systemd[1]: sshd@21-10.200.8.39:22-10.200.12.6:35904.service: Deactivated successfully.
Feb  8 23:25:36.972109 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit.
Feb  8 23:25:36.972171 systemd[1]: session-24.scope: Deactivated successfully.
Feb  8 23:25:36.973577 systemd-logind[1301]: Removed session 24.
Feb  8 23:25:37.075480 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.12.6:51232.service.
Feb  8 23:25:37.168659 env[1313]: time="2024-02-08T23:25:37.167538054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7t4qd,Uid:17f35a04-9ca9-4269-80db-6c942c5edee6,Namespace:kube-system,Attempt:0,}"
Feb  8 23:25:37.217045 env[1313]: time="2024-02-08T23:25:37.216982314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:25:37.217273 env[1313]: time="2024-02-08T23:25:37.217018914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:25:37.217273 env[1313]: time="2024-02-08T23:25:37.217032814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:25:37.217430 env[1313]: time="2024-02-08T23:25:37.217359514Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240 pid=4199 runtime=io.containerd.runc.v2
Feb  8 23:25:37.234736 systemd[1]: Started cri-containerd-c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240.scope.
Feb  8 23:25:37.262296 env[1313]: time="2024-02-08T23:25:37.262254778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7t4qd,Uid:17f35a04-9ca9-4269-80db-6c942c5edee6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\""
Feb  8 23:25:37.265795 env[1313]: time="2024-02-08T23:25:37.265756475Z" level=info msg="CreateContainer within sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:25:37.310339 env[1313]: time="2024-02-08T23:25:37.310288639Z" level=info msg="CreateContainer within sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\""
Feb  8 23:25:37.312761 env[1313]: time="2024-02-08T23:25:37.310973739Z" level=info msg="StartContainer for \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\""
Feb  8 23:25:37.327301 systemd[1]: Started cri-containerd-73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb.scope.
Feb  8 23:25:37.340850 systemd[1]: cri-containerd-73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb.scope: Deactivated successfully.
Feb  8 23:25:37.396626 env[1313]: time="2024-02-08T23:25:37.396571470Z" level=info msg="shim disconnected" id=73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb
Feb  8 23:25:37.396626 env[1313]: time="2024-02-08T23:25:37.396626770Z" level=warning msg="cleaning up after shim disconnected" id=73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb namespace=k8s.io
Feb  8 23:25:37.396975 env[1313]: time="2024-02-08T23:25:37.396638670Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:37.404375 env[1313]: time="2024-02-08T23:25:37.404337064Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4263 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:25:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb  8 23:25:37.404840 env[1313]: time="2024-02-08T23:25:37.404742663Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed"
Feb  8 23:25:37.405157 env[1313]: time="2024-02-08T23:25:37.405115263Z" level=error msg="Failed to pipe stderr of container \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\"" error="reading from a closed fifo"
Feb  8 23:25:37.405299 env[1313]: time="2024-02-08T23:25:37.405115363Z" level=error msg="Failed to pipe stdout of container \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\"" error="reading from a closed fifo"
Feb  8 23:25:37.410628 env[1313]: time="2024-02-08T23:25:37.410583859Z" level=error msg="StartContainer for \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb  8 23:25:37.411076 kubelet[2415]: E0208 23:25:37.410826    2415 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb"
Feb  8 23:25:37.411076 kubelet[2415]: E0208 23:25:37.410979    2415 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb  8 23:25:37.411076 kubelet[2415]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb  8 23:25:37.411076 kubelet[2415]: rm /hostbin/cilium-mount
Feb  8 23:25:37.411305 kubelet[2415]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jwmn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7t4qd_kube-system(17f35a04-9ca9-4269-80db-6c942c5edee6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb  8 23:25:37.411431 kubelet[2415]: E0208 23:25:37.411032    2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7t4qd" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6"
Feb  8 23:25:37.446152 env[1313]: time="2024-02-08T23:25:37.446038830Z" level=info msg="CreateContainer within sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb  8 23:25:37.472959 kubelet[2415]: E0208 23:25:37.472930    2415 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  8 23:25:37.486729 env[1313]: time="2024-02-08T23:25:37.486690698Z" level=info msg="CreateContainer within sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\""
Feb  8 23:25:37.487148 env[1313]: time="2024-02-08T23:25:37.487120697Z" level=info msg="StartContainer for \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\""
Feb  8 23:25:37.503334 systemd[1]: Started cri-containerd-07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d.scope.
Feb  8 23:25:37.517462 systemd[1]: cri-containerd-07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d.scope: Deactivated successfully.
Feb  8 23:25:37.531408 env[1313]: time="2024-02-08T23:25:37.531356662Z" level=info msg="shim disconnected" id=07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d
Feb  8 23:25:37.531589 env[1313]: time="2024-02-08T23:25:37.531412362Z" level=warning msg="cleaning up after shim disconnected" id=07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d namespace=k8s.io
Feb  8 23:25:37.531589 env[1313]: time="2024-02-08T23:25:37.531424262Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:37.539598 env[1313]: time="2024-02-08T23:25:37.539563455Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4298 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:25:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb  8 23:25:37.539871 env[1313]: time="2024-02-08T23:25:37.539811255Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed"
Feb  8 23:25:37.542185 env[1313]: time="2024-02-08T23:25:37.542132553Z" level=error msg="Failed to pipe stdout of container \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\"" error="reading from a closed fifo"
Feb  8 23:25:37.542266 env[1313]: time="2024-02-08T23:25:37.542230353Z" level=error msg="Failed to pipe stderr of container \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\"" error="reading from a closed fifo"
Feb  8 23:25:37.546354 env[1313]: time="2024-02-08T23:25:37.546315150Z" level=error msg="StartContainer for \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb  8 23:25:37.546643 kubelet[2415]: E0208 23:25:37.546623    2415 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d"
Feb  8 23:25:37.546761 kubelet[2415]: E0208 23:25:37.546747    2415 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb  8 23:25:37.546761 kubelet[2415]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb  8 23:25:37.546761 kubelet[2415]: rm /hostbin/cilium-mount
Feb  8 23:25:37.546875 kubelet[2415]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jwmn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7t4qd_kube-system(17f35a04-9ca9-4269-80db-6c942c5edee6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb  8 23:25:37.546875 kubelet[2415]: E0208 23:25:37.546800    2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7t4qd" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6"
Feb  8 23:25:37.719585 sshd[4187]: Accepted publickey for core from 10.200.12.6 port 51232 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:37.720251 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:37.725127 systemd-logind[1301]: New session 25 of user core.
Feb  8 23:25:37.725644 systemd[1]: Started session-25.scope.
Feb  8 23:25:38.224641 sshd[4187]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:38.227742 systemd[1]: sshd@22-10.200.8.39:22-10.200.12.6:51232.service: Deactivated successfully.
Feb  8 23:25:38.228762 systemd[1]: session-25.scope: Deactivated successfully.
Feb  8 23:25:38.229494 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit.
Feb  8 23:25:38.230377 systemd-logind[1301]: Removed session 25.
Feb  8 23:25:38.329466 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.12.6:51240.service.
Feb  8 23:25:38.444111 kubelet[2415]: I0208 23:25:38.444082    2415 scope.go:117] "RemoveContainer" containerID="73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb"
Feb  8 23:25:38.444786 env[1313]: time="2024-02-08T23:25:38.444742361Z" level=info msg="StopPodSandbox for \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\""
Feb  8 23:25:38.445291 env[1313]: time="2024-02-08T23:25:38.445261761Z" level=info msg="Container to stop \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:38.445394 env[1313]: time="2024-02-08T23:25:38.445373061Z" level=info msg="Container to stop \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:25:38.449916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240-shm.mount: Deactivated successfully.
Feb  8 23:25:38.454921 env[1313]: time="2024-02-08T23:25:38.454886154Z" level=info msg="RemoveContainer for \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\""
Feb  8 23:25:38.460594 systemd[1]: cri-containerd-c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240.scope: Deactivated successfully.
Feb  8 23:25:38.472858 env[1313]: time="2024-02-08T23:25:38.472813641Z" level=info msg="RemoveContainer for \"73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb\" returns successfully"
Feb  8 23:25:38.486884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240-rootfs.mount: Deactivated successfully.
Feb  8 23:25:38.530857 env[1313]: time="2024-02-08T23:25:38.530798998Z" level=info msg="shim disconnected" id=c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240
Feb  8 23:25:38.532085 env[1313]: time="2024-02-08T23:25:38.532044397Z" level=warning msg="cleaning up after shim disconnected" id=c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240 namespace=k8s.io
Feb  8 23:25:38.532205 env[1313]: time="2024-02-08T23:25:38.532190197Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:38.541330 env[1313]: time="2024-02-08T23:25:38.541299291Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4341 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:38.541622 env[1313]: time="2024-02-08T23:25:38.541591190Z" level=info msg="TearDown network for sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" successfully"
Feb  8 23:25:38.541703 env[1313]: time="2024-02-08T23:25:38.541619590Z" level=info msg="StopPodSandbox for \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" returns successfully"
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677617    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-clustermesh-secrets\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677676    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-net\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677707    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-hostproc\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677749    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmn2\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-kube-api-access-jwmn2\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677783    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-kernel\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677811    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cni-path\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677844    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-ipsec-secrets\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677877    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-xtables-lock\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677908    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-lib-modules\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677938    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-run\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.677968    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-etc-cni-netd\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.678004    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-config-path\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.678035    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-bpf-maps\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.678104    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-cgroup\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.678150    2415 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-hubble-tls\") pod \"17f35a04-9ca9-4269-80db-6c942c5edee6\" (UID: \"17f35a04-9ca9-4269-80db-6c942c5edee6\") "
Feb  8 23:25:38.680086 kubelet[2415]: I0208 23:25:38.678370    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.681131 kubelet[2415]: I0208 23:25:38.678432    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.681131 kubelet[2415]: I0208 23:25:38.678462    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-hostproc" (OuterVolumeSpecName: "hostproc") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.681131 kubelet[2415]: I0208 23:25:38.678875    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.681131 kubelet[2415]: I0208 23:25:38.678928    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.681131 kubelet[2415]: I0208 23:25:38.678957    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.682677 kubelet[2415]: I0208 23:25:38.682639    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  8 23:25:38.682852 kubelet[2415]: I0208 23:25:38.682703    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.682852 kubelet[2415]: I0208 23:25:38.682772    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.683030 kubelet[2415]: I0208 23:25:38.682995    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cni-path" (OuterVolumeSpecName: "cni-path") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.683120 kubelet[2415]: I0208 23:25:38.683039    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:25:38.688421 systemd[1]: var-lib-kubelet-pods-17f35a04\x2d9ca9\x2d4269\x2d80db\x2d6c942c5edee6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:25:38.694831 kubelet[2415]: I0208 23:25:38.690584    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:25:38.694831 kubelet[2415]: I0208 23:25:38.690664    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-kube-api-access-jwmn2" (OuterVolumeSpecName: "kube-api-access-jwmn2") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "kube-api-access-jwmn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:25:38.694831 kubelet[2415]: I0208 23:25:38.693791    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:25:38.688568 systemd[1]: var-lib-kubelet-pods-17f35a04\x2d9ca9\x2d4269\x2d80db\x2d6c942c5edee6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  8 23:25:38.692257 systemd[1]: var-lib-kubelet-pods-17f35a04\x2d9ca9\x2d4269\x2d80db\x2d6c942c5edee6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djwmn2.mount: Deactivated successfully.
Feb  8 23:25:38.695148 kubelet[2415]: I0208 23:25:38.695122    2415 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "17f35a04-9ca9-4269-80db-6c942c5edee6" (UID: "17f35a04-9ca9-4269-80db-6c942c5edee6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:25:38.779316 kubelet[2415]: I0208 23:25:38.779253    2415 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-hostproc\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779333    2415 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jwmn2\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-kube-api-access-jwmn2\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779382    2415 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779422    2415 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cni-path\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779468    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779514    2415 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-xtables-lock\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779624 kubelet[2415]: I0208 23:25:38.779557    2415 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-lib-modules\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779633    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-run\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779672    2415 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-etc-cni-netd\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779692    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-config-path\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779710    2415 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-bpf-maps\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779730    2415 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-cilium-cgroup\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779747    2415 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17f35a04-9ca9-4269-80db-6c942c5edee6-hubble-tls\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779765    2415 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17f35a04-9ca9-4269-80db-6c942c5edee6-clustermesh-secrets\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.779979 kubelet[2415]: I0208 23:25:38.779786    2415 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17f35a04-9ca9-4269-80db-6c942c5edee6-host-proc-sys-net\") on node \"ci-3510.3.2-a-4203397181\" DevicePath \"\""
Feb  8 23:25:38.946556 sshd[4321]: Accepted publickey for core from 10.200.12.6 port 51240 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc
Feb  8 23:25:38.947961 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:25:38.952822 systemd-logind[1301]: New session 26 of user core.
Feb  8 23:25:38.953326 systemd[1]: Started session-26.scope.
Feb  8 23:25:39.084899 systemd[1]: var-lib-kubelet-pods-17f35a04\x2d9ca9\x2d4269\x2d80db\x2d6c942c5edee6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:25:39.447749 kubelet[2415]: I0208 23:25:39.447617    2415 scope.go:117] "RemoveContainer" containerID="07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d"
Feb  8 23:25:39.452105 env[1313]: time="2024-02-08T23:25:39.451460760Z" level=info msg="RemoveContainer for \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\""
Feb  8 23:25:39.453294 systemd[1]: Removed slice kubepods-burstable-pod17f35a04_9ca9_4269_80db_6c942c5edee6.slice.
Feb  8 23:25:39.482393 kubelet[2415]: I0208 23:25:39.482368    2415 topology_manager.go:215] "Topology Admit Handler" podUID="19b6b6b6-92a4-403d-aab5-3ebfc3367d4e" podNamespace="kube-system" podName="cilium-h5vbd"
Feb  8 23:25:39.482583 kubelet[2415]: E0208 23:25:39.482569    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" containerName="mount-cgroup"
Feb  8 23:25:39.482667 kubelet[2415]: E0208 23:25:39.482658    2415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" containerName="mount-cgroup"
Feb  8 23:25:39.482768 kubelet[2415]: I0208 23:25:39.482757    2415 memory_manager.go:346] "RemoveStaleState removing state" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" containerName="mount-cgroup"
Feb  8 23:25:39.482836 kubelet[2415]: I0208 23:25:39.482828    2415 memory_manager.go:346] "RemoveStaleState removing state" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" containerName="mount-cgroup"
Feb  8 23:25:39.484697 env[1313]: time="2024-02-08T23:25:39.484600838Z" level=info msg="RemoveContainer for \"07d226fc88a2b823c5c45a8d7687c95fb90b21f9483fdc6ed75cba476487c39d\" returns successfully"
Feb  8 23:25:39.490481 systemd[1]: Created slice kubepods-burstable-pod19b6b6b6_92a4_403d_aab5_3ebfc3367d4e.slice.
Feb  8 23:25:39.495971 kubelet[2415]: W0208 23:25:39.495933    2415 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496201 kubelet[2415]: E0208 23:25:39.496186    2415 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496303 kubelet[2415]: W0208 23:25:39.495954    2415 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496387 kubelet[2415]: E0208 23:25:39.496377    2415 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496461 kubelet[2415]: W0208 23:25:39.495982    2415 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496534 kubelet[2415]: E0208 23:25:39.496525    2415 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496597 kubelet[2415]: W0208 23:25:39.496162    2415 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.496667 kubelet[2415]: E0208 23:25:39.496658    2415 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-4203397181" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4203397181' and this object
Feb  8 23:25:39.592126 kubelet[2415]: I0208 23:25:39.592092    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-ipsec-secrets\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592309 kubelet[2415]: I0208 23:25:39.592159    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-lib-modules\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592309 kubelet[2415]: I0208 23:25:39.592185    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-config-path\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592309 kubelet[2415]: I0208 23:25:39.592248    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-hostproc\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592456 kubelet[2415]: I0208 23:25:39.592315    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-cgroup\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592456 kubelet[2415]: I0208 23:25:39.592345    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-host-proc-sys-kernel\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592456 kubelet[2415]: I0208 23:25:39.592416    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-run\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592456 kubelet[2415]: I0208 23:25:39.592447    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cni-path\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592630 kubelet[2415]: I0208 23:25:39.592535    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-etc-cni-netd\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592630 kubelet[2415]: I0208 23:25:39.592598    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-host-proc-sys-net\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592630 kubelet[2415]: I0208 23:25:39.592628    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-clustermesh-secrets\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.592759 kubelet[2415]: I0208 23:25:39.592699    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-xtables-lock\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.593504 kubelet[2415]: I0208 23:25:39.593482    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-hubble-tls\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.593611 kubelet[2415]: I0208 23:25:39.593548    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4k4f\" (UniqueName: \"kubernetes.io/projected/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-kube-api-access-c4k4f\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:39.593611 kubelet[2415]: I0208 23:25:39.593580    2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-bpf-maps\") pod \"cilium-h5vbd\" (UID: \"19b6b6b6-92a4-403d-aab5-3ebfc3367d4e\") " pod="kube-system/cilium-h5vbd"
Feb  8 23:25:40.297524 kubelet[2415]: I0208 23:25:40.297479    2415 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="17f35a04-9ca9-4269-80db-6c942c5edee6" path="/var/lib/kubelet/pods/17f35a04-9ca9-4269-80db-6c942c5edee6/volumes"
Feb  8 23:25:40.502881 kubelet[2415]: W0208 23:25:40.502836    2415 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f35a04_9ca9_4269_80db_6c942c5edee6.slice/cri-containerd-73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb.scope WatchSource:0}: container "73ca534a858603a38807220cd9817e68627110508be50413904b4236711af9cb" in namespace "k8s.io": not found
Feb  8 23:25:40.696023 kubelet[2415]: E0208 23:25:40.695896    2415 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb  8 23:25:40.696023 kubelet[2415]: E0208 23:25:40.695933    2415 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-h5vbd: failed to sync secret cache: timed out waiting for the condition
Feb  8 23:25:40.696253 kubelet[2415]: E0208 23:25:40.695895    2415 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb  8 23:25:40.696420 kubelet[2415]: E0208 23:25:40.696370    2415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-hubble-tls podName:19b6b6b6-92a4-403d-aab5-3ebfc3367d4e nodeName:}" failed. No retries permitted until 2024-02-08 23:25:41.196000794 +0000 UTC m=+519.948482112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-hubble-tls") pod "cilium-h5vbd" (UID: "19b6b6b6-92a4-403d-aab5-3ebfc3367d4e") : failed to sync secret cache: timed out waiting for the condition
Feb  8 23:25:40.696420 kubelet[2415]: E0208 23:25:40.696415    2415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-config-path podName:19b6b6b6-92a4-403d-aab5-3ebfc3367d4e nodeName:}" failed. No retries permitted until 2024-02-08 23:25:41.196402194 +0000 UTC m=+519.948883512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/19b6b6b6-92a4-403d-aab5-3ebfc3367d4e-cilium-config-path") pod "cilium-h5vbd" (UID: "19b6b6b6-92a4-403d-aab5-3ebfc3367d4e") : failed to sync configmap cache: timed out waiting for the condition
Feb  8 23:25:41.295037 env[1313]: time="2024-02-08T23:25:41.294986365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5vbd,Uid:19b6b6b6-92a4-403d-aab5-3ebfc3367d4e,Namespace:kube-system,Attempt:0,}"
Feb  8 23:25:41.331583 env[1313]: time="2024-02-08T23:25:41.331503747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:25:41.331583 env[1313]: time="2024-02-08T23:25:41.331546347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:25:41.331583 env[1313]: time="2024-02-08T23:25:41.331560947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:25:41.331991 env[1313]: time="2024-02-08T23:25:41.331947247Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05 pid=4375 runtime=io.containerd.runc.v2
Feb  8 23:25:41.357494 systemd[1]: Started cri-containerd-5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05.scope.
Feb  8 23:25:41.395195 env[1313]: time="2024-02-08T23:25:41.395160214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5vbd,Uid:19b6b6b6-92a4-403d-aab5-3ebfc3367d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\""
Feb  8 23:25:41.398206 env[1313]: time="2024-02-08T23:25:41.398171513Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:25:41.441902 env[1313]: time="2024-02-08T23:25:41.441855490Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c\""
Feb  8 23:25:41.443924 env[1313]: time="2024-02-08T23:25:41.442486190Z" level=info msg="StartContainer for \"3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c\""
Feb  8 23:25:41.465174 systemd[1]: Started cri-containerd-3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c.scope.
Feb  8 23:25:41.496776 env[1313]: time="2024-02-08T23:25:41.496721862Z" level=info msg="StartContainer for \"3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c\" returns successfully"
Feb  8 23:25:41.502874 systemd[1]: cri-containerd-3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c.scope: Deactivated successfully.
Feb  8 23:25:41.555694 env[1313]: time="2024-02-08T23:25:41.555558532Z" level=info msg="shim disconnected" id=3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c
Feb  8 23:25:41.555694 env[1313]: time="2024-02-08T23:25:41.555619032Z" level=warning msg="cleaning up after shim disconnected" id=3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c namespace=k8s.io
Feb  8 23:25:41.555694 env[1313]: time="2024-02-08T23:25:41.555634032Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:41.564168 env[1313]: time="2024-02-08T23:25:41.564131728Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:42.350678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b1b799db82e03de6dd0aec19e747d0589d3e175703046efa63e13a0c062482c-rootfs.mount: Deactivated successfully.
Feb  8 23:25:42.464321 env[1313]: time="2024-02-08T23:25:42.464278600Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  8 23:25:42.474143 kubelet[2415]: E0208 23:25:42.474102    2415 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  8 23:25:42.504601 env[1313]: time="2024-02-08T23:25:42.504558483Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015\""
Feb  8 23:25:42.506273 env[1313]: time="2024-02-08T23:25:42.506229282Z" level=info msg="StartContainer for \"0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015\""
Feb  8 23:25:42.524654 systemd[1]: Started cri-containerd-0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015.scope.
Feb  8 23:25:42.558007 env[1313]: time="2024-02-08T23:25:42.557959159Z" level=info msg="StartContainer for \"0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015\" returns successfully"
Feb  8 23:25:42.559676 systemd[1]: cri-containerd-0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015.scope: Deactivated successfully.
Feb  8 23:25:42.588893 env[1313]: time="2024-02-08T23:25:42.588842946Z" level=info msg="shim disconnected" id=0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015
Feb  8 23:25:42.589146 env[1313]: time="2024-02-08T23:25:42.588895446Z" level=warning msg="cleaning up after shim disconnected" id=0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015 namespace=k8s.io
Feb  8 23:25:42.589146 env[1313]: time="2024-02-08T23:25:42.588908746Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:42.596725 env[1313]: time="2024-02-08T23:25:42.596689142Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4526 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:43.156672 kubelet[2415]: I0208 23:25:43.156642    2415 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-4203397181" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-08T23:25:43Z","lastTransitionTime":"2024-02-08T23:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb  8 23:25:43.350701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a179e6ef1cebe55ccef0036858d88d231e6c9b29fbabbd4070fb9ed425c8015-rootfs.mount: Deactivated successfully.
Feb  8 23:25:43.465330 env[1313]: time="2024-02-08T23:25:43.465209493Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  8 23:25:43.519628 env[1313]: time="2024-02-08T23:25:43.519518773Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03\""
Feb  8 23:25:43.520459 env[1313]: time="2024-02-08T23:25:43.520425173Z" level=info msg="StartContainer for \"0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03\""
Feb  8 23:25:43.551337 systemd[1]: Started cri-containerd-0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03.scope.
Feb  8 23:25:43.588971 systemd[1]: cri-containerd-0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03.scope: Deactivated successfully.
Feb  8 23:25:43.590566 env[1313]: time="2024-02-08T23:25:43.590527847Z" level=info msg="StartContainer for \"0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03\" returns successfully"
Feb  8 23:25:43.622456 env[1313]: time="2024-02-08T23:25:43.622406535Z" level=info msg="shim disconnected" id=0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03
Feb  8 23:25:43.622456 env[1313]: time="2024-02-08T23:25:43.622455835Z" level=warning msg="cleaning up after shim disconnected" id=0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03 namespace=k8s.io
Feb  8 23:25:43.622751 env[1313]: time="2024-02-08T23:25:43.622467735Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:43.630470 env[1313]: time="2024-02-08T23:25:43.630432932Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4584 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:44.350988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eec3c69ab47094d4b0211d005635bce5800d072ad157cccbcdbc7c7136faf03-rootfs.mount: Deactivated successfully.
Feb  8 23:25:44.472362 env[1313]: time="2024-02-08T23:25:44.472318754Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  8 23:25:44.512630 env[1313]: time="2024-02-08T23:25:44.512586442Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740\""
Feb  8 23:25:44.513242 env[1313]: time="2024-02-08T23:25:44.513202542Z" level=info msg="StartContainer for \"1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740\""
Feb  8 23:25:44.537488 systemd[1]: Started cri-containerd-1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740.scope.
Feb  8 23:25:44.562462 systemd[1]: cri-containerd-1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740.scope: Deactivated successfully.
Feb  8 23:25:44.567237 env[1313]: time="2024-02-08T23:25:44.567154726Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19b6b6b6_92a4_403d_aab5_3ebfc3367d4e.slice/cri-containerd-1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740.scope/memory.events\": no such file or directory"
Feb  8 23:25:44.575248 env[1313]: time="2024-02-08T23:25:44.575204824Z" level=info msg="StartContainer for \"1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740\" returns successfully"
Feb  8 23:25:44.607093 env[1313]: time="2024-02-08T23:25:44.606488814Z" level=info msg="shim disconnected" id=1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740
Feb  8 23:25:44.607093 env[1313]: time="2024-02-08T23:25:44.606536214Z" level=warning msg="cleaning up after shim disconnected" id=1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740 namespace=k8s.io
Feb  8 23:25:44.607093 env[1313]: time="2024-02-08T23:25:44.606548314Z" level=info msg="cleaning up dead shim"
Feb  8 23:25:44.613823 env[1313]: time="2024-02-08T23:25:44.613788412Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:25:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4641 runtime=io.containerd.runc.v2\n"
Feb  8 23:25:45.351106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff440f96fdb8bb15116ab8f46fa6ddf1da3e81857cb6098292b3b0d97739740-rootfs.mount: Deactivated successfully.
Feb  8 23:25:45.474448 env[1313]: time="2024-02-08T23:25:45.474400588Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  8 23:25:45.517623 env[1313]: time="2024-02-08T23:25:45.517579178Z" level=info msg="CreateContainer within sandbox \"5a76a19408aca917f932330a7c7a90321d600296a1f1240b10dc04a2744c1c05\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33f8ee03b84126a54442e9ba9d4d5fa5f8a5b5c2aae37f98c5e2a7f2c5cdd710\""
Feb  8 23:25:45.518390 env[1313]: time="2024-02-08T23:25:45.518357578Z" level=info msg="StartContainer for \"33f8ee03b84126a54442e9ba9d4d5fa5f8a5b5c2aae37f98c5e2a7f2c5cdd710\""
Feb  8 23:25:45.543127 systemd[1]: Started cri-containerd-33f8ee03b84126a54442e9ba9d4d5fa5f8a5b5c2aae37f98c5e2a7f2c5cdd710.scope.
Feb  8 23:25:45.587804 env[1313]: time="2024-02-08T23:25:45.587756962Z" level=info msg="StartContainer for \"33f8ee03b84126a54442e9ba9d4d5fa5f8a5b5c2aae37f98c5e2a7f2c5cdd710\" returns successfully"
Feb  8 23:25:45.942083 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb  8 23:25:46.489826 kubelet[2415]: I0208 23:25:46.489780    2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h5vbd" podStartSLOduration=7.489737889 podCreationTimestamp="2024-02-08 23:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:25:46.489307989 +0000 UTC m=+525.241789307" watchObservedRunningTime="2024-02-08 23:25:46.489737889 +0000 UTC m=+525.242219207"
Feb  8 23:25:47.614486 systemd[1]: run-containerd-runc-k8s.io-33f8ee03b84126a54442e9ba9d4d5fa5f8a5b5c2aae37f98c5e2a7f2c5cdd710-runc.2fjiys.mount: Deactivated successfully.
Feb  8 23:25:48.721987 systemd-networkd[1456]: lxc_health: Link UP
Feb  8 23:25:48.741077 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  8 23:25:48.741113 systemd-networkd[1456]: lxc_health: Gained carrier
Feb  8 23:25:50.476226 systemd-networkd[1456]: lxc_health: Gained IPv6LL
Feb  8 23:25:54.294312 sshd[4321]: pam_unix(sshd:session): session closed for user core
Feb  8 23:25:54.299681 systemd[1]: sshd@23-10.200.8.39:22-10.200.12.6:51240.service: Deactivated successfully.
Feb  8 23:25:54.300572 systemd[1]: session-26.scope: Deactivated successfully.
Feb  8 23:25:54.301566 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit.
Feb  8 23:25:54.302673 systemd-logind[1301]: Removed session 26.
Feb  8 23:26:02.306626 env[1313]: time="2024-02-08T23:26:02.306577531Z" level=info msg="StopPodSandbox for \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\""
Feb  8 23:26:02.307114 env[1313]: time="2024-02-08T23:26:02.307044631Z" level=info msg="TearDown network for sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" successfully"
Feb  8 23:26:02.307114 env[1313]: time="2024-02-08T23:26:02.307108831Z" level=info msg="StopPodSandbox for \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" returns successfully"
Feb  8 23:26:02.309061 env[1313]: time="2024-02-08T23:26:02.307642632Z" level=info msg="RemovePodSandbox for \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\""
Feb  8 23:26:02.309061 env[1313]: time="2024-02-08T23:26:02.307682032Z" level=info msg="Forcibly stopping sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\""
Feb  8 23:26:02.309061 env[1313]: time="2024-02-08T23:26:02.307780632Z" level=info msg="TearDown network for sandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" successfully"
Feb  8 23:26:02.331933 env[1313]: time="2024-02-08T23:26:02.331891153Z" level=info msg="RemovePodSandbox \"253539bd75fae8315b3b9b55c918770f33ae9e4f11e2c2547889cde6ee9d35bc\" returns successfully"
Feb  8 23:26:02.332453 env[1313]: time="2024-02-08T23:26:02.332409453Z" level=info msg="StopPodSandbox for \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\""
Feb  8 23:26:02.332547 env[1313]: time="2024-02-08T23:26:02.332506954Z" level=info msg="TearDown network for sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" successfully"
Feb  8 23:26:02.332597 env[1313]: time="2024-02-08T23:26:02.332553054Z" level=info msg="StopPodSandbox for \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" returns successfully"
Feb  8 23:26:02.332883 env[1313]: time="2024-02-08T23:26:02.332857354Z" level=info msg="RemovePodSandbox for \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\""
Feb  8 23:26:02.332978 env[1313]: time="2024-02-08T23:26:02.332884754Z" level=info msg="Forcibly stopping sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\""
Feb  8 23:26:02.332978 env[1313]: time="2024-02-08T23:26:02.332966454Z" level=info msg="TearDown network for sandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" successfully"
Feb  8 23:26:02.350121 env[1313]: time="2024-02-08T23:26:02.350084469Z" level=info msg="RemovePodSandbox \"58a2183097e2cc6c99e18c9ae6dc0a11fd77c59a9136d65b6d3c7741b52da3eb\" returns successfully"
Feb  8 23:26:02.350498 env[1313]: time="2024-02-08T23:26:02.350472569Z" level=info msg="StopPodSandbox for \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\""
Feb  8 23:26:02.350626 env[1313]: time="2024-02-08T23:26:02.350559469Z" level=info msg="TearDown network for sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" successfully"
Feb  8 23:26:02.350695 env[1313]: time="2024-02-08T23:26:02.350631169Z" level=info msg="StopPodSandbox for \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" returns successfully"
Feb  8 23:26:02.352073 env[1313]: time="2024-02-08T23:26:02.350978470Z" level=info msg="RemovePodSandbox for \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\""
Feb  8 23:26:02.352073 env[1313]: time="2024-02-08T23:26:02.351006670Z" level=info msg="Forcibly stopping sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\""
Feb  8 23:26:02.352073 env[1313]: time="2024-02-08T23:26:02.351106770Z" level=info msg="TearDown network for sandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" successfully"
Feb  8 23:26:02.360284 env[1313]: time="2024-02-08T23:26:02.360246078Z" level=info msg="RemovePodSandbox \"c77f988e7ed8e5fd863c0dd6fdbfa6cb0697598fd0b20d6167c45dc563221240\" returns successfully"
Feb  8 23:26:08.025422 systemd[1]: cri-containerd-9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661.scope: Deactivated successfully.
Feb  8 23:26:08.025728 systemd[1]: cri-containerd-9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661.scope: Consumed 5.624s CPU time.
Feb  8 23:26:08.046311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661-rootfs.mount: Deactivated successfully.
Feb  8 23:26:08.096606 env[1313]: time="2024-02-08T23:26:08.096548945Z" level=info msg="shim disconnected" id=9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661
Feb  8 23:26:08.096606 env[1313]: time="2024-02-08T23:26:08.096606845Z" level=warning msg="cleaning up after shim disconnected" id=9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661 namespace=k8s.io
Feb  8 23:26:08.097316 env[1313]: time="2024-02-08T23:26:08.096619545Z" level=info msg="cleaning up dead shim"
Feb  8 23:26:08.108259 env[1313]: time="2024-02-08T23:26:08.108216659Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:26:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5339 runtime=io.containerd.runc.v2\n"
Feb  8 23:26:08.523296 kubelet[2415]: I0208 23:26:08.523191    2415 scope.go:117] "RemoveContainer" containerID="9d991459f282ce4da89f3ac0dec99092ec0a0aceab3e47e161aa3e101024d661"
Feb  8 23:26:08.526024 env[1313]: time="2024-02-08T23:26:08.525979475Z" level=info msg="CreateContainer within sandbox \"b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb  8 23:26:08.613326 env[1313]: time="2024-02-08T23:26:08.613222782Z" level=info msg="CreateContainer within sandbox \"b5709b95ccc0cc21e708cd3522833638c00f6cc2024a5cd920d550e400704d18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"74d35dcc9542e229ea5170984e2d4b7c3b143fd5d8914c9dd047203b0e0b587b\""
Feb  8 23:26:08.613951 env[1313]: time="2024-02-08T23:26:08.613913683Z" level=info msg="StartContainer for \"74d35dcc9542e229ea5170984e2d4b7c3b143fd5d8914c9dd047203b0e0b587b\""
Feb  8 23:26:08.644615 systemd[1]: Started cri-containerd-74d35dcc9542e229ea5170984e2d4b7c3b143fd5d8914c9dd047203b0e0b587b.scope.
Feb  8 23:26:08.693447 env[1313]: time="2024-02-08T23:26:08.693395581Z" level=info msg="StartContainer for \"74d35dcc9542e229ea5170984e2d4b7c3b143fd5d8914c9dd047203b0e0b587b\" returns successfully"
Feb  8 23:26:09.046467 systemd[1]: run-containerd-runc-k8s.io-74d35dcc9542e229ea5170984e2d4b7c3b143fd5d8914c9dd047203b0e0b587b-runc.pk4MoN.mount: Deactivated successfully.
Feb  8 23:26:12.545618 kubelet[2415]: E0208 23:26:12.545582    2415 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb  8 23:26:12.557041 kubelet[2415]: E0208 23:26:12.557007    2415 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40302->10.200.8.13:2379: read: connection timed out"
Feb  8 23:26:12.563650 systemd[1]: cri-containerd-9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0.scope: Deactivated successfully.
Feb  8 23:26:12.563952 systemd[1]: cri-containerd-9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0.scope: Consumed 2.102s CPU time.
Feb  8 23:26:12.582604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0-rootfs.mount: Deactivated successfully.
Feb  8 23:26:12.610867 env[1313]: time="2024-02-08T23:26:12.610811700Z" level=info msg="shim disconnected" id=9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0
Feb  8 23:26:12.610867 env[1313]: time="2024-02-08T23:26:12.610866200Z" level=warning msg="cleaning up after shim disconnected" id=9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0 namespace=k8s.io
Feb  8 23:26:12.611460 env[1313]: time="2024-02-08T23:26:12.610878100Z" level=info msg="cleaning up dead shim"
Feb  8 23:26:12.622391 kubelet[2415]: E0208 23:26:12.621990    2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-4203397181.17b206deaaa43c8b", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-4203397181", UID:"548d17ac7794c04911c1c531f10eb33e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4203397181"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 26, 2, 142235787, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 26, 2, 142235787, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-4203397181"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40120->10.200.8.13:2379: read: connection timed out' (will not retry!)
Feb  8 23:26:12.623734 env[1313]: time="2024-02-08T23:26:12.623700519Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:26:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5397 runtime=io.containerd.runc.v2\n"
Feb  8 23:26:13.536708 kubelet[2415]: I0208 23:26:13.536674    2415 scope.go:117] "RemoveContainer" containerID="9d02c7e8671be232b93ca7c3da871056f7a36c35bcd6a4fbcfcea391ab1557d0"
Feb  8 23:26:13.538804 env[1313]: time="2024-02-08T23:26:13.538757187Z" level=info msg="CreateContainer within sandbox \"c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb  8 23:26:13.587137 env[1313]: time="2024-02-08T23:26:13.587027160Z" level=info msg="CreateContainer within sandbox \"c3d4b3caf8a27d12692bc0de3054c6052bfc27a05c79becde77bfdbc64a5870f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8d36bf525daeac4f68f46f5d9a79d1d7d7b3f43fbb8c906632605b5e305b9a10\""
Feb  8 23:26:13.587581 env[1313]: time="2024-02-08T23:26:13.587547061Z" level=info msg="StartContainer for \"8d36bf525daeac4f68f46f5d9a79d1d7d7b3f43fbb8c906632605b5e305b9a10\""
Feb  8 23:26:13.614964 systemd[1]: run-containerd-runc-k8s.io-8d36bf525daeac4f68f46f5d9a79d1d7d7b3f43fbb8c906632605b5e305b9a10-runc.pUotD8.mount: Deactivated successfully.
Feb  8 23:26:13.616755 systemd[1]: Started cri-containerd-8d36bf525daeac4f68f46f5d9a79d1d7d7b3f43fbb8c906632605b5e305b9a10.scope.
Feb  8 23:26:13.666659 env[1313]: time="2024-02-08T23:26:13.666600181Z" level=info msg="StartContainer for \"8d36bf525daeac4f68f46f5d9a79d1d7d7b3f43fbb8c906632605b5e305b9a10\" returns successfully"
Feb  8 23:26:18.382920 kubelet[2415]: I0208 23:26:18.382419    2415 status_manager.go:853] "Failed to get status for pod" podUID="548d17ac7794c04911c1c531f10eb33e" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4203397181" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:40224->10.200.8.13:2379: read: connection timed out"
Feb  8 23:26:22.557510 kubelet[2415]: E0208 23:26:22.557202    2415 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4203397181?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"