Feb  9 19:41:00.818863 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb  9 19:41:00.818881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 19:41:00.818891 kernel: BIOS-provided physical RAM map:
Feb  9 19:41:00.818896 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb  9 19:41:00.818901 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb  9 19:41:00.818907 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb  9 19:41:00.818913 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb  9 19:41:00.818919 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb  9 19:41:00.818924 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb  9 19:41:00.818931 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb  9 19:41:00.818936 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb  9 19:41:00.818941 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb  9 19:41:00.818947 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb  9 19:41:00.818952 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb  9 19:41:00.818959 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb  9 19:41:00.818966 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb  9 19:41:00.818972 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb  9 19:41:00.818977 kernel: NX (Execute Disable) protection: active
Feb  9 19:41:00.818983 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb  9 19:41:00.818989 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb  9 19:41:00.818995 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb  9 19:41:00.819000 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb  9 19:41:00.819006 kernel: extended physical RAM map:
Feb  9 19:41:00.819011 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb  9 19:41:00.819017 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb  9 19:41:00.819024 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb  9 19:41:00.819030 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb  9 19:41:00.819036 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb  9 19:41:00.819041 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb  9 19:41:00.819047 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb  9 19:41:00.819053 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb  9 19:41:00.819058 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb  9 19:41:00.819064 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb  9 19:41:00.819070 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb  9 19:41:00.819075 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb  9 19:41:00.819081 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb  9 19:41:00.819088 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb  9 19:41:00.819093 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb  9 19:41:00.819099 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb  9 19:41:00.819105 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb  9 19:41:00.819113 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb  9 19:41:00.819119 kernel: efi: EFI v2.70 by EDK II
Feb  9 19:41:00.819125 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 
Feb  9 19:41:00.819133 kernel: random: crng init done
Feb  9 19:41:00.819139 kernel: SMBIOS 2.8 present.
Feb  9 19:41:00.819145 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb  9 19:41:00.819151 kernel: Hypervisor detected: KVM
Feb  9 19:41:00.819177 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  9 19:41:00.819183 kernel: kvm-clock: cpu 0, msr 20faa001, primary cpu clock
Feb  9 19:41:00.819189 kernel: kvm-clock: using sched offset of 3934342217 cycles
Feb  9 19:41:00.819196 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  9 19:41:00.819203 kernel: tsc: Detected 2794.750 MHz processor
Feb  9 19:41:00.819212 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb  9 19:41:00.819218 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb  9 19:41:00.819224 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb  9 19:41:00.819231 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  9 19:41:00.819237 kernel: Using GB pages for direct mapping
Feb  9 19:41:00.819243 kernel: Secure boot disabled
Feb  9 19:41:00.819250 kernel: ACPI: Early table checksum verification disabled
Feb  9 19:41:00.819256 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb  9 19:41:00.819262 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS  BXPC     00000001      01000013)
Feb  9 19:41:00.819279 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 19:41:00.819286 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 19:41:00.819292 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb  9 19:41:00.819298 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 19:41:00.819305 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 19:41:00.819311 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 19:41:00.819317 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL  EDK2     00000002      01000013)
Feb  9 19:41:00.819324 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb  9 19:41:00.819330 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb  9 19:41:00.819338 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb  9 19:41:00.819344 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb  9 19:41:00.819350 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb  9 19:41:00.819356 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb  9 19:41:00.819363 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb  9 19:41:00.819369 kernel: No NUMA configuration found
Feb  9 19:41:00.819375 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb  9 19:41:00.819381 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb  9 19:41:00.819388 kernel: Zone ranges:
Feb  9 19:41:00.819395 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  9 19:41:00.819401 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cf3ffff]
Feb  9 19:41:00.819408 kernel:   Normal   empty
Feb  9 19:41:00.819414 kernel: Movable zone start for each node
Feb  9 19:41:00.819420 kernel: Early memory node ranges
Feb  9 19:41:00.819426 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Feb  9 19:41:00.819433 kernel:   node   0: [mem 0x0000000000100000-0x00000000007fffff]
Feb  9 19:41:00.819439 kernel:   node   0: [mem 0x0000000000808000-0x000000000080afff]
Feb  9 19:41:00.819445 kernel:   node   0: [mem 0x000000000080c000-0x000000000080ffff]
Feb  9 19:41:00.819452 kernel:   node   0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb  9 19:41:00.819459 kernel:   node   0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb  9 19:41:00.819465 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb  9 19:41:00.819471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  9 19:41:00.819477 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb  9 19:41:00.819484 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb  9 19:41:00.819490 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  9 19:41:00.819496 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb  9 19:41:00.819502 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb  9 19:41:00.819510 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb  9 19:41:00.819516 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb  9 19:41:00.819523 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  9 19:41:00.819529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  9 19:41:00.819535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  9 19:41:00.819542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  9 19:41:00.819548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  9 19:41:00.819554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  9 19:41:00.819560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  9 19:41:00.819568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  9 19:41:00.819574 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb  9 19:41:00.819580 kernel: TSC deadline timer available
Feb  9 19:41:00.819586 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb  9 19:41:00.819593 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb  9 19:41:00.819599 kernel: kvm-guest: setup PV sched yield
Feb  9 19:41:00.819605 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb  9 19:41:00.819611 kernel: Booting paravirtualized kernel on KVM
Feb  9 19:41:00.819618 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  9 19:41:00.819624 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb  9 19:41:00.819632 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb  9 19:41:00.819638 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb  9 19:41:00.819649 kernel: pcpu-alloc: [0] 0 1 2 3 
Feb  9 19:41:00.819657 kernel: kvm-guest: setup async PF for cpu 0
Feb  9 19:41:00.819663 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb  9 19:41:00.819670 kernel: kvm-guest: PV spinlocks enabled
Feb  9 19:41:00.819676 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb  9 19:41:00.819690 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 629759
Feb  9 19:41:00.819697 kernel: Policy zone: DMA32
Feb  9 19:41:00.819705 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 19:41:00.819712 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  9 19:41:00.819720 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  9 19:41:00.819727 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb  9 19:41:00.819733 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  9 19:41:00.819741 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb  9 19:41:00.819748 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb  9 19:41:00.819755 kernel: ftrace: allocating 34475 entries in 135 pages
Feb  9 19:41:00.819762 kernel: ftrace: allocated 135 pages with 4 groups
Feb  9 19:41:00.819769 kernel: rcu: Hierarchical RCU implementation.
Feb  9 19:41:00.819776 kernel: rcu:         RCU event tracing is enabled.
Feb  9 19:41:00.819783 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb  9 19:41:00.819790 kernel:         Rude variant of Tasks RCU enabled.
Feb  9 19:41:00.819796 kernel:         Tracing variant of Tasks RCU enabled.
Feb  9 19:41:00.819803 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  9 19:41:00.819810 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb  9 19:41:00.819818 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb  9 19:41:00.819824 kernel: Console: colour dummy device 80x25
Feb  9 19:41:00.819831 kernel: printk: console [ttyS0] enabled
Feb  9 19:41:00.819838 kernel: ACPI: Core revision 20210730
Feb  9 19:41:00.819845 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb  9 19:41:00.819851 kernel: APIC: Switch to symmetric I/O mode setup
Feb  9 19:41:00.819858 kernel: x2apic enabled
Feb  9 19:41:00.819865 kernel: Switched APIC routing to physical x2apic.
Feb  9 19:41:00.819871 kernel: kvm-guest: setup PV IPIs
Feb  9 19:41:00.819879 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb  9 19:41:00.819885 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  9 19:41:00.819892 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb  9 19:41:00.819899 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  9 19:41:00.819906 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  9 19:41:00.819912 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  9 19:41:00.819919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  9 19:41:00.819926 kernel: Spectre V2 : Mitigation: Retpolines
Feb  9 19:41:00.819932 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb  9 19:41:00.819940 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb  9 19:41:00.819947 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  9 19:41:00.819953 kernel: RETBleed: Mitigation: untrained return thunk
Feb  9 19:41:00.819960 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  9 19:41:00.819967 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb  9 19:41:00.819974 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  9 19:41:00.819980 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  9 19:41:00.819987 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  9 19:41:00.819995 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  9 19:41:00.820002 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb  9 19:41:00.820008 kernel: Freeing SMP alternatives memory: 32K
Feb  9 19:41:00.820015 kernel: pid_max: default: 32768 minimum: 301
Feb  9 19:41:00.820022 kernel: LSM: Security Framework initializing
Feb  9 19:41:00.820028 kernel: SELinux:  Initializing.
Feb  9 19:41:00.820035 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 19:41:00.820042 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 19:41:00.820048 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  9 19:41:00.820056 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  9 19:41:00.820063 kernel: ... version:                0
Feb  9 19:41:00.820069 kernel: ... bit width:              48
Feb  9 19:41:00.820076 kernel: ... generic registers:      6
Feb  9 19:41:00.820082 kernel: ... value mask:             0000ffffffffffff
Feb  9 19:41:00.820089 kernel: ... max period:             00007fffffffffff
Feb  9 19:41:00.820096 kernel: ... fixed-purpose events:   0
Feb  9 19:41:00.820102 kernel: ... event mask:             000000000000003f
Feb  9 19:41:00.820109 kernel: signal: max sigframe size: 1776
Feb  9 19:41:00.820115 kernel: rcu: Hierarchical SRCU implementation.
Feb  9 19:41:00.820123 kernel: smp: Bringing up secondary CPUs ...
Feb  9 19:41:00.820130 kernel: x86: Booting SMP configuration:
Feb  9 19:41:00.820136 kernel: .... node  #0, CPUs:      #1
Feb  9 19:41:00.820143 kernel: kvm-clock: cpu 1, msr 20faa041, secondary cpu clock
Feb  9 19:41:00.820149 kernel: kvm-guest: setup async PF for cpu 1
Feb  9 19:41:00.820156 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb  9 19:41:00.820163 kernel:  #2
Feb  9 19:41:00.820170 kernel: kvm-clock: cpu 2, msr 20faa081, secondary cpu clock
Feb  9 19:41:00.820176 kernel: kvm-guest: setup async PF for cpu 2
Feb  9 19:41:00.820184 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb  9 19:41:00.820190 kernel:  #3
Feb  9 19:41:00.820197 kernel: kvm-clock: cpu 3, msr 20faa0c1, secondary cpu clock
Feb  9 19:41:00.820204 kernel: kvm-guest: setup async PF for cpu 3
Feb  9 19:41:00.820210 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb  9 19:41:00.820217 kernel: smp: Brought up 1 node, 4 CPUs
Feb  9 19:41:00.820223 kernel: smpboot: Max logical packages: 1
Feb  9 19:41:00.820230 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb  9 19:41:00.820237 kernel: devtmpfs: initialized
Feb  9 19:41:00.820245 kernel: x86/mm: Memory block size: 128MB
Feb  9 19:41:00.820252 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb  9 19:41:00.820258 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb  9 19:41:00.820265 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb  9 19:41:00.820293 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb  9 19:41:00.820300 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb  9 19:41:00.820307 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  9 19:41:00.820314 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb  9 19:41:00.820321 kernel: pinctrl core: initialized pinctrl subsystem
Feb  9 19:41:00.820329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  9 19:41:00.820336 kernel: audit: initializing netlink subsys (disabled)
Feb  9 19:41:00.820342 kernel: audit: type=2000 audit(1707507659.496:1): state=initialized audit_enabled=0 res=1
Feb  9 19:41:00.820349 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  9 19:41:00.820356 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  9 19:41:00.820377 kernel: cpuidle: using governor menu
Feb  9 19:41:00.820384 kernel: ACPI: bus type PCI registered
Feb  9 19:41:00.820391 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  9 19:41:00.820397 kernel: dca service started, version 1.12.1
Feb  9 19:41:00.820405 kernel: PCI: Using configuration type 1 for base access
Feb  9 19:41:00.820412 kernel: PCI: Using configuration type 1 for extended access
Feb  9 19:41:00.820419 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  9 19:41:00.820426 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb  9 19:41:00.820433 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  9 19:41:00.820439 kernel: ACPI: Added _OSI(Module Device)
Feb  9 19:41:00.820446 kernel: ACPI: Added _OSI(Processor Device)
Feb  9 19:41:00.820452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  9 19:41:00.820459 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  9 19:41:00.820467 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  9 19:41:00.820474 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  9 19:41:00.820480 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  9 19:41:00.820487 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  9 19:41:00.820494 kernel: ACPI: Interpreter enabled
Feb  9 19:41:00.820500 kernel: ACPI: PM: (supports S0 S3 S5)
Feb  9 19:41:00.820507 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  9 19:41:00.820514 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  9 19:41:00.820521 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  9 19:41:00.820529 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  9 19:41:00.820644 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb  9 19:41:00.820657 kernel: acpiphp: Slot [3] registered
Feb  9 19:41:00.820664 kernel: acpiphp: Slot [4] registered
Feb  9 19:41:00.820671 kernel: acpiphp: Slot [5] registered
Feb  9 19:41:00.820677 kernel: acpiphp: Slot [6] registered
Feb  9 19:41:00.820691 kernel: acpiphp: Slot [7] registered
Feb  9 19:41:00.820699 kernel: acpiphp: Slot [8] registered
Feb  9 19:41:00.820705 kernel: acpiphp: Slot [9] registered
Feb  9 19:41:00.820714 kernel: acpiphp: Slot [10] registered
Feb  9 19:41:00.820721 kernel: acpiphp: Slot [11] registered
Feb  9 19:41:00.820727 kernel: acpiphp: Slot [12] registered
Feb  9 19:41:00.820734 kernel: acpiphp: Slot [13] registered
Feb  9 19:41:00.820740 kernel: acpiphp: Slot [14] registered
Feb  9 19:41:00.820747 kernel: acpiphp: Slot [15] registered
Feb  9 19:41:00.820754 kernel: acpiphp: Slot [16] registered
Feb  9 19:41:00.820760 kernel: acpiphp: Slot [17] registered
Feb  9 19:41:00.820767 kernel: acpiphp: Slot [18] registered
Feb  9 19:41:00.820775 kernel: acpiphp: Slot [19] registered
Feb  9 19:41:00.820782 kernel: acpiphp: Slot [20] registered
Feb  9 19:41:00.820788 kernel: acpiphp: Slot [21] registered
Feb  9 19:41:00.820795 kernel: acpiphp: Slot [22] registered
Feb  9 19:41:00.820801 kernel: acpiphp: Slot [23] registered
Feb  9 19:41:00.820808 kernel: acpiphp: Slot [24] registered
Feb  9 19:41:00.820814 kernel: acpiphp: Slot [25] registered
Feb  9 19:41:00.820821 kernel: acpiphp: Slot [26] registered
Feb  9 19:41:00.820827 kernel: acpiphp: Slot [27] registered
Feb  9 19:41:00.820835 kernel: acpiphp: Slot [28] registered
Feb  9 19:41:00.820842 kernel: acpiphp: Slot [29] registered
Feb  9 19:41:00.820848 kernel: acpiphp: Slot [30] registered
Feb  9 19:41:00.820855 kernel: acpiphp: Slot [31] registered
Feb  9 19:41:00.820861 kernel: PCI host bridge to bus 0000:00
Feb  9 19:41:00.820939 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  9 19:41:00.821001 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  9 19:41:00.821061 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  9 19:41:00.821124 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb  9 19:41:00.821183 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb  9 19:41:00.821244 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  9 19:41:00.821337 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb  9 19:41:00.821421 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb  9 19:41:00.821499 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb  9 19:41:00.821569 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc0c0-0xc0cf]
Feb  9 19:41:00.821642 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb  9 19:41:00.821743 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb  9 19:41:00.821820 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb  9 19:41:00.821891 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb  9 19:41:00.821968 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb  9 19:41:00.822039 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb  9 19:41:00.822111 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb  9 19:41:00.822188 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb  9 19:41:00.822257 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb  9 19:41:00.822376 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb  9 19:41:00.825428 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb  9 19:41:00.825526 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb  9 19:41:00.825595 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  9 19:41:00.825689 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb  9 19:41:00.825762 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc0a0-0xc0bf]
Feb  9 19:41:00.825834 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb  9 19:41:00.825903 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb  9 19:41:00.825985 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb  9 19:41:00.826060 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Feb  9 19:41:00.826129 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb  9 19:41:00.826290 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb  9 19:41:00.826371 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb  9 19:41:00.826440 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Feb  9 19:41:00.826508 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb  9 19:41:00.826575 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb  9 19:41:00.826642 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb  9 19:41:00.826651 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  9 19:41:00.826662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  9 19:41:00.826669 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  9 19:41:00.826677 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  9 19:41:00.826693 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  9 19:41:00.826700 kernel: iommu: Default domain type: Translated 
Feb  9 19:41:00.826707 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb  9 19:41:00.826776 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  9 19:41:00.826858 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  9 19:41:00.826929 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  9 19:41:00.826942 kernel: vgaarb: loaded
Feb  9 19:41:00.826949 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  9 19:41:00.826956 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  9 19:41:00.826965 kernel: PTP clock support registered
Feb  9 19:41:00.826975 kernel: Registered efivars operations
Feb  9 19:41:00.827004 kernel: PCI: Using ACPI for IRQ routing
Feb  9 19:41:00.827011 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb  9 19:41:00.827019 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb  9 19:41:00.827026 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb  9 19:41:00.827035 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb  9 19:41:00.827045 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb  9 19:41:00.827052 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb  9 19:41:00.827059 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb  9 19:41:00.827066 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb  9 19:41:00.827073 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb  9 19:41:00.827080 kernel: clocksource: Switched to clocksource kvm-clock
Feb  9 19:41:00.827087 kernel: VFS: Disk quotas dquot_6.6.0
Feb  9 19:41:00.827094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  9 19:41:00.827102 kernel: pnp: PnP ACPI init
Feb  9 19:41:00.827184 kernel: pnp 00:02: [dma 2]
Feb  9 19:41:00.827196 kernel: pnp: PnP ACPI: found 6 devices
Feb  9 19:41:00.827203 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  9 19:41:00.827210 kernel: NET: Registered PF_INET protocol family
Feb  9 19:41:00.827218 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  9 19:41:00.827225 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb  9 19:41:00.827233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  9 19:41:00.827242 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb  9 19:41:00.827249 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb  9 19:41:00.827256 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb  9 19:41:00.827264 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 19:41:00.827283 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 19:41:00.827290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  9 19:41:00.827298 kernel: NET: Registered PF_XDP protocol family
Feb  9 19:41:00.827372 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb  9 19:41:00.827455 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb  9 19:41:00.827527 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  9 19:41:00.827591 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  9 19:41:00.827654 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  9 19:41:00.827724 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb  9 19:41:00.827843 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb  9 19:41:00.827920 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  9 19:41:00.828003 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  9 19:41:00.828080 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb  9 19:41:00.828089 kernel: PCI: CLS 0 bytes, default 64
Feb  9 19:41:00.828097 kernel: Initialise system trusted keyrings
Feb  9 19:41:00.828105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb  9 19:41:00.828112 kernel: Key type asymmetric registered
Feb  9 19:41:00.828119 kernel: Asymmetric key parser 'x509' registered
Feb  9 19:41:00.828127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  9 19:41:00.828134 kernel: io scheduler mq-deadline registered
Feb  9 19:41:00.828142 kernel: io scheduler kyber registered
Feb  9 19:41:00.828151 kernel: io scheduler bfq registered
Feb  9 19:41:00.828158 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb  9 19:41:00.828168 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  9 19:41:00.828178 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb  9 19:41:00.828188 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  9 19:41:00.828197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  9 19:41:00.828207 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  9 19:41:00.828217 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  9 19:41:00.828227 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  9 19:41:00.828237 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  9 19:41:00.828327 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb  9 19:41:00.828341 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb  9 19:41:00.828403 kernel: rtc_cmos 00:05: registered as rtc0
Feb  9 19:41:00.828468 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:41:00 UTC (1707507660)
Feb  9 19:41:00.828529 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb  9 19:41:00.828539 kernel: efifb: probing for efifb
Feb  9 19:41:00.828547 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb  9 19:41:00.828555 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb  9 19:41:00.828563 kernel: efifb: scrolling: redraw
Feb  9 19:41:00.828571 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb  9 19:41:00.828578 kernel: Console: switching to colour frame buffer device 160x50
Feb  9 19:41:00.828585 kernel: fb0: EFI VGA frame buffer device
Feb  9 19:41:00.828596 kernel: pstore: Registered efi as persistent store backend
Feb  9 19:41:00.828603 kernel: NET: Registered PF_INET6 protocol family
Feb  9 19:41:00.828611 kernel: Segment Routing with IPv6
Feb  9 19:41:00.828619 kernel: In-situ OAM (IOAM) with IPv6
Feb  9 19:41:00.828626 kernel: NET: Registered PF_PACKET protocol family
Feb  9 19:41:00.828633 kernel: Key type dns_resolver registered
Feb  9 19:41:00.828640 kernel: IPI shorthand broadcast: enabled
Feb  9 19:41:00.828649 kernel: sched_clock: Marking stable (366171166, 95472120)->(486212078, -24568792)
Feb  9 19:41:00.828656 kernel: registered taskstats version 1
Feb  9 19:41:00.828665 kernel: Loading compiled-in X.509 certificates
Feb  9 19:41:00.828673 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb  9 19:41:00.828690 kernel: Key type .fscrypt registered
Feb  9 19:41:00.828697 kernel: Key type fscrypt-provisioning registered
Feb  9 19:41:00.828706 kernel: pstore: Using crash dump compression: deflate
Feb  9 19:41:00.828714 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  9 19:41:00.828721 kernel: ima: Allocated hash algorithm: sha1
Feb  9 19:41:00.828729 kernel: ima: No architecture policies found
Feb  9 19:41:00.828736 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb  9 19:41:00.828745 kernel: Write protecting the kernel read-only data: 28672k
Feb  9 19:41:00.828752 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb  9 19:41:00.828760 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb  9 19:41:00.828767 kernel: Run /init as init process
Feb  9 19:41:00.828776 kernel:   with arguments:
Feb  9 19:41:00.828783 kernel:     /init
Feb  9 19:41:00.828790 kernel:   with environment:
Feb  9 19:41:00.828797 kernel:     HOME=/
Feb  9 19:41:00.828804 kernel:     TERM=linux
Feb  9 19:41:00.828812 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  9 19:41:00.828822 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 19:41:00.828833 systemd[1]: Detected virtualization kvm.
Feb  9 19:41:00.828841 systemd[1]: Detected architecture x86-64.
Feb  9 19:41:00.828848 systemd[1]: Running in initrd.
Feb  9 19:41:00.828856 systemd[1]: No hostname configured, using default hostname.
Feb  9 19:41:00.828863 systemd[1]: Hostname set to <localhost>.
Feb  9 19:41:00.828872 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 19:41:00.828879 systemd[1]: Queued start job for default target initrd.target.
Feb  9 19:41:00.828887 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 19:41:00.828895 systemd[1]: Reached target cryptsetup.target.
Feb  9 19:41:00.828902 systemd[1]: Reached target paths.target.
Feb  9 19:41:00.828910 systemd[1]: Reached target slices.target.
Feb  9 19:41:00.828917 systemd[1]: Reached target swap.target.
Feb  9 19:41:00.828924 systemd[1]: Reached target timers.target.
Feb  9 19:41:00.828934 systemd[1]: Listening on iscsid.socket.
Feb  9 19:41:00.828941 systemd[1]: Listening on iscsiuio.socket.
Feb  9 19:41:00.828949 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 19:41:00.828957 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 19:41:00.828965 systemd[1]: Listening on systemd-journald.socket.
Feb  9 19:41:00.828973 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 19:41:00.828981 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 19:41:00.828988 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 19:41:00.828996 systemd[1]: Reached target sockets.target.
Feb  9 19:41:00.829004 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 19:41:00.829012 systemd[1]: Finished network-cleanup.service.
Feb  9 19:41:00.829020 systemd[1]: Starting systemd-fsck-usr.service...
Feb  9 19:41:00.829027 systemd[1]: Starting systemd-journald.service...
Feb  9 19:41:00.829035 systemd[1]: Starting systemd-modules-load.service...
Feb  9 19:41:00.829043 systemd[1]: Starting systemd-resolved.service...
Feb  9 19:41:00.829050 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  9 19:41:00.829058 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 19:41:00.829066 systemd[1]: Finished systemd-fsck-usr.service.
Feb  9 19:41:00.829074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 19:41:00.829082 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  9 19:41:00.829090 kernel: audit: type=1130 audit(1707507660.821:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.829098 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  9 19:41:00.829105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 19:41:00.829121 systemd-journald[198]: Journal started
Feb  9 19:41:00.829169 systemd-journald[198]: Runtime Journal (/run/log/journal/02b436c199014b90b8c02ea92f816991) is 6.0M, max 48.4M, 42.4M free.
Feb  9 19:41:00.829205 kernel: audit: type=1130 audit(1707507660.828:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.825051 systemd-modules-load[199]: Inserted module 'overlay'
Feb  9 19:41:00.832315 systemd[1]: Started systemd-journald.service.
Feb  9 19:41:00.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.835365 kernel: audit: type=1130 audit(1707507660.832:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.841959 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  9 19:41:00.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.844092 systemd[1]: Starting dracut-cmdline.service...
Feb  9 19:41:00.845953 kernel: audit: type=1130 audit(1707507660.842:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.848294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  9 19:41:00.850192 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb  9 19:41:00.850923 kernel: Bridge firewalling registered
Feb  9 19:41:00.853139 dracut-cmdline[215]: dracut-dracut-053
Feb  9 19:41:00.855181 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 19:41:00.865295 kernel: SCSI subsystem initialized
Feb  9 19:41:00.865980 systemd-resolved[200]: Positive Trust Anchors:
Feb  9 19:41:00.866006 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 19:41:00.866051 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 19:41:00.870311 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb  9 19:41:00.871397 systemd[1]: Started systemd-resolved.service.
Feb  9 19:41:00.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.872899 systemd[1]: Reached target nss-lookup.target.
Feb  9 19:41:00.875882 kernel: audit: type=1130 audit(1707507660.872:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.878924 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  9 19:41:00.878960 kernel: device-mapper: uevent: version 1.0.3
Feb  9 19:41:00.878970 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  9 19:41:00.881602 systemd-modules-load[199]: Inserted module 'dm_multipath'
Feb  9 19:41:00.882431 systemd[1]: Finished systemd-modules-load.service.
Feb  9 19:41:00.886554 kernel: audit: type=1130 audit(1707507660.882:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.883221 systemd[1]: Starting systemd-sysctl.service...
Feb  9 19:41:00.894860 systemd[1]: Finished systemd-sysctl.service.
Feb  9 19:41:00.898984 kernel: audit: type=1130 audit(1707507660.894:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:00.934300 kernel: Loading iSCSI transport class v2.0-870.
Feb  9 19:41:00.949298 kernel: iscsi: registered transport (tcp)
Feb  9 19:41:00.978302 kernel: iscsi: registered transport (qla4xxx)
Feb  9 19:41:00.978384 kernel: QLogic iSCSI HBA Driver
Feb  9 19:41:01.011047 systemd[1]: Finished dracut-cmdline.service.
Feb  9 19:41:01.015507 kernel: audit: type=1130 audit(1707507661.010:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.012233 systemd[1]: Starting dracut-pre-udev.service...
Feb  9 19:41:01.062303 kernel: raid6: avx2x4   gen() 29900 MB/s
Feb  9 19:41:01.079298 kernel: raid6: avx2x4   xor()  7501 MB/s
Feb  9 19:41:01.096295 kernel: raid6: avx2x2   gen() 32239 MB/s
Feb  9 19:41:01.113297 kernel: raid6: avx2x2   xor() 19288 MB/s
Feb  9 19:41:01.130297 kernel: raid6: avx2x1   gen() 26545 MB/s
Feb  9 19:41:01.147297 kernel: raid6: avx2x1   xor() 15364 MB/s
Feb  9 19:41:01.164297 kernel: raid6: sse2x4   gen() 14799 MB/s
Feb  9 19:41:01.181310 kernel: raid6: sse2x4   xor()  7037 MB/s
Feb  9 19:41:01.198303 kernel: raid6: sse2x2   gen() 16067 MB/s
Feb  9 19:41:01.215298 kernel: raid6: sse2x2   xor()  9799 MB/s
Feb  9 19:41:01.232299 kernel: raid6: sse2x1   gen() 11957 MB/s
Feb  9 19:41:01.249742 kernel: raid6: sse2x1   xor()  7775 MB/s
Feb  9 19:41:01.249795 kernel: raid6: using algorithm avx2x2 gen() 32239 MB/s
Feb  9 19:41:01.249814 kernel: raid6: .... xor() 19288 MB/s, rmw enabled
Feb  9 19:41:01.249823 kernel: raid6: using avx2x2 recovery algorithm
Feb  9 19:41:01.261301 kernel: xor: automatically using best checksumming function   avx       
Feb  9 19:41:01.348304 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb  9 19:41:01.356746 systemd[1]: Finished dracut-pre-udev.service.
Feb  9 19:41:01.359849 kernel: audit: type=1130 audit(1707507661.356:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.359000 audit: BPF prog-id=7 op=LOAD
Feb  9 19:41:01.359000 audit: BPF prog-id=8 op=LOAD
Feb  9 19:41:01.360244 systemd[1]: Starting systemd-udevd.service...
Feb  9 19:41:01.371572 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb  9 19:41:01.381152 systemd[1]: Started systemd-udevd.service.
Feb  9 19:41:01.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.382241 systemd[1]: Starting dracut-pre-trigger.service...
Feb  9 19:41:01.393455 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Feb  9 19:41:01.417138 systemd[1]: Finished dracut-pre-trigger.service.
Feb  9 19:41:01.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.418984 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 19:41:01.452297 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 19:41:01.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:01.481453 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb  9 19:41:01.485477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb  9 19:41:01.485503 kernel: GPT:9289727 != 19775487
Feb  9 19:41:01.485516 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb  9 19:41:01.485533 kernel: GPT:9289727 != 19775487
Feb  9 19:41:01.485545 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb  9 19:41:01.485928 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 19:41:01.490304 kernel: cryptd: max_cpu_qlen set to 1000
Feb  9 19:41:01.495294 kernel: libata version 3.00 loaded.
Feb  9 19:41:01.499431 kernel: ata_piix 0000:00:01.1: version 2.13
Feb  9 19:41:01.501704 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  9 19:41:01.501724 kernel: AES CTR mode by8 optimization enabled
Feb  9 19:41:01.502661 kernel: scsi host0: ata_piix
Feb  9 19:41:01.502946 kernel: scsi host1: ata_piix
Feb  9 19:41:01.503689 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb  9 19:41:01.504292 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb  9 19:41:01.531303 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Feb  9 19:41:01.539061 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  9 19:41:01.544601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  9 19:41:01.551481 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  9 19:41:01.553183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  9 19:41:01.557465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 19:41:01.558468 systemd[1]: Starting disk-uuid.service...
Feb  9 19:41:01.577305 disk-uuid[519]: Primary Header is updated.
Feb  9 19:41:01.577305 disk-uuid[519]: Secondary Entries is updated.
Feb  9 19:41:01.577305 disk-uuid[519]: Secondary Header is updated.
Feb  9 19:41:01.581347 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 19:41:01.584298 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 19:41:01.587306 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 19:41:01.665111 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  9 19:41:01.665167 kernel: scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  9 19:41:01.697304 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  9 19:41:01.697480 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  9 19:41:01.714299 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb  9 19:41:02.585361 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 19:41:02.585414 disk-uuid[520]: The operation has completed successfully.
Feb  9 19:41:02.605868 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  9 19:41:02.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.605945 systemd[1]: Finished disk-uuid.service.
Feb  9 19:41:02.612605 systemd[1]: Starting verity-setup.service...
Feb  9 19:41:02.624296 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb  9 19:41:02.642564 systemd[1]: Found device dev-mapper-usr.device.
Feb  9 19:41:02.645815 systemd[1]: Mounting sysusr-usr.mount...
Feb  9 19:41:02.647609 systemd[1]: Finished verity-setup.service.
Feb  9 19:41:02.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.704111 systemd[1]: Mounted sysusr-usr.mount.
Feb  9 19:41:02.705180 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  9 19:41:02.704352 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  9 19:41:02.705199 systemd[1]: Starting ignition-setup.service...
Feb  9 19:41:02.706681 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  9 19:41:02.712681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 19:41:02.712709 kernel: BTRFS info (device vda6): using free space tree
Feb  9 19:41:02.712723 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 19:41:02.720068 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  9 19:41:02.727234 systemd[1]: Finished ignition-setup.service.
Feb  9 19:41:02.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.729290 systemd[1]: Starting ignition-fetch-offline.service...
Feb  9 19:41:02.765537 ignition[630]: Ignition 2.14.0
Feb  9 19:41:02.765561 ignition[630]: Stage: fetch-offline
Feb  9 19:41:02.765622 ignition[630]: no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:02.765632 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:02.765800 ignition[630]: parsed url from cmdline: ""
Feb  9 19:41:02.765804 ignition[630]: no config URL provided
Feb  9 19:41:02.765810 ignition[630]: reading system config file "/usr/lib/ignition/user.ign"
Feb  9 19:41:02.765818 ignition[630]: no config at "/usr/lib/ignition/user.ign"
Feb  9 19:41:02.765841 ignition[630]: op(1): [started]  loading QEMU firmware config module
Feb  9 19:41:02.765846 ignition[630]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb  9 19:41:02.769286 ignition[630]: op(1): [finished] loading QEMU firmware config module
Feb  9 19:41:02.782652 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  9 19:41:02.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.783000 audit: BPF prog-id=9 op=LOAD
Feb  9 19:41:02.784714 systemd[1]: Starting systemd-networkd.service...
Feb  9 19:41:02.831282 ignition[630]: parsing config with SHA512: 247d9eb2638cd8433ef4cde88fd6fecf8612957bdb6a35390d35cc52d66ac0c4b8ec00f51493e69948bb85b0023c14d6705fcc012d8ed5398716f6d4c20089c1
Feb  9 19:41:02.855033 systemd-networkd[714]: lo: Link UP
Feb  9 19:41:02.855044 systemd-networkd[714]: lo: Gained carrier
Feb  9 19:41:02.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.855541 systemd-networkd[714]: Enumeration completed
Feb  9 19:41:02.855679 systemd[1]: Started systemd-networkd.service.
Feb  9 19:41:02.855775 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 19:41:02.856687 systemd-networkd[714]: eth0: Link UP
Feb  9 19:41:02.856691 systemd-networkd[714]: eth0: Gained carrier
Feb  9 19:41:02.857100 systemd[1]: Reached target network.target.
Feb  9 19:41:02.857983 systemd[1]: Starting iscsiuio.service...
Feb  9 19:41:02.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.863024 systemd[1]: Started iscsiuio.service.
Feb  9 19:41:02.865319 systemd[1]: Starting iscsid.service...
Feb  9 19:41:02.868088 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 19:41:02.868088 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  9 19:41:02.868088 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  9 19:41:02.868088 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  9 19:41:02.868088 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 19:41:02.868088 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  9 19:41:02.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.869727 ignition[630]: fetch-offline: fetch-offline passed
Feb  9 19:41:02.869081 unknown[630]: fetched base config from "system"
Feb  9 19:41:02.869799 ignition[630]: Ignition finished successfully
Feb  9 19:41:02.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.869089 unknown[630]: fetched user config from "qemu"
Feb  9 19:41:02.885421 ignition[726]: Ignition 2.14.0
Feb  9 19:41:02.869154 systemd[1]: Started iscsid.service.
Feb  9 19:41:02.885426 ignition[726]: Stage: kargs
Feb  9 19:41:02.870512 systemd[1]: Starting dracut-initqueue.service...
Feb  9 19:41:02.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.885505 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:02.874924 systemd[1]: Finished ignition-fetch-offline.service.
Feb  9 19:41:02.885513 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:02.876388 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 19:41:02.886585 ignition[726]: kargs: kargs passed
Feb  9 19:41:02.877151 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb  9 19:41:02.886621 ignition[726]: Ignition finished successfully
Feb  9 19:41:02.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.877786 systemd[1]: Starting ignition-kargs.service...
Feb  9 19:41:02.896483 ignition[736]: Ignition 2.14.0
Feb  9 19:41:02.881763 systemd[1]: Finished dracut-initqueue.service.
Feb  9 19:41:02.896490 ignition[736]: Stage: disks
Feb  9 19:41:02.882688 systemd[1]: Reached target remote-fs-pre.target.
Feb  9 19:41:02.896571 ignition[736]: no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:02.883424 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 19:41:02.896580 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:02.884187 systemd[1]: Reached target remote-fs.target.
Feb  9 19:41:02.897665 ignition[736]: disks: disks passed
Feb  9 19:41:02.885702 systemd[1]: Starting dracut-pre-mount.service...
Feb  9 19:41:02.897700 ignition[736]: Ignition finished successfully
Feb  9 19:41:02.887602 systemd[1]: Finished ignition-kargs.service.
Feb  9 19:41:02.889461 systemd[1]: Starting ignition-disks.service...
Feb  9 19:41:02.893323 systemd[1]: Finished dracut-pre-mount.service.
Feb  9 19:41:02.898350 systemd[1]: Finished ignition-disks.service.
Feb  9 19:41:02.899735 systemd[1]: Reached target initrd-root-device.target.
Feb  9 19:41:02.900819 systemd[1]: Reached target local-fs-pre.target.
Feb  9 19:41:02.901386 systemd[1]: Reached target local-fs.target.
Feb  9 19:41:02.901916 systemd[1]: Reached target sysinit.target.
Feb  9 19:41:02.915735 systemd-fsck[749]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb  9 19:41:02.902983 systemd[1]: Reached target basic.target.
Feb  9 19:41:02.904127 systemd[1]: Starting systemd-fsck-root.service...
Feb  9 19:41:02.920465 systemd[1]: Finished systemd-fsck-root.service.
Feb  9 19:41:02.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.924184 systemd[1]: Mounting sysroot.mount...
Feb  9 19:41:02.930288 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  9 19:41:02.930449 systemd[1]: Mounted sysroot.mount.
Feb  9 19:41:02.931449 systemd[1]: Reached target initrd-root-fs.target.
Feb  9 19:41:02.933296 systemd[1]: Mounting sysroot-usr.mount...
Feb  9 19:41:02.934549 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb  9 19:41:02.934582 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  9 19:41:02.935541 systemd[1]: Reached target ignition-diskful.target.
Feb  9 19:41:02.938776 systemd[1]: Mounted sysroot-usr.mount.
Feb  9 19:41:02.940434 systemd[1]: Starting initrd-setup-root.service...
Feb  9 19:41:02.944307 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory
Feb  9 19:41:02.948550 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory
Feb  9 19:41:02.952453 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory
Feb  9 19:41:02.955608 initrd-setup-root[783]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  9 19:41:02.982266 systemd[1]: Finished initrd-setup-root.service.
Feb  9 19:41:02.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:02.984124 systemd[1]: Starting ignition-mount.service...
Feb  9 19:41:02.985705 systemd[1]: Starting sysroot-boot.service...
Feb  9 19:41:02.990295 bash[800]: umount: /sysroot/usr/share/oem: not mounted.
Feb  9 19:41:02.999500 ignition[802]: INFO     : Ignition 2.14.0
Feb  9 19:41:02.999500 ignition[802]: INFO     : Stage: mount
Feb  9 19:41:03.000672 ignition[802]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:03.000672 ignition[802]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:03.002095 ignition[802]: INFO     : mount: mount passed
Feb  9 19:41:03.002095 ignition[802]: INFO     : Ignition finished successfully
Feb  9 19:41:03.003526 systemd[1]: Finished ignition-mount.service.
Feb  9 19:41:03.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:03.005363 systemd[1]: Finished sysroot-boot.service.
Feb  9 19:41:03.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:03.654271 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  9 19:41:03.661331 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Feb  9 19:41:03.661404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 19:41:03.661417 kernel: BTRFS info (device vda6): using free space tree
Feb  9 19:41:03.662450 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 19:41:03.665090 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  9 19:41:03.666747 systemd[1]: Starting ignition-files.service...
Feb  9 19:41:03.679586 ignition[830]: INFO     : Ignition 2.14.0
Feb  9 19:41:03.679586 ignition[830]: INFO     : Stage: files
Feb  9 19:41:03.681224 ignition[830]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:03.681224 ignition[830]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:03.683648 ignition[830]: DEBUG    : files: compiled without relabeling support, skipping
Feb  9 19:41:03.683648 ignition[830]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  9 19:41:03.683648 ignition[830]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  9 19:41:03.687355 ignition[830]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  9 19:41:03.688749 ignition[830]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  9 19:41:03.690332 unknown[830]: wrote ssh authorized keys file for user: core
Feb  9 19:41:03.691361 ignition[830]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  9 19:41:03.693406 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  9 19:41:03.695264 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb  9 19:41:03.731166 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb  9 19:41:03.825339 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  9 19:41:03.825339 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb  9 19:41:03.828210 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb  9 19:41:04.224386 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  9 19:41:04.245463 systemd-networkd[714]: eth0: Gained IPv6LL
Feb  9 19:41:04.383302 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb  9 19:41:04.385405 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb  9 19:41:04.385405 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb  9 19:41:04.385405 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb  9 19:41:04.657127 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  9 19:41:04.859648 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb  9 19:41:04.861887 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb  9 19:41:04.861887 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  9 19:41:04.861887 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  9 19:41:04.861887 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubectl"
Feb  9 19:41:04.861887 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb  9 19:41:05.307556 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  9 19:41:13.038919 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb  9 19:41:13.041218 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb  9 19:41:13.041218 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  9 19:41:13.041218 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb  9 19:41:13.202862 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb  9 19:41:28.946498 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb  9 19:41:28.949189 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  9 19:41:28.949189 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  9 19:41:28.949189 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb  9 19:41:29.183830 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb  9 19:41:36.333066 ignition[830]: DEBUG    : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb  9 19:41:36.335475 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  9 19:41:36.335475 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  9 19:41:36.335475 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb  9 19:41:36.804317 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb  9 19:41:36.875097 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  9 19:41:36.875097 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/home/core/install.sh"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(10): [started]  processing unit "prepare-critools.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(10): op(11): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(10): [finished] processing unit "prepare-critools.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(12): [started]  processing unit "prepare-helm.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(12): op(13): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(12): [finished] processing unit "prepare-helm.service"
Feb  9 19:41:36.879168 ignition[830]: INFO     : files: op(14): [started]  processing unit "coreos-metadata.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(14): op(15): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(14): [finished] processing unit "coreos-metadata.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(16): [started]  processing unit "prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(16): op(17): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(18): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(19): [started]  setting preset to enabled for "prepare-critools.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(1a): [started]  setting preset to enabled for "prepare-helm.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(1b): [started]  setting preset to disabled for "coreos-metadata.service"
Feb  9 19:41:36.902985 ignition[830]: INFO     : files: op(1b): op(1c): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 19:41:36.928999 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb  9 19:41:36.929021 kernel: audit: type=1130 audit(1707507696.922:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.920736 systemd[1]: Finished ignition-files.service.
Feb  9 19:41:36.933475 kernel: audit: type=1130 audit(1707507696.930:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.933491 kernel: audit: type=1130 audit(1707507696.933:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.933545 ignition[830]: INFO     : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 19:41:36.933545 ignition[830]: INFO     : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Feb  9 19:41:36.933545 ignition[830]: INFO     : files: createResultFile: createFiles: op(1d): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  9 19:41:36.933545 ignition[830]: INFO     : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  9 19:41:36.933545 ignition[830]: INFO     : files: files passed
Feb  9 19:41:36.933545 ignition[830]: INFO     : Ignition finished successfully
Feb  9 19:41:36.944667 kernel: audit: type=1131 audit(1707507696.933:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.923340 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  9 19:41:36.927705 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  9 19:41:36.946839 initrd-setup-root-after-ignition[855]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb  9 19:41:36.928364 systemd[1]: Starting ignition-quench.service...
Feb  9 19:41:36.948642 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  9 19:41:36.929823 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  9 19:41:36.930830 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  9 19:41:36.930899 systemd[1]: Finished ignition-quench.service.
Feb  9 19:41:36.933570 systemd[1]: Reached target ignition-complete.target.
Feb  9 19:41:36.940223 systemd[1]: Starting initrd-parse-etc.service...
Feb  9 19:41:36.952642 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  9 19:41:36.960487 kernel: audit: type=1130 audit(1707507696.953:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.961287 kernel: audit: type=1131 audit(1707507696.953:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.952732 systemd[1]: Finished initrd-parse-etc.service.
Feb  9 19:41:36.954245 systemd[1]: Reached target initrd-fs.target.
Feb  9 19:41:36.960445 systemd[1]: Reached target initrd.target.
Feb  9 19:41:36.961335 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  9 19:41:36.962142 systemd[1]: Starting dracut-pre-pivot.service...
Feb  9 19:41:36.975067 systemd[1]: Finished dracut-pre-pivot.service.
Feb  9 19:41:36.979084 kernel: audit: type=1130 audit(1707507696.975:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.976770 systemd[1]: Starting initrd-cleanup.service...
Feb  9 19:41:36.985500 systemd[1]: Stopped target network.target.
Feb  9 19:41:36.986354 systemd[1]: Stopped target nss-lookup.target.
Feb  9 19:41:36.987870 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  9 19:41:36.989445 systemd[1]: Stopped target timers.target.
Feb  9 19:41:36.990925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  9 19:41:36.995975 kernel: audit: type=1131 audit(1707507696.992:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:36.991020 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  9 19:41:36.992429 systemd[1]: Stopped target initrd.target.
Feb  9 19:41:36.996032 systemd[1]: Stopped target basic.target.
Feb  9 19:41:36.996798 systemd[1]: Stopped target ignition-complete.target.
Feb  9 19:41:36.998323 systemd[1]: Stopped target ignition-diskful.target.
Feb  9 19:41:36.999837 systemd[1]: Stopped target initrd-root-device.target.
Feb  9 19:41:37.001330 systemd[1]: Stopped target remote-fs.target.
Feb  9 19:41:37.002787 systemd[1]: Stopped target remote-fs-pre.target.
Feb  9 19:41:37.004371 systemd[1]: Stopped target sysinit.target.
Feb  9 19:41:37.005623 systemd[1]: Stopped target local-fs.target.
Feb  9 19:41:37.007028 systemd[1]: Stopped target local-fs-pre.target.
Feb  9 19:41:37.015120 kernel: audit: type=1131 audit(1707507697.011:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.008654 systemd[1]: Stopped target swap.target.
Feb  9 19:41:37.009977 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  9 19:41:37.020008 kernel: audit: type=1131 audit(1707507697.016:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.010064 systemd[1]: Stopped dracut-pre-mount.service.
Feb  9 19:41:37.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.011447 systemd[1]: Stopped target cryptsetup.target.
Feb  9 19:41:37.015151 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  9 19:41:37.015231 systemd[1]: Stopped dracut-initqueue.service.
Feb  9 19:41:37.016769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  9 19:41:37.016882 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  9 19:41:37.020152 systemd[1]: Stopped target paths.target.
Feb  9 19:41:37.021267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  9 19:41:37.025332 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  9 19:41:37.026494 systemd[1]: Stopped target slices.target.
Feb  9 19:41:37.027683 systemd[1]: Stopped target sockets.target.
Feb  9 19:41:37.028754 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  9 19:41:37.028844 systemd[1]: Closed iscsid.socket.
Feb  9 19:41:37.029782 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  9 19:41:37.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.029870 systemd[1]: Closed iscsiuio.socket.
Feb  9 19:41:37.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.030963 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  9 19:41:37.031070 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  9 19:41:37.032187 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  9 19:41:37.032269 systemd[1]: Stopped ignition-files.service.
Feb  9 19:41:37.034114 systemd[1]: Stopping ignition-mount.service...
Feb  9 19:41:37.035649 systemd[1]: Stopping sysroot-boot.service...
Feb  9 19:41:37.036471 systemd[1]: Stopping systemd-networkd.service...
Feb  9 19:41:37.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.043207 ignition[872]: INFO     : Ignition 2.14.0
Feb  9 19:41:37.043207 ignition[872]: INFO     : Stage: umount
Feb  9 19:41:37.043207 ignition[872]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 19:41:37.043207 ignition[872]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 19:41:37.043207 ignition[872]: INFO     : umount: umount passed
Feb  9 19:41:37.043207 ignition[872]: INFO     : Ignition finished successfully
Feb  9 19:41:37.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.038262 systemd[1]: Stopping systemd-resolved.service...
Feb  9 19:41:37.039097 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  9 19:41:37.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.039248 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  9 19:41:37.040412 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  9 19:41:37.040519 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  9 19:41:37.044339 systemd-networkd[714]: eth0: DHCPv6 lease lost
Feb  9 19:41:37.045672 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  9 19:41:37.045781 systemd[1]: Stopped systemd-resolved.service.
Feb  9 19:41:37.048674 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  9 19:41:37.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.048777 systemd[1]: Stopped systemd-networkd.service.
Feb  9 19:41:37.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.057000 audit: BPF prog-id=6 op=UNLOAD
Feb  9 19:41:37.057000 audit: BPF prog-id=9 op=UNLOAD
Feb  9 19:41:37.053405 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  9 19:41:37.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.054328 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  9 19:41:37.054481 systemd[1]: Stopped ignition-mount.service.
Feb  9 19:41:37.055867 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  9 19:41:37.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.055931 systemd[1]: Stopped sysroot-boot.service.
Feb  9 19:41:37.057463 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  9 19:41:37.057573 systemd[1]: Closed systemd-networkd.socket.
Feb  9 19:41:37.058290 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  9 19:41:37.058353 systemd[1]: Stopped ignition-disks.service.
Feb  9 19:41:37.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.058483 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  9 19:41:37.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.058536 systemd[1]: Stopped ignition-kargs.service.
Feb  9 19:41:37.058820 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  9 19:41:37.058857 systemd[1]: Stopped ignition-setup.service.
Feb  9 19:41:37.059080 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  9 19:41:37.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.059109 systemd[1]: Stopped initrd-setup-root.service.
Feb  9 19:41:37.060248 systemd[1]: Stopping network-cleanup.service...
Feb  9 19:41:37.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.060527 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  9 19:41:37.060562 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  9 19:41:37.060689 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 19:41:37.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.060719 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 19:41:37.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.062325 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  9 19:41:37.062365 systemd[1]: Stopped systemd-modules-load.service.
Feb  9 19:41:37.063550 systemd[1]: Stopping systemd-udevd.service...
Feb  9 19:41:37.065240 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  9 19:41:37.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:37.065766 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  9 19:41:37.065851 systemd[1]: Finished initrd-cleanup.service.
Feb  9 19:41:37.069607 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  9 19:41:37.069672 systemd[1]: Stopped network-cleanup.service.
Feb  9 19:41:37.071101 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  9 19:41:37.071297 systemd[1]: Stopped systemd-udevd.service.
Feb  9 19:41:37.073139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  9 19:41:37.073183 systemd[1]: Closed systemd-udevd-control.socket.
Feb  9 19:41:37.074175 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  9 19:41:37.074212 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  9 19:41:37.075350 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  9 19:41:37.075398 systemd[1]: Stopped dracut-pre-udev.service.
Feb  9 19:41:37.076538 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  9 19:41:37.076582 systemd[1]: Stopped dracut-cmdline.service.
Feb  9 19:41:37.077701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  9 19:41:37.077746 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  9 19:41:37.079724 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  9 19:41:37.080688 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb  9 19:41:37.080736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb  9 19:41:37.082715 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  9 19:41:37.082760 systemd[1]: Stopped kmod-static-nodes.service.
Feb  9 19:41:37.083415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  9 19:41:37.083445 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  9 19:41:37.085421 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb  9 19:41:37.085890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  9 19:41:37.085978 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  9 19:41:37.087062 systemd[1]: Reached target initrd-switch-root.target.
Feb  9 19:41:37.089060 systemd[1]: Starting initrd-switch-root.service...
Feb  9 19:41:37.104694 systemd[1]: Switching root.
Feb  9 19:41:37.121953 iscsid[720]: iscsid shutting down.
Feb  9 19:41:37.122439 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Feb  9 19:41:37.122467 systemd-journald[198]: Journal stopped
Feb  9 19:41:41.273398 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  9 19:41:41.273447 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  9 19:41:41.273460 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  9 19:41:41.273470 kernel: SELinux:  policy capability network_peer_controls=1
Feb  9 19:41:41.273482 kernel: SELinux:  policy capability open_perms=1
Feb  9 19:41:41.273495 kernel: SELinux:  policy capability extended_socket_class=1
Feb  9 19:41:41.273508 kernel: SELinux:  policy capability always_check_network=0
Feb  9 19:41:41.273521 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  9 19:41:41.273536 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  9 19:41:41.273550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  9 19:41:41.273560 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  9 19:41:41.273571 systemd[1]: Successfully loaded SELinux policy in 34.650ms.
Feb  9 19:41:41.273589 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.210ms.
Feb  9 19:41:41.273602 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 19:41:41.273614 systemd[1]: Detected virtualization kvm.
Feb  9 19:41:41.273624 systemd[1]: Detected architecture x86-64.
Feb  9 19:41:41.273634 systemd[1]: Detected first boot.
Feb  9 19:41:41.273644 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 19:41:41.273654 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  9 19:41:41.273664 systemd[1]: Populated /etc with preset unit settings.
Feb  9 19:41:41.273677 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 19:41:41.273688 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 19:41:41.273700 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 19:41:41.273710 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb  9 19:41:41.273720 systemd[1]: Stopped iscsiuio.service.
Feb  9 19:41:41.273732 systemd[1]: iscsid.service: Deactivated successfully.
Feb  9 19:41:41.273748 systemd[1]: Stopped iscsid.service.
Feb  9 19:41:41.273758 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb  9 19:41:41.273770 systemd[1]: Stopped initrd-switch-root.service.
Feb  9 19:41:41.273781 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb  9 19:41:41.273794 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  9 19:41:41.273808 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  9 19:41:41.273822 systemd[1]: Created slice system-getty.slice.
Feb  9 19:41:41.273837 systemd[1]: Created slice system-modprobe.slice.
Feb  9 19:41:41.273851 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  9 19:41:41.273862 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  9 19:41:41.273872 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  9 19:41:41.273884 systemd[1]: Created slice user.slice.
Feb  9 19:41:41.273894 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 19:41:41.273904 systemd[1]: Started systemd-ask-password-wall.path.
Feb  9 19:41:41.273914 systemd[1]: Set up automount boot.automount.
Feb  9 19:41:41.273924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  9 19:41:41.273934 systemd[1]: Stopped target initrd-switch-root.target.
Feb  9 19:41:41.273944 systemd[1]: Stopped target initrd-fs.target.
Feb  9 19:41:41.273954 systemd[1]: Stopped target initrd-root-fs.target.
Feb  9 19:41:41.273965 systemd[1]: Reached target integritysetup.target.
Feb  9 19:41:41.273975 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 19:41:41.273986 systemd[1]: Reached target remote-fs.target.
Feb  9 19:41:41.273997 systemd[1]: Reached target slices.target.
Feb  9 19:41:41.274010 systemd[1]: Reached target swap.target.
Feb  9 19:41:41.274024 systemd[1]: Reached target torcx.target.
Feb  9 19:41:41.274038 systemd[1]: Reached target veritysetup.target.
Feb  9 19:41:41.274051 systemd[1]: Listening on systemd-coredump.socket.
Feb  9 19:41:41.274061 systemd[1]: Listening on systemd-initctl.socket.
Feb  9 19:41:41.274073 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 19:41:41.274083 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 19:41:41.274094 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 19:41:41.274104 systemd[1]: Listening on systemd-userdbd.socket.
Feb  9 19:41:41.274114 systemd[1]: Mounting dev-hugepages.mount...
Feb  9 19:41:41.274124 systemd[1]: Mounting dev-mqueue.mount...
Feb  9 19:41:41.274136 systemd[1]: Mounting media.mount...
Feb  9 19:41:41.274151 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 19:41:41.274165 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  9 19:41:41.274182 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  9 19:41:41.274197 systemd[1]: Mounting tmp.mount...
Feb  9 19:41:41.274211 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  9 19:41:41.274224 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  9 19:41:41.274237 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 19:41:41.274247 systemd[1]: Starting modprobe@configfs.service...
Feb  9 19:41:41.274257 systemd[1]: Starting modprobe@dm_mod.service...
Feb  9 19:41:41.274268 systemd[1]: Starting modprobe@drm.service...
Feb  9 19:41:41.274290 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  9 19:41:41.274302 systemd[1]: Starting modprobe@fuse.service...
Feb  9 19:41:41.274312 systemd[1]: Starting modprobe@loop.service...
Feb  9 19:41:41.274322 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  9 19:41:41.274337 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb  9 19:41:41.274348 systemd[1]: Stopped systemd-fsck-root.service.
Feb  9 19:41:41.274360 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb  9 19:41:41.274370 systemd[1]: Stopped systemd-fsck-usr.service.
Feb  9 19:41:41.274379 kernel: loop: module loaded
Feb  9 19:41:41.274389 systemd[1]: Stopped systemd-journald.service.
Feb  9 19:41:41.274399 systemd[1]: Starting systemd-journald.service...
Feb  9 19:41:41.274409 kernel: fuse: init (API version 7.34)
Feb  9 19:41:41.274418 systemd[1]: Starting systemd-modules-load.service...
Feb  9 19:41:41.274428 systemd[1]: Starting systemd-network-generator.service...
Feb  9 19:41:41.274438 systemd[1]: Starting systemd-remount-fs.service...
Feb  9 19:41:41.274450 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 19:41:41.274460 systemd[1]: verity-setup.service: Deactivated successfully.
Feb  9 19:41:41.274471 systemd[1]: Stopped verity-setup.service.
Feb  9 19:41:41.274481 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 19:41:41.274491 systemd[1]: Mounted dev-hugepages.mount.
Feb  9 19:41:41.274501 systemd[1]: Mounted dev-mqueue.mount.
Feb  9 19:41:41.274512 systemd[1]: Mounted media.mount.
Feb  9 19:41:41.274522 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  9 19:41:41.274533 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  9 19:41:41.274549 systemd[1]: Mounted tmp.mount.
Feb  9 19:41:41.274564 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  9 19:41:41.274579 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 19:41:41.274595 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  9 19:41:41.274605 systemd[1]: Finished modprobe@configfs.service.
Feb  9 19:41:41.274619 systemd-journald[982]: Journal started
Feb  9 19:41:41.274658 systemd-journald[982]: Runtime Journal (/run/log/journal/02b436c199014b90b8c02ea92f816991) is 6.0M, max 48.4M, 42.4M free.
Feb  9 19:41:37.177000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb  9 19:41:38.198000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  9 19:41:38.199000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  9 19:41:38.199000 audit: BPF prog-id=10 op=LOAD
Feb  9 19:41:38.199000 audit: BPF prog-id=10 op=UNLOAD
Feb  9 19:41:38.199000 audit: BPF prog-id=11 op=LOAD
Feb  9 19:41:38.199000 audit: BPF prog-id=11 op=UNLOAD
Feb  9 19:41:38.228000 audit[905]: AVC avc:  denied  { associate } for  pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb  9 19:41:38.228000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 19:41:38.228000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb  9 19:41:38.229000 audit[905]: AVC avc:  denied  { associate } for  pid=905 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb  9 19:41:38.229000 audit[905]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079b9 a2=1ed a3=0 items=2 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 19:41:38.229000 audit: CWD cwd="/"
Feb  9 19:41:38.229000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:38.229000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:38.229000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb  9 19:41:41.145000 audit: BPF prog-id=12 op=LOAD
Feb  9 19:41:41.145000 audit: BPF prog-id=3 op=UNLOAD
Feb  9 19:41:41.145000 audit: BPF prog-id=13 op=LOAD
Feb  9 19:41:41.145000 audit: BPF prog-id=14 op=LOAD
Feb  9 19:41:41.145000 audit: BPF prog-id=4 op=UNLOAD
Feb  9 19:41:41.145000 audit: BPF prog-id=5 op=UNLOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=15 op=LOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=12 op=UNLOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=16 op=LOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=17 op=LOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=13 op=UNLOAD
Feb  9 19:41:41.146000 audit: BPF prog-id=14 op=UNLOAD
Feb  9 19:41:41.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.157000 audit: BPF prog-id=15 op=UNLOAD
Feb  9 19:41:41.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.276724 systemd[1]: Started systemd-journald.service.
Feb  9 19:41:41.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.245000 audit: BPF prog-id=18 op=LOAD
Feb  9 19:41:41.246000 audit: BPF prog-id=19 op=LOAD
Feb  9 19:41:41.246000 audit: BPF prog-id=20 op=LOAD
Feb  9 19:41:41.246000 audit: BPF prog-id=16 op=UNLOAD
Feb  9 19:41:41.246000 audit: BPF prog-id=17 op=UNLOAD
Feb  9 19:41:41.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.271000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  9 19:41:41.271000 audit[982]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc6476a540 a2=4000 a3=7ffc6476a5dc items=0 ppid=1 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 19:41:41.271000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  9 19:41:41.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.227440 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 19:41:41.144128 systemd[1]: Queued start job for default target multi-user.target.
Feb  9 19:41:38.227631 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb  9 19:41:41.144139 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb  9 19:41:41.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.227646 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb  9 19:41:41.147597 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb  9 19:41:38.227672 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb  9 19:41:41.276892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  9 19:41:41.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.227681 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb  9 19:41:41.277042 systemd[1]: Finished modprobe@dm_mod.service.
Feb  9 19:41:38.227706 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb  9 19:41:41.277847 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  9 19:41:38.227716 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb  9 19:41:41.277986 systemd[1]: Finished modprobe@drm.service.
Feb  9 19:41:41.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.227901 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb  9 19:41:41.278771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  9 19:41:38.227931 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb  9 19:41:41.278915 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  9 19:41:38.227941 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb  9 19:41:41.279780 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  9 19:41:41.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.228205 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb  9 19:41:41.279934 systemd[1]: Finished modprobe@fuse.service.
Feb  9 19:41:38.228233 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb  9 19:41:41.280775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  9 19:41:41.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:38.228248 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb  9 19:41:41.280927 systemd[1]: Finished modprobe@loop.service.
Feb  9 19:41:38.228260 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb  9 19:41:41.281846 systemd[1]: Finished systemd-modules-load.service.
Feb  9 19:41:38.228285 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb  9 19:41:41.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.282825 systemd[1]: Finished systemd-network-generator.service.
Feb  9 19:41:38.228298 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb  9 19:41:40.896585 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  9 19:41:40.896832 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  9 19:41:41.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.283875 systemd[1]: Finished systemd-remount-fs.service.
Feb  9 19:41:40.896922 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  9 19:41:40.897057 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb  9 19:41:40.897101 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb  9 19:41:40.897158 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2024-02-09T19:41:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb  9 19:41:41.285060 systemd[1]: Reached target network-pre.target.
Feb  9 19:41:41.286735 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  9 19:41:41.288427 systemd[1]: Mounting sys-kernel-config.mount...
Feb  9 19:41:41.289060 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  9 19:41:41.290197 systemd[1]: Starting systemd-hwdb-update.service...
Feb  9 19:41:41.291870 systemd[1]: Starting systemd-journal-flush.service...
Feb  9 19:41:41.302389 systemd-journald[982]: Time spent on flushing to /var/log/journal/02b436c199014b90b8c02ea92f816991 is 26.664ms for 1191 entries.
Feb  9 19:41:41.302389 systemd-journald[982]: System Journal (/var/log/journal/02b436c199014b90b8c02ea92f816991) is 8.0M, max 195.6M, 187.6M free.
Feb  9 19:41:41.343538 systemd-journald[982]: Received client request to flush runtime journal.
Feb  9 19:41:41.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.292571 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  9 19:41:41.293335 systemd[1]: Starting systemd-random-seed.service...
Feb  9 19:41:41.294023 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  9 19:41:41.294817 systemd[1]: Starting systemd-sysctl.service...
Feb  9 19:41:41.296255 systemd[1]: Starting systemd-sysusers.service...
Feb  9 19:41:41.299769 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  9 19:41:41.345042 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb  9 19:41:41.300590 systemd[1]: Mounted sys-kernel-config.mount.
Feb  9 19:41:41.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.307704 systemd[1]: Finished systemd-sysctl.service.
Feb  9 19:41:41.310316 systemd[1]: Finished systemd-random-seed.service.
Feb  9 19:41:41.311431 systemd[1]: Reached target first-boot-complete.target.
Feb  9 19:41:41.314663 systemd[1]: Finished systemd-sysusers.service.
Feb  9 19:41:41.316551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 19:41:41.319067 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 19:41:41.320695 systemd[1]: Starting systemd-udev-settle.service...
Feb  9 19:41:41.333866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 19:41:41.344356 systemd[1]: Finished systemd-journal-flush.service.
Feb  9 19:41:41.975197 systemd[1]: Finished systemd-hwdb-update.service.
Feb  9 19:41:41.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.976409 kernel: kauditd_printk_skb: 101 callbacks suppressed
Feb  9 19:41:41.976466 kernel: audit: type=1130 audit(1707507701.975:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:41.978000 audit: BPF prog-id=21 op=LOAD
Feb  9 19:41:41.979290 kernel: audit: type=1334 audit(1707507701.978:137): prog-id=21 op=LOAD
Feb  9 19:41:41.979318 kernel: audit: type=1334 audit(1707507701.978:138): prog-id=22 op=LOAD
Feb  9 19:41:41.978000 audit: BPF prog-id=22 op=LOAD
Feb  9 19:41:41.979000 audit: BPF prog-id=7 op=UNLOAD
Feb  9 19:41:41.979953 systemd[1]: Starting systemd-udevd.service...
Feb  9 19:41:41.980638 kernel: audit: type=1334 audit(1707507701.979:139): prog-id=7 op=UNLOAD
Feb  9 19:41:41.980687 kernel: audit: type=1334 audit(1707507701.979:140): prog-id=8 op=UNLOAD
Feb  9 19:41:41.979000 audit: BPF prog-id=8 op=UNLOAD
Feb  9 19:41:41.995355 systemd-udevd[1013]: Using default interface naming scheme 'v252'.
Feb  9 19:41:42.007653 systemd[1]: Started systemd-udevd.service.
Feb  9 19:41:42.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.011296 kernel: audit: type=1130 audit(1707507702.008:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.011332 kernel: audit: type=1334 audit(1707507702.011:142): prog-id=23 op=LOAD
Feb  9 19:41:42.011000 audit: BPF prog-id=23 op=LOAD
Feb  9 19:41:42.011857 systemd[1]: Starting systemd-networkd.service...
Feb  9 19:41:42.016035 systemd[1]: Starting systemd-userdbd.service...
Feb  9 19:41:42.015000 audit: BPF prog-id=24 op=LOAD
Feb  9 19:41:42.015000 audit: BPF prog-id=25 op=LOAD
Feb  9 19:41:42.015000 audit: BPF prog-id=26 op=LOAD
Feb  9 19:41:42.018739 kernel: audit: type=1334 audit(1707507702.015:143): prog-id=24 op=LOAD
Feb  9 19:41:42.018779 kernel: audit: type=1334 audit(1707507702.015:144): prog-id=25 op=LOAD
Feb  9 19:41:42.018795 kernel: audit: type=1334 audit(1707507702.015:145): prog-id=26 op=LOAD
Feb  9 19:41:42.045308 systemd[1]: Started systemd-userdbd.service.
Feb  9 19:41:42.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.056089 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb  9 19:41:42.064160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 19:41:42.081319 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb  9 19:41:42.083403 systemd-networkd[1023]: lo: Link UP
Feb  9 19:41:42.083415 systemd-networkd[1023]: lo: Gained carrier
Feb  9 19:41:42.083787 systemd-networkd[1023]: Enumeration completed
Feb  9 19:41:42.083879 systemd[1]: Started systemd-networkd.service.
Feb  9 19:41:42.083880 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 19:41:42.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.084746 systemd-networkd[1023]: eth0: Link UP
Feb  9 19:41:42.084750 systemd-networkd[1023]: eth0: Gained carrier
Feb  9 19:41:42.079000 audit[1024]: AVC avc:  denied  { confidentiality } for  pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  9 19:41:42.079000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5590ea32dc40 a1=32194 a2=7f7160c94bc5 a3=5 items=108 ppid=1013 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 19:41:42.088479 kernel: ACPI: button: Power Button [PWRF]
Feb  9 19:41:42.079000 audit: CWD cwd="/"
Feb  9 19:41:42.079000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=1 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=2 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=3 name=(null) inode=12799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=4 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=5 name=(null) inode=12800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=6 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=7 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=8 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=9 name=(null) inode=12802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=10 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=11 name=(null) inode=12803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=12 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=13 name=(null) inode=12804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=14 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=15 name=(null) inode=12805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=16 name=(null) inode=12801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=17 name=(null) inode=12806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=18 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=19 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=20 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=21 name=(null) inode=12808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=22 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=23 name=(null) inode=12809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=24 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=25 name=(null) inode=12810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=26 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=27 name=(null) inode=12811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=28 name=(null) inode=12807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=29 name=(null) inode=12812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=30 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=31 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=32 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=33 name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=34 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=35 name=(null) inode=12815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=36 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=37 name=(null) inode=12816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=38 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=39 name=(null) inode=12817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=40 name=(null) inode=12813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=41 name=(null) inode=12818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=42 name=(null) inode=12798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=43 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=44 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=45 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=46 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=47 name=(null) inode=12821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=48 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=49 name=(null) inode=12822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=50 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=51 name=(null) inode=12823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=52 name=(null) inode=12819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=53 name=(null) inode=12824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=55 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=56 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=57 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=58 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=59 name=(null) inode=12827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=60 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=61 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=62 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=63 name=(null) inode=12829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=64 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=65 name=(null) inode=12830 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=66 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=67 name=(null) inode=12831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=68 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=69 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=70 name=(null) inode=12828 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=71 name=(null) inode=12833 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=72 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=73 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=74 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=75 name=(null) inode=12835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=76 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=77 name=(null) inode=12836 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=78 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=79 name=(null) inode=12837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=80 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=81 name=(null) inode=12838 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=82 name=(null) inode=12834 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=83 name=(null) inode=12839 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=84 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=85 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=86 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=87 name=(null) inode=12841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=88 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=89 name=(null) inode=12842 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=90 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=91 name=(null) inode=12843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=92 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=93 name=(null) inode=12844 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=94 name=(null) inode=12840 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=95 name=(null) inode=12845 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=96 name=(null) inode=12825 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=97 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=98 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.091302 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Feb  9 19:41:42.079000 audit: PATH item=99 name=(null) inode=12847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=100 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=101 name=(null) inode=12848 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=102 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=103 name=(null) inode=12849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=104 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=105 name=(null) inode=12850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=106 name=(null) inode=12846 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PATH item=107 name=(null) inode=12851 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 19:41:42.079000 audit: PROCTITLE proctitle="(udev-worker)"
Feb  9 19:41:42.097374 systemd-networkd[1023]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 19:41:42.104303 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb  9 19:41:42.114315 kernel: mousedev: PS/2 mouse device common for all mice
Feb  9 19:41:42.164426 kernel: kvm: Nested Virtualization enabled
Feb  9 19:41:42.164517 kernel: SVM: kvm: Nested Paging enabled
Feb  9 19:41:42.165378 kernel: SVM: Virtual VMLOAD VMSAVE supported
Feb  9 19:41:42.165408 kernel: SVM: Virtual GIF supported
Feb  9 19:41:42.178303 kernel: EDAC MC: Ver: 3.0.0
Feb  9 19:41:42.198597 systemd[1]: Finished systemd-udev-settle.service.
Feb  9 19:41:42.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.200425 systemd[1]: Starting lvm2-activation-early.service...
Feb  9 19:41:42.209059 lvm[1050]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 19:41:42.237139 systemd[1]: Finished lvm2-activation-early.service.
Feb  9 19:41:42.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.237912 systemd[1]: Reached target cryptsetup.target.
Feb  9 19:41:42.239390 systemd[1]: Starting lvm2-activation.service...
Feb  9 19:41:42.242031 lvm[1051]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 19:41:42.267311 systemd[1]: Finished lvm2-activation.service.
Feb  9 19:41:42.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.268125 systemd[1]: Reached target local-fs-pre.target.
Feb  9 19:41:42.268732 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  9 19:41:42.268761 systemd[1]: Reached target local-fs.target.
Feb  9 19:41:42.269418 systemd[1]: Reached target machines.target.
Feb  9 19:41:42.271293 systemd[1]: Starting ldconfig.service...
Feb  9 19:41:42.272072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  9 19:41:42.272128 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 19:41:42.273130 systemd[1]: Starting systemd-boot-update.service...
Feb  9 19:41:42.274502 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  9 19:41:42.276503 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  9 19:41:42.277421 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  9 19:41:42.277467 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  9 19:41:42.278883 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  9 19:41:42.280042 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1053 (bootctl)
Feb  9 19:41:42.282090 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  9 19:41:42.289241 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  9 19:41:42.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.291616 systemd-tmpfiles[1056]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  9 19:41:42.292918 systemd-tmpfiles[1056]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  9 19:41:42.294430 systemd-tmpfiles[1056]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb  9 19:41:42.325300 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31)
Feb  9 19:41:42.325300 systemd-fsck[1061]: /dev/vda1: 790 files, 115362/258078 clusters
Feb  9 19:41:42.326561 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  9 19:41:42.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.328482 systemd[1]: Mounting boot.mount...
Feb  9 19:41:42.543777 systemd[1]: Mounted boot.mount.
Feb  9 19:41:42.554828 systemd[1]: Finished systemd-boot-update.service.
Feb  9 19:41:42.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.597910 ldconfig[1052]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  9 19:41:42.598654 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  9 19:41:42.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.604256 systemd[1]: Starting audit-rules.service...
Feb  9 19:41:42.605694 systemd[1]: Starting clean-ca-certificates.service...
Feb  9 19:41:42.608000 audit: BPF prog-id=27 op=LOAD
Feb  9 19:41:42.607291 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  9 19:41:42.609565 systemd[1]: Starting systemd-resolved.service...
Feb  9 19:41:42.610000 audit: BPF prog-id=28 op=LOAD
Feb  9 19:41:42.611246 systemd[1]: Starting systemd-timesyncd.service...
Feb  9 19:41:42.612549 systemd[1]: Starting systemd-update-utmp.service...
Feb  9 19:41:42.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.613971 systemd[1]: Finished clean-ca-certificates.service.
Feb  9 19:41:42.616378 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  9 19:41:42.621000 audit[1071]: SYSTEM_BOOT pid=1071 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.622687 systemd[1]: Finished systemd-update-utmp.service.
Feb  9 19:41:42.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.669642 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  9 19:41:42.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.672619 systemd[1]: Started systemd-timesyncd.service.
Feb  9 19:41:42.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 19:41:42.673359 systemd[1]: Reached target time-set.target.
Feb  9 19:41:42.673388 systemd-resolved[1068]: Positive Trust Anchors:
Feb  9 19:41:42.230789 systemd-journald[982]: Time jumped backwards, rotating.
Feb  9 19:41:42.065000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  9 19:41:42.065000 audit[1084]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce20e25e0 a2=420 a3=0 items=0 ppid=1064 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 19:41:42.065000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  9 19:41:42.231053 augenrules[1084]: No rules
Feb  9 19:41:42.673400 systemd-resolved[1068]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 19:41:42.673432 systemd-resolved[1068]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 19:41:42.063994 systemd-timesyncd[1069]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb  9 19:41:42.064027 systemd-timesyncd[1069]: Initial clock synchronization to Fri 2024-02-09 19:41:42.063941 UTC.
Feb  9 19:41:42.068039 systemd[1]: Finished audit-rules.service.
Feb  9 19:41:42.112099 systemd-resolved[1068]: Defaulting to hostname 'linux'.
Feb  9 19:41:42.113429 systemd[1]: Started systemd-resolved.service.
Feb  9 19:41:42.114044 systemd[1]: Reached target network.target.
Feb  9 19:41:42.114554 systemd[1]: Reached target nss-lookup.target.
Feb  9 19:41:42.211723 systemd[1]: Finished ldconfig.service.
Feb  9 19:41:42.213878 systemd[1]: Starting systemd-update-done.service...
Feb  9 19:41:42.221148 systemd[1]: Finished systemd-update-done.service.
Feb  9 19:41:42.221967 systemd[1]: Reached target sysinit.target.
Feb  9 19:41:42.222707 systemd[1]: Started motdgen.path.
Feb  9 19:41:42.223354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  9 19:41:42.224353 systemd[1]: Started logrotate.timer.
Feb  9 19:41:42.225042 systemd[1]: Started mdadm.timer.
Feb  9 19:41:42.225739 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  9 19:41:42.226404 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  9 19:41:42.226427 systemd[1]: Reached target paths.target.
Feb  9 19:41:42.227096 systemd[1]: Reached target timers.target.
Feb  9 19:41:42.228035 systemd[1]: Listening on dbus.socket.
Feb  9 19:41:42.229514 systemd[1]: Starting docker.socket...
Feb  9 19:41:42.232578 systemd[1]: Listening on sshd.socket.
Feb  9 19:41:42.233255 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 19:41:42.234368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  9 19:41:42.234862 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  9 19:41:42.235695 systemd[1]: Listening on docker.socket.
Feb  9 19:41:42.236400 systemd[1]: Reached target sockets.target.
Feb  9 19:41:42.237072 systemd[1]: Reached target basic.target.
Feb  9 19:41:42.237741 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 19:41:42.237763 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 19:41:42.238649 systemd[1]: Starting containerd.service...
Feb  9 19:41:42.240467 systemd[1]: Starting dbus.service...
Feb  9 19:41:42.241864 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  9 19:41:42.243727 systemd[1]: Starting extend-filesystems.service...
Feb  9 19:41:42.244529 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  9 19:41:42.245853 systemd[1]: Starting motdgen.service...
Feb  9 19:41:42.248052 jq[1096]: false
Feb  9 19:41:42.250558 systemd[1]: Starting prepare-cni-plugins.service...
Feb  9 19:41:42.252302 systemd[1]: Starting prepare-critools.service...
Feb  9 19:41:42.253942 systemd[1]: Starting prepare-helm.service...
Feb  9 19:41:42.254897 dbus-daemon[1095]: [system] SELinux support is enabled
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found sr0
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda1
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda2
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda3
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found usr
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda4
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda6
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda7
Feb  9 19:41:42.255605 extend-filesystems[1097]: Found vda9
Feb  9 19:41:42.255605 extend-filesystems[1097]: Checking size of /dev/vda9
Feb  9 19:41:42.287175 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb  9 19:41:42.255580 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  9 19:41:42.287293 extend-filesystems[1097]: Resized partition /dev/vda9
Feb  9 19:41:42.257531 systemd[1]: Starting sshd-keygen.service...
Feb  9 19:41:42.293220 extend-filesystems[1115]: resize2fs 1.46.5 (30-Dec-2021)
Feb  9 19:41:42.260645 systemd[1]: Starting systemd-logind.service...
Feb  9 19:41:42.261308 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 19:41:42.261356 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  9 19:41:42.262058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb  9 19:41:42.262525 systemd[1]: Starting update-engine.service...
Feb  9 19:41:42.265249 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  9 19:41:42.266529 systemd[1]: Started dbus.service.
Feb  9 19:41:42.283839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  9 19:41:42.283995 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  9 19:41:42.286297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  9 19:41:42.286704 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  9 19:41:42.289832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  9 19:41:42.289856 systemd[1]: Reached target system-config.target.
Feb  9 19:41:42.290771 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  9 19:41:42.290788 systemd[1]: Reached target user-config.target.
Feb  9 19:41:42.297421 systemd[1]: motdgen.service: Deactivated successfully.
Feb  9 19:41:42.297603 systemd[1]: Finished motdgen.service.
Feb  9 19:41:42.297994 jq[1117]: true
Feb  9 19:41:42.309249 tar[1122]: crictl
Feb  9 19:41:42.309863 jq[1129]: true
Feb  9 19:41:42.310000 tar[1121]: ./
Feb  9 19:41:42.310000 tar[1121]: ./loopback
Feb  9 19:41:42.389573 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb  9 19:41:42.425067 update_engine[1114]: I0209 19:41:42.407012  1114 main.cc:92] Flatcar Update Engine starting
Feb  9 19:41:42.425067 update_engine[1114]: I0209 19:41:42.419003  1114 update_check_scheduler.cc:74] Next update check in 4m33s
Feb  9 19:41:42.418076 systemd[1]: Started update-engine.service.
Feb  9 19:41:42.424643 systemd[1]: Started locksmithd.service.
Feb  9 19:41:42.427278 tar[1123]: linux-amd64/helm
Feb  9 19:41:42.428241 extend-filesystems[1115]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb  9 19:41:42.428241 extend-filesystems[1115]: old_desc_blocks = 1, new_desc_blocks = 1
Feb  9 19:41:42.428241 extend-filesystems[1115]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb  9 19:41:42.428064 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  9 19:41:42.432779 extend-filesystems[1097]: Resized filesystem in /dev/vda9
Feb  9 19:41:42.428189 systemd[1]: Finished extend-filesystems.service.
Feb  9 19:41:42.435601 systemd-logind[1109]: Watching system buttons on /dev/input/event1 (Power Button)
Feb  9 19:41:42.435617 systemd-logind[1109]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb  9 19:41:42.436216 systemd-logind[1109]: New seat seat0.
Feb  9 19:41:42.442311 systemd[1]: Started systemd-logind.service.
Feb  9 19:41:42.458782 env[1127]: time="2024-02-09T19:41:42.458649434Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  9 19:41:42.508318 tar[1121]: ./bandwidth
Feb  9 19:41:42.548581 env[1127]: time="2024-02-09T19:41:42.548523047Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  9 19:41:42.549131 env[1127]: time="2024-02-09T19:41:42.549104226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.559306 env[1127]: time="2024-02-09T19:41:42.559223891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  9 19:41:42.559569 env[1127]: time="2024-02-09T19:41:42.559520216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.560245 env[1127]: time="2024-02-09T19:41:42.560195152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 19:41:42.560443 env[1127]: time="2024-02-09T19:41:42.560398984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.560615 env[1127]: time="2024-02-09T19:41:42.560572139Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  9 19:41:42.560753 env[1127]: time="2024-02-09T19:41:42.560717692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.561031 env[1127]: time="2024-02-09T19:41:42.560989411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.561526 env[1127]: time="2024-02-09T19:41:42.561494418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  9 19:41:42.561896 env[1127]: time="2024-02-09T19:41:42.561843362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 19:41:42.562043 env[1127]: time="2024-02-09T19:41:42.562011538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  9 19:41:42.562283 env[1127]: time="2024-02-09T19:41:42.562236690Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  9 19:41:42.562480 env[1127]: time="2024-02-09T19:41:42.562434390Z" level=info msg="metadata content store policy set" policy=shared
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575593283Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575642516Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575685907Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575759485Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575872347Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575910288Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575929203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575943320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575958428Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575976712Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.575991560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.576004244Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.576095545Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  9 19:41:42.577478 env[1127]: time="2024-02-09T19:41:42.576210951Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576566568Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576609859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576621932Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576686543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576702573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576721689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576801519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576820134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576840963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576855240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576874115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.576896487Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.577070563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.577096312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578071 env[1127]: time="2024-02-09T19:41:42.577114275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578664 env[1127]: time="2024-02-09T19:41:42.577130335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  9 19:41:42.578664 env[1127]: time="2024-02-09T19:41:42.577145173Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  9 19:41:42.578664 env[1127]: time="2024-02-09T19:41:42.577160251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  9 19:41:42.578664 env[1127]: time="2024-02-09T19:41:42.577196710Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  9 19:41:42.578664 env[1127]: time="2024-02-09T19:41:42.577246233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  9 19:41:42.578995 env[1127]: time="2024-02-09T19:41:42.578927255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  9 19:41:42.579831 env[1127]: time="2024-02-09T19:41:42.579168707Z" level=info msg="Connect containerd service"
Feb  9 19:41:42.579831 env[1127]: time="2024-02-09T19:41:42.579210596Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  9 19:41:42.580750 env[1127]: time="2024-02-09T19:41:42.580722691Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 19:41:42.580950 env[1127]: time="2024-02-09T19:41:42.580913268Z" level=info msg="Start subscribing containerd event"
Feb  9 19:41:42.581098 env[1127]: time="2024-02-09T19:41:42.581069151Z" level=info msg="Start recovering state"
Feb  9 19:41:42.581268 env[1127]: time="2024-02-09T19:41:42.581235332Z" level=info msg="Start event monitor"
Feb  9 19:41:42.581417 env[1127]: time="2024-02-09T19:41:42.581393609Z" level=info msg="Start snapshots syncer"
Feb  9 19:41:42.581541 env[1127]: time="2024-02-09T19:41:42.581515107Z" level=info msg="Start cni network conf syncer for default"
Feb  9 19:41:42.581621 env[1127]: time="2024-02-09T19:41:42.581603362Z" level=info msg="Start streaming server"
Feb  9 19:41:42.581932 tar[1121]: ./ptp
Feb  9 19:41:42.582076 env[1127]: time="2024-02-09T19:41:42.581985569Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  9 19:41:42.582076 env[1127]: time="2024-02-09T19:41:42.582021316Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  9 19:41:42.582212 systemd[1]: Started containerd.service.
Feb  9 19:41:42.596832 env[1127]: time="2024-02-09T19:41:42.596776843Z" level=info msg="containerd successfully booted in 0.177247s"
Feb  9 19:41:42.611564 systemd-networkd[1023]: eth0: Gained IPv6LL
Feb  9 19:41:42.615275 bash[1155]: Updated "/home/core/.ssh/authorized_keys"
Feb  9 19:41:42.615939 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  9 19:41:42.626089 tar[1121]: ./vlan
Feb  9 19:41:42.679719 tar[1121]: ./host-device
Feb  9 19:41:42.738603 tar[1121]: ./tuning
Feb  9 19:41:42.787897 tar[1121]: ./vrf
Feb  9 19:41:42.821705 tar[1121]: ./sbr
Feb  9 19:41:42.855813 tar[1121]: ./tap
Feb  9 19:41:42.911100 tar[1121]: ./dhcp
Feb  9 19:41:43.017904 tar[1121]: ./static
Feb  9 19:41:43.029993 tar[1123]: linux-amd64/LICENSE
Feb  9 19:41:43.030156 tar[1123]: linux-amd64/README.md
Feb  9 19:41:43.034309 systemd[1]: Finished prepare-helm.service.
Feb  9 19:41:43.041348 locksmithd[1144]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  9 19:41:43.043565 tar[1121]: ./firewall
Feb  9 19:41:43.076717 systemd[1]: Finished prepare-critools.service.
Feb  9 19:41:43.086388 tar[1121]: ./macvlan
Feb  9 19:41:43.115179 tar[1121]: ./dummy
Feb  9 19:41:43.143643 tar[1121]: ./bridge
Feb  9 19:41:43.175501 tar[1121]: ./ipvlan
Feb  9 19:41:43.204043 tar[1121]: ./portmap
Feb  9 19:41:43.231187 tar[1121]: ./host-local
Feb  9 19:41:43.262126 systemd[1]: Finished prepare-cni-plugins.service.
Feb  9 19:41:43.594815 sshd_keygen[1120]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  9 19:41:43.611022 systemd[1]: Finished sshd-keygen.service.
Feb  9 19:41:43.612869 systemd[1]: Starting issuegen.service...
Feb  9 19:41:43.617080 systemd[1]: issuegen.service: Deactivated successfully.
Feb  9 19:41:43.617187 systemd[1]: Finished issuegen.service.
Feb  9 19:41:43.618816 systemd[1]: Starting systemd-user-sessions.service...
Feb  9 19:41:43.623169 systemd[1]: Finished systemd-user-sessions.service.
Feb  9 19:41:43.624737 systemd[1]: Started getty@tty1.service.
Feb  9 19:41:43.626145 systemd[1]: Started serial-getty@ttyS0.service.
Feb  9 19:41:43.626919 systemd[1]: Reached target getty.target.
Feb  9 19:41:43.627557 systemd[1]: Reached target multi-user.target.
Feb  9 19:41:43.628994 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  9 19:41:43.634944 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  9 19:41:43.635049 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  9 19:41:43.635823 systemd[1]: Startup finished in 531ms (kernel) + 36.461s (initrd) + 7.104s (userspace) = 44.097s.
Feb  9 19:41:45.461623 systemd[1]: Created slice system-sshd.slice.
Feb  9 19:41:45.462498 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:37984.service.
Feb  9 19:41:45.500375 sshd[1185]: Accepted publickey for core from 10.0.0.1 port 37984 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:41:45.501794 sshd[1185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.509338 systemd-logind[1109]: New session 1 of user core.
Feb  9 19:41:45.510289 systemd[1]: Created slice user-500.slice.
Feb  9 19:41:45.511195 systemd[1]: Starting user-runtime-dir@500.service...
Feb  9 19:41:45.518406 systemd[1]: Finished user-runtime-dir@500.service.
Feb  9 19:41:45.519549 systemd[1]: Starting user@500.service...
Feb  9 19:41:45.521547 (systemd)[1188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.581318 systemd[1188]: Queued start job for default target default.target.
Feb  9 19:41:45.581711 systemd[1188]: Reached target paths.target.
Feb  9 19:41:45.581734 systemd[1188]: Reached target sockets.target.
Feb  9 19:41:45.581749 systemd[1188]: Reached target timers.target.
Feb  9 19:41:45.581763 systemd[1188]: Reached target basic.target.
Feb  9 19:41:45.581803 systemd[1188]: Reached target default.target.
Feb  9 19:41:45.581839 systemd[1188]: Startup finished in 55ms.
Feb  9 19:41:45.581869 systemd[1]: Started user@500.service.
Feb  9 19:41:45.582688 systemd[1]: Started session-1.scope.
Feb  9 19:41:45.633265 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:37992.service.
Feb  9 19:41:45.670574 sshd[1197]: Accepted publickey for core from 10.0.0.1 port 37992 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:41:45.671488 sshd[1197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.674609 systemd-logind[1109]: New session 2 of user core.
Feb  9 19:41:45.675303 systemd[1]: Started session-2.scope.
Feb  9 19:41:45.727085 sshd[1197]: pam_unix(sshd:session): session closed for user core
Feb  9 19:41:45.729744 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:37992.service: Deactivated successfully.
Feb  9 19:41:45.730303 systemd[1]: session-2.scope: Deactivated successfully.
Feb  9 19:41:45.730823 systemd-logind[1109]: Session 2 logged out. Waiting for processes to exit.
Feb  9 19:41:45.732057 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:38004.service.
Feb  9 19:41:45.732638 systemd-logind[1109]: Removed session 2.
Feb  9 19:41:45.764754 sshd[1203]: Accepted publickey for core from 10.0.0.1 port 38004 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:41:45.765487 sshd[1203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.768343 systemd-logind[1109]: New session 3 of user core.
Feb  9 19:41:45.769107 systemd[1]: Started session-3.scope.
Feb  9 19:41:45.817682 sshd[1203]: pam_unix(sshd:session): session closed for user core
Feb  9 19:41:45.820195 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:38004.service: Deactivated successfully.
Feb  9 19:41:45.820664 systemd[1]: session-3.scope: Deactivated successfully.
Feb  9 19:41:45.821122 systemd-logind[1109]: Session 3 logged out. Waiting for processes to exit.
Feb  9 19:41:45.821849 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:38016.service.
Feb  9 19:41:45.822623 systemd-logind[1109]: Removed session 3.
Feb  9 19:41:45.854335 sshd[1210]: Accepted publickey for core from 10.0.0.1 port 38016 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:41:45.855199 sshd[1210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.858356 systemd-logind[1109]: New session 4 of user core.
Feb  9 19:41:45.859149 systemd[1]: Started session-4.scope.
Feb  9 19:41:45.912094 sshd[1210]: pam_unix(sshd:session): session closed for user core
Feb  9 19:41:45.914685 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:38016.service: Deactivated successfully.
Feb  9 19:41:45.915211 systemd[1]: session-4.scope: Deactivated successfully.
Feb  9 19:41:45.915702 systemd-logind[1109]: Session 4 logged out. Waiting for processes to exit.
Feb  9 19:41:45.916651 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:38028.service.
Feb  9 19:41:45.917343 systemd-logind[1109]: Removed session 4.
Feb  9 19:41:45.949355 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:41:45.950183 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:41:45.953112 systemd-logind[1109]: New session 5 of user core.
Feb  9 19:41:45.953856 systemd[1]: Started session-5.scope.
Feb  9 19:41:46.008409 sudo[1219]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  9 19:41:46.008597 sudo[1219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  9 19:41:46.524851 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  9 19:41:46.529256 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  9 19:41:46.529498 systemd[1]: Reached target network-online.target.
Feb  9 19:41:46.530392 systemd[1]: Starting docker.service...
Feb  9 19:41:46.568527 env[1237]: time="2024-02-09T19:41:46.568446853Z" level=info msg="Starting up"
Feb  9 19:41:46.569430 env[1237]: time="2024-02-09T19:41:46.569405741Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  9 19:41:46.569430 env[1237]: time="2024-02-09T19:41:46.569422903Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  9 19:41:46.569507 env[1237]: time="2024-02-09T19:41:46.569438632Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  9 19:41:46.569507 env[1237]: time="2024-02-09T19:41:46.569448110Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  9 19:41:46.571182 env[1237]: time="2024-02-09T19:41:46.571151644Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  9 19:41:46.571182 env[1237]: time="2024-02-09T19:41:46.571167805Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  9 19:41:46.571182 env[1237]: time="2024-02-09T19:41:46.571179717Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  9 19:41:46.571182 env[1237]: time="2024-02-09T19:41:46.571188373Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  9 19:41:46.575362 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport650148853-merged.mount: Deactivated successfully.
Feb  9 19:41:47.286140 env[1237]: time="2024-02-09T19:41:47.285888556Z" level=info msg="Loading containers: start."
Feb  9 19:41:47.373492 kernel: Initializing XFRM netlink socket
Feb  9 19:41:47.398480 env[1237]: time="2024-02-09T19:41:47.398436283Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb  9 19:41:47.445132 systemd-networkd[1023]: docker0: Link UP
Feb  9 19:41:47.455041 env[1237]: time="2024-02-09T19:41:47.455008205Z" level=info msg="Loading containers: done."
Feb  9 19:41:47.464042 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3767625037-merged.mount: Deactivated successfully.
Feb  9 19:41:47.497109 env[1237]: time="2024-02-09T19:41:47.497057618Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb  9 19:41:47.497250 env[1237]: time="2024-02-09T19:41:47.497229430Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb  9 19:41:47.497341 env[1237]: time="2024-02-09T19:41:47.497323707Z" level=info msg="Daemon has completed initialization"
Feb  9 19:41:47.639501 systemd[1]: Started docker.service.
Feb  9 19:41:47.643480 env[1237]: time="2024-02-09T19:41:47.643415980Z" level=info msg="API listen on /run/docker.sock"
Feb  9 19:41:47.657644 systemd[1]: Reloading.
Feb  9 19:41:47.716581 /usr/lib/systemd/system-generators/torcx-generator[1380]: time="2024-02-09T19:41:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 19:41:47.716610 /usr/lib/systemd/system-generators/torcx-generator[1380]: time="2024-02-09T19:41:47Z" level=info msg="torcx already run"
Feb  9 19:41:47.777442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 19:41:47.777482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 19:41:47.797576 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 19:41:47.862054 systemd[1]: Started kubelet.service.
Feb  9 19:41:47.911973 kubelet[1420]: E0209 19:41:47.911849    1420 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  9 19:41:47.914174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 19:41:47.914281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 19:41:48.416020 env[1127]: time="2024-02-09T19:41:48.415965921Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb  9 19:41:49.093995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425356663.mount: Deactivated successfully.
Feb  9 19:41:51.818612 env[1127]: time="2024-02-09T19:41:51.818538994Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:51.820160 env[1127]: time="2024-02-09T19:41:51.820104299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:51.821741 env[1127]: time="2024-02-09T19:41:51.821718115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:51.823142 env[1127]: time="2024-02-09T19:41:51.823113652Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:51.823748 env[1127]: time="2024-02-09T19:41:51.823713216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb  9 19:41:51.849842 env[1127]: time="2024-02-09T19:41:51.849813273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb  9 19:41:54.874237 env[1127]: time="2024-02-09T19:41:54.874154242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:54.895316 env[1127]: time="2024-02-09T19:41:54.895274573Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:54.904611 env[1127]: time="2024-02-09T19:41:54.904552048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:54.919287 env[1127]: time="2024-02-09T19:41:54.919234487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:54.919931 env[1127]: time="2024-02-09T19:41:54.919879988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb  9 19:41:54.929825 env[1127]: time="2024-02-09T19:41:54.929789658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb  9 19:41:56.405883 env[1127]: time="2024-02-09T19:41:56.405827829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:56.407542 env[1127]: time="2024-02-09T19:41:56.407512598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:56.409130 env[1127]: time="2024-02-09T19:41:56.409100987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:56.410677 env[1127]: time="2024-02-09T19:41:56.410654530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:56.412320 env[1127]: time="2024-02-09T19:41:56.412290317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb  9 19:41:56.424900 env[1127]: time="2024-02-09T19:41:56.424874141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb  9 19:41:57.590926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916931721.mount: Deactivated successfully.
Feb  9 19:41:57.981350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb  9 19:41:57.981602 systemd[1]: Stopped kubelet.service.
Feb  9 19:41:57.982944 systemd[1]: Started kubelet.service.
Feb  9 19:41:58.059998 kubelet[1463]: E0209 19:41:58.059929    1463 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  9 19:41:58.063613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 19:41:58.063760 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 19:41:58.297249 env[1127]: time="2024-02-09T19:41:58.297118028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:58.298922 env[1127]: time="2024-02-09T19:41:58.298896893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:58.300398 env[1127]: time="2024-02-09T19:41:58.300365407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:58.301615 env[1127]: time="2024-02-09T19:41:58.301590484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:58.301960 env[1127]: time="2024-02-09T19:41:58.301926264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb  9 19:41:58.313744 env[1127]: time="2024-02-09T19:41:58.313692365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb  9 19:41:59.012094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879694936.mount: Deactivated successfully.
Feb  9 19:41:59.173808 env[1127]: time="2024-02-09T19:41:59.173727050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:59.176707 env[1127]: time="2024-02-09T19:41:59.176667653Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:59.178024 env[1127]: time="2024-02-09T19:41:59.178003218Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:59.179404 env[1127]: time="2024-02-09T19:41:59.179377915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:41:59.179893 env[1127]: time="2024-02-09T19:41:59.179857675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb  9 19:41:59.190710 env[1127]: time="2024-02-09T19:41:59.190679476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb  9 19:41:59.761394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972484520.mount: Deactivated successfully.
Feb  9 19:42:07.180616 env[1127]: time="2024-02-09T19:42:07.180556663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:07.182356 env[1127]: time="2024-02-09T19:42:07.182325339Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:07.184796 env[1127]: time="2024-02-09T19:42:07.184765875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:07.186660 env[1127]: time="2024-02-09T19:42:07.186622687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:07.188164 env[1127]: time="2024-02-09T19:42:07.188129833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\""
Feb  9 19:42:07.197472 env[1127]: time="2024-02-09T19:42:07.197415744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb  9 19:42:07.751115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970803625.mount: Deactivated successfully.
Feb  9 19:42:08.231302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb  9 19:42:08.231500 systemd[1]: Stopped kubelet.service.
Feb  9 19:42:08.232703 systemd[1]: Started kubelet.service.
Feb  9 19:42:08.275801 kubelet[1488]: E0209 19:42:08.275744    1488 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb  9 19:42:08.277691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 19:42:08.277805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 19:42:08.454307 env[1127]: time="2024-02-09T19:42:08.454241322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:08.455721 env[1127]: time="2024-02-09T19:42:08.455695949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:08.457230 env[1127]: time="2024-02-09T19:42:08.457190622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:08.458496 env[1127]: time="2024-02-09T19:42:08.458445154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:08.458867 env[1127]: time="2024-02-09T19:42:08.458836317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb  9 19:42:10.357406 systemd[1]: Stopped kubelet.service.
Feb  9 19:42:10.374159 systemd[1]: Reloading.
Feb  9 19:42:10.438345 /usr/lib/systemd/system-generators/torcx-generator[1594]: time="2024-02-09T19:42:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 19:42:10.438887 /usr/lib/systemd/system-generators/torcx-generator[1594]: time="2024-02-09T19:42:10Z" level=info msg="torcx already run"
Feb  9 19:42:10.508621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 19:42:10.508640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 19:42:10.532264 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 19:42:10.610938 systemd[1]: Started kubelet.service.
Feb  9 19:42:10.662873 kubelet[1635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 19:42:10.662873 kubelet[1635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb  9 19:42:10.662873 kubelet[1635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 19:42:10.663301 kubelet[1635]: I0209 19:42:10.662919    1635 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 19:42:11.106635 kubelet[1635]: I0209 19:42:11.106587    1635 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb  9 19:42:11.106635 kubelet[1635]: I0209 19:42:11.106624    1635 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 19:42:11.106857 kubelet[1635]: I0209 19:42:11.106845    1635 server.go:895] "Client rotation is on, will bootstrap in background"
Feb  9 19:42:11.110333 kubelet[1635]: I0209 19:42:11.110310    1635 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 19:42:11.111610 kubelet[1635]: E0209 19:42:11.111572    1635 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.115956 kubelet[1635]: I0209 19:42:11.115940    1635 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 19:42:11.116133 kubelet[1635]: I0209 19:42:11.116118    1635 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 19:42:11.116294 kubelet[1635]: I0209 19:42:11.116278    1635 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb  9 19:42:11.116383 kubelet[1635]: I0209 19:42:11.116302    1635 topology_manager.go:138] "Creating topology manager with none policy"
Feb  9 19:42:11.116383 kubelet[1635]: I0209 19:42:11.116309    1635 container_manager_linux.go:301] "Creating device plugin manager"
Feb  9 19:42:11.116427 kubelet[1635]: I0209 19:42:11.116393    1635 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 19:42:11.116504 kubelet[1635]: I0209 19:42:11.116496    1635 kubelet.go:393] "Attempting to sync node with API server"
Feb  9 19:42:11.116526 kubelet[1635]: I0209 19:42:11.116514    1635 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 19:42:11.116565 kubelet[1635]: I0209 19:42:11.116535    1635 kubelet.go:309] "Adding apiserver pod source"
Feb  9 19:42:11.116565 kubelet[1635]: I0209 19:42:11.116556    1635 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 19:42:11.117710 kubelet[1635]: I0209 19:42:11.117698    1635 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 19:42:11.118066 kubelet[1635]: W0209 19:42:11.118021    1635 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.118120 kubelet[1635]: E0209 19:42:11.118083    1635 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.118172 kubelet[1635]: W0209 19:42:11.117180    1635 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.118230 kubelet[1635]: E0209 19:42:11.118185    1635 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.118474 kubelet[1635]: W0209 19:42:11.118447    1635 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  9 19:42:11.119690 kubelet[1635]: I0209 19:42:11.119677    1635 server.go:1232] "Started kubelet"
Feb  9 19:42:11.120000 kubelet[1635]: E0209 19:42:11.119891    1635 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b2493c149ead6d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 42, 11, 119648109, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 42, 11, 119648109, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.35:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.35:6443: connect: connection refused'(may retry after sleeping)
Feb  9 19:42:11.120155 kubelet[1635]: I0209 19:42:11.120028    1635 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 19:42:11.120597 kubelet[1635]: I0209 19:42:11.120388    1635 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb  9 19:42:11.120808 kubelet[1635]: I0209 19:42:11.120790    1635 server.go:462] "Adding debug handlers to kubelet server"
Feb  9 19:42:11.120808 kubelet[1635]: I0209 19:42:11.120801    1635 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb  9 19:42:11.121236 kubelet[1635]: E0209 19:42:11.120899    1635 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 19:42:11.121236 kubelet[1635]: E0209 19:42:11.120937    1635 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 19:42:11.122903 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  9 19:42:11.123036 kubelet[1635]: I0209 19:42:11.123017    1635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 19:42:11.123528 kubelet[1635]: E0209 19:42:11.123504    1635 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb  9 19:42:11.123592 kubelet[1635]: I0209 19:42:11.123544    1635 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb  9 19:42:11.123705 kubelet[1635]: I0209 19:42:11.123683    1635 reconciler_new.go:29] "Reconciler: start to sync state"
Feb  9 19:42:11.123759 kubelet[1635]: I0209 19:42:11.123712    1635 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 19:42:11.124275 kubelet[1635]: W0209 19:42:11.124231    1635 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.125086 kubelet[1635]: E0209 19:42:11.124892    1635 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms"
Feb  9 19:42:11.127412 kubelet[1635]: E0209 19:42:11.125531    1635 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.138021 kubelet[1635]: I0209 19:42:11.137989    1635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb  9 19:42:11.138769 kubelet[1635]: I0209 19:42:11.138745    1635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb  9 19:42:11.138843 kubelet[1635]: I0209 19:42:11.138774    1635 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb  9 19:42:11.138843 kubelet[1635]: I0209 19:42:11.138791    1635 kubelet.go:2303] "Starting kubelet main sync loop"
Feb  9 19:42:11.138885 kubelet[1635]: E0209 19:42:11.138850    1635 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb  9 19:42:11.141680 kubelet[1635]: W0209 19:42:11.141651    1635 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.141740 kubelet[1635]: E0209 19:42:11.141684    1635 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb  9 19:42:11.142577 kubelet[1635]: I0209 19:42:11.142545    1635 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 19:42:11.142577 kubelet[1635]: I0209 19:42:11.142561    1635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 19:42:11.142577 kubelet[1635]: I0209 19:42:11.142583    1635 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 19:42:11.145298 kubelet[1635]: I0209 19:42:11.145270    1635 policy_none.go:49] "None policy: Start"
Feb  9 19:42:11.145930 kubelet[1635]: I0209 19:42:11.145908    1635 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 19:42:11.146010 kubelet[1635]: I0209 19:42:11.145937    1635 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 19:42:11.151903 systemd[1]: Created slice kubepods.slice.
Feb  9 19:42:11.156208 systemd[1]: Created slice kubepods-burstable.slice.
Feb  9 19:42:11.158679 systemd[1]: Created slice kubepods-besteffort.slice.
Feb  9 19:42:11.166039 kubelet[1635]: I0209 19:42:11.166016    1635 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 19:42:11.166433 kubelet[1635]: I0209 19:42:11.166405    1635 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 19:42:11.166945 kubelet[1635]: E0209 19:42:11.166932    1635 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb  9 19:42:11.225880 kubelet[1635]: I0209 19:42:11.225826    1635 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb  9 19:42:11.226252 kubelet[1635]: E0209 19:42:11.226213    1635 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Feb  9 19:42:11.239482 kubelet[1635]: I0209 19:42:11.239408    1635 topology_manager.go:215] "Topology Admit Handler" podUID="5e3a05bd084cb559afcd1e47f3bcc2c5" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb  9 19:42:11.240716 kubelet[1635]: I0209 19:42:11.240681    1635 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb  9 19:42:11.241448 kubelet[1635]: I0209 19:42:11.241432    1635 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb  9 19:42:11.246600 systemd[1]: Created slice kubepods-burstable-pod5e3a05bd084cb559afcd1e47f3bcc2c5.slice.
Feb  9 19:42:11.266354 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice.
Feb  9 19:42:11.278221 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice.
Feb  9 19:42:11.325299 kubelet[1635]: E0209 19:42:11.325259    1635 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms"
Feb  9 19:42:11.425882 kubelet[1635]: I0209 19:42:11.425747    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost"
Feb  9 19:42:11.425882 kubelet[1635]: I0209 19:42:11.425806    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:11.425882 kubelet[1635]: I0209 19:42:11.425828    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:11.426091 kubelet[1635]: I0209 19:42:11.425912    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:11.426091 kubelet[1635]: I0209 19:42:11.425970    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:11.426091 kubelet[1635]: I0209 19:42:11.426000    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:11.426091 kubelet[1635]: I0209 19:42:11.426020    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:11.426091 kubelet[1635]: I0209 19:42:11.426041    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:11.426278 kubelet[1635]: I0209 19:42:11.426069    1635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:11.427889 kubelet[1635]: I0209 19:42:11.427858    1635 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb  9 19:42:11.428270 kubelet[1635]: E0209 19:42:11.428245    1635 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Feb  9 19:42:11.564969 kubelet[1635]: E0209 19:42:11.564929    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:11.565710 env[1127]: time="2024-02-09T19:42:11.565655332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e3a05bd084cb559afcd1e47f3bcc2c5,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:11.575853 kubelet[1635]: E0209 19:42:11.575817    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:11.576410 env[1127]: time="2024-02-09T19:42:11.576365784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:11.581626 kubelet[1635]: E0209 19:42:11.581582    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:11.582091 env[1127]: time="2024-02-09T19:42:11.582052076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:11.726266 kubelet[1635]: E0209 19:42:11.726166    1635 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms"
Feb  9 19:42:11.829477 kubelet[1635]: I0209 19:42:11.829423    1635 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb  9 19:42:11.829793 kubelet[1635]: E0209 19:42:11.829759    1635 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Feb  9 19:42:12.105244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1107679126.mount: Deactivated successfully.
Feb  9 19:42:12.111312 env[1127]: time="2024-02-09T19:42:12.111257316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.112158 env[1127]: time="2024-02-09T19:42:12.112123259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.114598 env[1127]: time="2024-02-09T19:42:12.114545531Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.115422 env[1127]: time="2024-02-09T19:42:12.115394473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.117302 env[1127]: time="2024-02-09T19:42:12.117281080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.117853 env[1127]: time="2024-02-09T19:42:12.117837794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.118938 env[1127]: time="2024-02-09T19:42:12.118911417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.120596 env[1127]: time="2024-02-09T19:42:12.120565098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.121383 env[1127]: time="2024-02-09T19:42:12.121348136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.122487 env[1127]: time="2024-02-09T19:42:12.122449591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.126634 env[1127]: time="2024-02-09T19:42:12.126605283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.130333 env[1127]: time="2024-02-09T19:42:12.130291455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:12.142775 env[1127]: time="2024-02-09T19:42:12.142565008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:12.142775 env[1127]: time="2024-02-09T19:42:12.142597468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:12.142775 env[1127]: time="2024-02-09T19:42:12.142606766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:12.143114 env[1127]: time="2024-02-09T19:42:12.143075785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd1b394830992a3bc640eaed65a376ba7a0b956604e1c3d719ed68388c79ea15 pid=1677 runtime=io.containerd.runc.v2
Feb  9 19:42:12.146163 env[1127]: time="2024-02-09T19:42:12.145370728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:12.146163 env[1127]: time="2024-02-09T19:42:12.145421734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:12.146163 env[1127]: time="2024-02-09T19:42:12.145439036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:12.146163 env[1127]: time="2024-02-09T19:42:12.145637629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f651b5dbba94e28fd4fc13487193efa6dcc0fb8f177bad6febfdbb700a164c0 pid=1689 runtime=io.containerd.runc.v2
Feb  9 19:42:12.158593 systemd[1]: Started cri-containerd-fd1b394830992a3bc640eaed65a376ba7a0b956604e1c3d719ed68388c79ea15.scope.
Feb  9 19:42:12.162888 systemd[1]: Started cri-containerd-9f651b5dbba94e28fd4fc13487193efa6dcc0fb8f177bad6febfdbb700a164c0.scope.
Feb  9 19:42:12.165872 env[1127]: time="2024-02-09T19:42:12.165665541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:12.165872 env[1127]: time="2024-02-09T19:42:12.165738748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:12.165872 env[1127]: time="2024-02-09T19:42:12.165760239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:12.168439 env[1127]: time="2024-02-09T19:42:12.167255532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be9819452f2f8290f89bcedccf175094ac5faf945d69a44e9bd92b51bfa4a97a pid=1727 runtime=io.containerd.runc.v2
Feb  9 19:42:12.182588 systemd[1]: Started cri-containerd-be9819452f2f8290f89bcedccf175094ac5faf945d69a44e9bd92b51bfa4a97a.scope.
Feb  9 19:42:12.204711 env[1127]: time="2024-02-09T19:42:12.204657511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f651b5dbba94e28fd4fc13487193efa6dcc0fb8f177bad6febfdbb700a164c0\""
Feb  9 19:42:12.207203 kubelet[1635]: E0209 19:42:12.207153    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:12.210000 env[1127]: time="2024-02-09T19:42:12.209962738Z" level=info msg="CreateContainer within sandbox \"9f651b5dbba94e28fd4fc13487193efa6dcc0fb8f177bad6febfdbb700a164c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb  9 19:42:12.210283 env[1127]: time="2024-02-09T19:42:12.210251490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd1b394830992a3bc640eaed65a376ba7a0b956604e1c3d719ed68388c79ea15\""
Feb  9 19:42:12.211551 kubelet[1635]: E0209 19:42:12.211528    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:12.213311 env[1127]: time="2024-02-09T19:42:12.213268657Z" level=info msg="CreateContainer within sandbox \"fd1b394830992a3bc640eaed65a376ba7a0b956604e1c3d719ed68388c79ea15\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb  9 19:42:12.227740 env[1127]: time="2024-02-09T19:42:12.227685539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5e3a05bd084cb559afcd1e47f3bcc2c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"be9819452f2f8290f89bcedccf175094ac5faf945d69a44e9bd92b51bfa4a97a\""
Feb  9 19:42:12.228440 kubelet[1635]: E0209 19:42:12.228414    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:12.229729 env[1127]: time="2024-02-09T19:42:12.229680269Z" level=info msg="CreateContainer within sandbox \"9f651b5dbba94e28fd4fc13487193efa6dcc0fb8f177bad6febfdbb700a164c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c9ce35ca0dd8e75f8055e8e1f28e8fa17e4f05c07c6a2374c6ad2318082ce521\""
Feb  9 19:42:12.230200 env[1127]: time="2024-02-09T19:42:12.230156381Z" level=info msg="StartContainer for \"c9ce35ca0dd8e75f8055e8e1f28e8fa17e4f05c07c6a2374c6ad2318082ce521\""
Feb  9 19:42:12.230603 env[1127]: time="2024-02-09T19:42:12.230579304Z" level=info msg="CreateContainer within sandbox \"be9819452f2f8290f89bcedccf175094ac5faf945d69a44e9bd92b51bfa4a97a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb  9 19:42:12.233701 env[1127]: time="2024-02-09T19:42:12.233659550Z" level=info msg="CreateContainer within sandbox \"fd1b394830992a3bc640eaed65a376ba7a0b956604e1c3d719ed68388c79ea15\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b9b7b6f3115344288bcb14246c3a6fffde73195f70e7f8be5102cad5c24f3a8\""
Feb  9 19:42:12.234082 env[1127]: time="2024-02-09T19:42:12.234051835Z" level=info msg="StartContainer for \"4b9b7b6f3115344288bcb14246c3a6fffde73195f70e7f8be5102cad5c24f3a8\""
Feb  9 19:42:12.248930 systemd[1]: Started cri-containerd-c9ce35ca0dd8e75f8055e8e1f28e8fa17e4f05c07c6a2374c6ad2318082ce521.scope.
Feb  9 19:42:12.254138 systemd[1]: Started cri-containerd-4b9b7b6f3115344288bcb14246c3a6fffde73195f70e7f8be5102cad5c24f3a8.scope.
Feb  9 19:42:12.255747 env[1127]: time="2024-02-09T19:42:12.255704404Z" level=info msg="CreateContainer within sandbox \"be9819452f2f8290f89bcedccf175094ac5faf945d69a44e9bd92b51bfa4a97a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43334fcdf1606d9f556c6968cacba9c1dc8f13adf0e192161987d2d8cfcb1670\""
Feb  9 19:42:12.256341 env[1127]: time="2024-02-09T19:42:12.256317513Z" level=info msg="StartContainer for \"43334fcdf1606d9f556c6968cacba9c1dc8f13adf0e192161987d2d8cfcb1670\""
Feb  9 19:42:12.270761 systemd[1]: Started cri-containerd-43334fcdf1606d9f556c6968cacba9c1dc8f13adf0e192161987d2d8cfcb1670.scope.
Feb  9 19:42:12.302801 env[1127]: time="2024-02-09T19:42:12.302761466Z" level=info msg="StartContainer for \"c9ce35ca0dd8e75f8055e8e1f28e8fa17e4f05c07c6a2374c6ad2318082ce521\" returns successfully"
Feb  9 19:42:12.314030 env[1127]: time="2024-02-09T19:42:12.313967247Z" level=info msg="StartContainer for \"43334fcdf1606d9f556c6968cacba9c1dc8f13adf0e192161987d2d8cfcb1670\" returns successfully"
Feb  9 19:42:12.315403 env[1127]: time="2024-02-09T19:42:12.315363505Z" level=info msg="StartContainer for \"4b9b7b6f3115344288bcb14246c3a6fffde73195f70e7f8be5102cad5c24f3a8\" returns successfully"
Feb  9 19:42:12.631570 kubelet[1635]: I0209 19:42:12.631084    1635 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb  9 19:42:13.146851 kubelet[1635]: E0209 19:42:13.146813    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:13.148853 kubelet[1635]: E0209 19:42:13.148832    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:13.150403 kubelet[1635]: E0209 19:42:13.150374    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:13.450736 kubelet[1635]: E0209 19:42:13.450646    1635 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb  9 19:42:13.515677 kubelet[1635]: I0209 19:42:13.515651    1635 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb  9 19:42:14.118924 kubelet[1635]: I0209 19:42:14.118885    1635 apiserver.go:52] "Watching apiserver"
Feb  9 19:42:14.124249 kubelet[1635]: I0209 19:42:14.124225    1635 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 19:42:14.154833 kubelet[1635]: E0209 19:42:14.154794    1635 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:14.155197 kubelet[1635]: E0209 19:42:14.155183    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:16.005814 kubelet[1635]: E0209 19:42:16.005777    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:16.153778 kubelet[1635]: E0209 19:42:16.153751    1635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:16.326970 systemd[1]: Reloading.
Feb  9 19:42:16.385955 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-09T19:42:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 19:42:16.385990 /usr/lib/systemd/system-generators/torcx-generator[1930]: time="2024-02-09T19:42:16Z" level=info msg="torcx already run"
Feb  9 19:42:16.447975 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 19:42:16.447992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 19:42:16.467641 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 19:42:16.555869 kubelet[1635]: I0209 19:42:16.555829    1635 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 19:42:16.555920 systemd[1]: Stopping kubelet.service...
Feb  9 19:42:16.575023 systemd[1]: kubelet.service: Deactivated successfully.
Feb  9 19:42:16.575269 systemd[1]: Stopped kubelet.service.
Feb  9 19:42:16.576926 systemd[1]: Started kubelet.service.
Feb  9 19:42:16.617283 kubelet[1972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 19:42:16.617283 kubelet[1972]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb  9 19:42:16.617283 kubelet[1972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 19:42:16.617723 kubelet[1972]: I0209 19:42:16.617338    1972 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 19:42:16.621857 kubelet[1972]: I0209 19:42:16.621837    1972 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb  9 19:42:16.621857 kubelet[1972]: I0209 19:42:16.621854    1972 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 19:42:16.622014 kubelet[1972]: I0209 19:42:16.622004    1972 server.go:895] "Client rotation is on, will bootstrap in background"
Feb  9 19:42:16.623413 kubelet[1972]: I0209 19:42:16.623396    1972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb  9 19:42:16.624216 kubelet[1972]: I0209 19:42:16.624179    1972 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 19:42:16.629205 kubelet[1972]: I0209 19:42:16.629187    1972 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 19:42:16.629367 kubelet[1972]: I0209 19:42:16.629350    1972 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 19:42:16.629505 kubelet[1972]: I0209 19:42:16.629493    1972 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb  9 19:42:16.629589 kubelet[1972]: I0209 19:42:16.629513    1972 topology_manager.go:138] "Creating topology manager with none policy"
Feb  9 19:42:16.629589 kubelet[1972]: I0209 19:42:16.629520    1972 container_manager_linux.go:301] "Creating device plugin manager"
Feb  9 19:42:16.629589 kubelet[1972]: I0209 19:42:16.629551    1972 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 19:42:16.629657 kubelet[1972]: I0209 19:42:16.629613    1972 kubelet.go:393] "Attempting to sync node with API server"
Feb  9 19:42:16.629657 kubelet[1972]: I0209 19:42:16.629626    1972 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 19:42:16.629657 kubelet[1972]: I0209 19:42:16.629644    1972 kubelet.go:309] "Adding apiserver pod source"
Feb  9 19:42:16.629657 kubelet[1972]: I0209 19:42:16.629655    1972 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 19:42:16.630076 kubelet[1972]: I0209 19:42:16.630063    1972 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 19:42:16.637847 kubelet[1972]: I0209 19:42:16.635433    1972 server.go:1232] "Started kubelet"
Feb  9 19:42:16.642406 kubelet[1972]: E0209 19:42:16.638434    1972 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 19:42:16.642406 kubelet[1972]: E0209 19:42:16.638511    1972 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 19:42:16.642406 kubelet[1972]: I0209 19:42:16.639044    1972 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb  9 19:42:16.642406 kubelet[1972]: I0209 19:42:16.639257    1972 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb  9 19:42:16.642406 kubelet[1972]: I0209 19:42:16.639311    1972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 19:42:16.642406 kubelet[1972]: I0209 19:42:16.639466    1972 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 19:42:16.642406 kubelet[1972]: I0209 19:42:16.640765    1972 server.go:462] "Adding debug handlers to kubelet server"
Feb  9 19:42:16.647393 kubelet[1972]: I0209 19:42:16.647354    1972 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb  9 19:42:16.647556 kubelet[1972]: E0209 19:42:16.647531    1972 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb  9 19:42:16.647716 kubelet[1972]: I0209 19:42:16.647694    1972 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 19:42:16.647946 kubelet[1972]: I0209 19:42:16.647896    1972 reconciler_new.go:29] "Reconciler: start to sync state"
Feb  9 19:42:16.663557 kubelet[1972]: I0209 19:42:16.663523    1972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb  9 19:42:16.664531 kubelet[1972]: I0209 19:42:16.664504    1972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb  9 19:42:16.664582 kubelet[1972]: I0209 19:42:16.664542    1972 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb  9 19:42:16.664582 kubelet[1972]: I0209 19:42:16.664563    1972 kubelet.go:2303] "Starting kubelet main sync loop"
Feb  9 19:42:16.664663 kubelet[1972]: E0209 19:42:16.664643    1972 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb  9 19:42:16.692279 kubelet[1972]: I0209 19:42:16.692239    1972 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 19:42:16.692279 kubelet[1972]: I0209 19:42:16.692259    1972 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 19:42:16.692279 kubelet[1972]: I0209 19:42:16.692275    1972 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 19:42:16.692540 kubelet[1972]: I0209 19:42:16.692526    1972 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb  9 19:42:16.692572 kubelet[1972]: I0209 19:42:16.692556    1972 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb  9 19:42:16.692572 kubelet[1972]: I0209 19:42:16.692563    1972 policy_none.go:49] "None policy: Start"
Feb  9 19:42:16.693197 kubelet[1972]: I0209 19:42:16.693180    1972 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 19:42:16.693249 kubelet[1972]: I0209 19:42:16.693202    1972 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 19:42:16.693424 kubelet[1972]: I0209 19:42:16.693401    1972 state_mem.go:75] "Updated machine memory state"
Feb  9 19:42:16.696742 kubelet[1972]: I0209 19:42:16.696705    1972 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 19:42:16.696925 kubelet[1972]: I0209 19:42:16.696903    1972 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 19:42:16.752859 kubelet[1972]: I0209 19:42:16.752836    1972 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Feb  9 19:42:16.765002 kubelet[1972]: I0209 19:42:16.764978    1972 topology_manager.go:215] "Topology Admit Handler" podUID="5e3a05bd084cb559afcd1e47f3bcc2c5" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb  9 19:42:16.765143 kubelet[1972]: I0209 19:42:16.765062    1972 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb  9 19:42:16.765143 kubelet[1972]: I0209 19:42:16.765110    1972 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb  9 19:42:16.865784 kubelet[1972]: E0209 19:42:16.865743    1972 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:16.882154 kubelet[1972]: I0209 19:42:16.882061    1972 kubelet_node_status.go:108] "Node was previously registered" node="localhost"
Feb  9 19:42:16.882154 kubelet[1972]: I0209 19:42:16.882143    1972 kubelet_node_status.go:73] "Successfully registered node" node="localhost"
Feb  9 19:42:16.949621 kubelet[1972]: I0209 19:42:16.949579    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:16.949621 kubelet[1972]: I0209 19:42:16.949617    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:16.949816 kubelet[1972]: I0209 19:42:16.949639    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e3a05bd084cb559afcd1e47f3bcc2c5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5e3a05bd084cb559afcd1e47f3bcc2c5\") " pod="kube-system/kube-apiserver-localhost"
Feb  9 19:42:16.949816 kubelet[1972]: I0209 19:42:16.949656    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:16.949816 kubelet[1972]: I0209 19:42:16.949674    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:16.949816 kubelet[1972]: I0209 19:42:16.949691    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:16.949816 kubelet[1972]: I0209 19:42:16.949707    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost"
Feb  9 19:42:16.949942 kubelet[1972]: I0209 19:42:16.949765    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:16.949942 kubelet[1972]: I0209 19:42:16.949793    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost"
Feb  9 19:42:17.039797 sudo[2005]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb  9 19:42:17.039963 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb  9 19:42:17.146683 kubelet[1972]: E0209 19:42:17.146557    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.146683 kubelet[1972]: E0209 19:42:17.146661    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.167043 kubelet[1972]: E0209 19:42:17.167006    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.507144 sudo[2005]: pam_unix(sudo:session): session closed for user root
Feb  9 19:42:17.630686 kubelet[1972]: I0209 19:42:17.630644    1972 apiserver.go:52] "Watching apiserver"
Feb  9 19:42:17.648721 kubelet[1972]: I0209 19:42:17.648684    1972 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 19:42:17.675503 kubelet[1972]: E0209 19:42:17.675477    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.675723 kubelet[1972]: E0209 19:42:17.675519    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.676044 kubelet[1972]: E0209 19:42:17.676016    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:17.704145 kubelet[1972]: I0209 19:42:17.704105    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.704048496 podCreationTimestamp="2024-02-09 19:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:17.665863564 +0000 UTC m=+1.085048249" watchObservedRunningTime="2024-02-09 19:42:17.704048496 +0000 UTC m=+1.123233181"
Feb  9 19:42:17.711843 kubelet[1972]: I0209 19:42:17.711818    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7116651109999999 podCreationTimestamp="2024-02-09 19:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:17.711676162 +0000 UTC m=+1.130860857" watchObservedRunningTime="2024-02-09 19:42:17.711665111 +0000 UTC m=+1.130849796"
Feb  9 19:42:17.712138 kubelet[1972]: I0209 19:42:17.712125    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.712082579 podCreationTimestamp="2024-02-09 19:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:17.705138368 +0000 UTC m=+1.124323053" watchObservedRunningTime="2024-02-09 19:42:17.712082579 +0000 UTC m=+1.131267274"
Feb  9 19:42:18.563804 sudo[1219]: pam_unix(sudo:session): session closed for user root
Feb  9 19:42:18.565033 sshd[1216]: pam_unix(sshd:session): session closed for user core
Feb  9 19:42:18.567030 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:38028.service: Deactivated successfully.
Feb  9 19:42:18.567742 systemd[1]: session-5.scope: Deactivated successfully.
Feb  9 19:42:18.567920 systemd[1]: session-5.scope: Consumed 3.329s CPU time.
Feb  9 19:42:18.568322 systemd-logind[1109]: Session 5 logged out. Waiting for processes to exit.
Feb  9 19:42:18.569114 systemd-logind[1109]: Removed session 5.
Feb  9 19:42:18.676785 kubelet[1972]: E0209 19:42:18.676746    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:19.677997 kubelet[1972]: E0209 19:42:19.677968    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:22.030413 kubelet[1972]: E0209 19:42:22.030382    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:22.681928 kubelet[1972]: E0209 19:42:22.681898    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:23.301805 kubelet[1972]: E0209 19:42:23.301771    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:23.683836 kubelet[1972]: E0209 19:42:23.683726    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:27.541258 update_engine[1114]: I0209 19:42:27.541211  1114 update_attempter.cc:509] Updating boot flags...
Feb  9 19:42:28.398828 kubelet[1972]: E0209 19:42:28.398805    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:30.675806 kubelet[1972]: I0209 19:42:30.675771    1972 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb  9 19:42:30.676166 env[1127]: time="2024-02-09T19:42:30.676130461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  9 19:42:30.676343 kubelet[1972]: I0209 19:42:30.676326    1972 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb  9 19:42:31.223054 kubelet[1972]: I0209 19:42:31.223007    1972 topology_manager.go:215] "Topology Admit Handler" podUID="e7071b02-83a8-4dfa-8735-de88b9d43554" podNamespace="kube-system" podName="kube-proxy-rlwk8"
Feb  9 19:42:31.227393 systemd[1]: Created slice kubepods-besteffort-pode7071b02_83a8_4dfa_8735_de88b9d43554.slice.
Feb  9 19:42:31.230964 kubelet[1972]: I0209 19:42:31.230926    1972 topology_manager.go:215] "Topology Admit Handler" podUID="96938e21-d672-4b3e-abab-137a982bc520" podNamespace="kube-system" podName="cilium-kjcqw"
Feb  9 19:42:31.239862 systemd[1]: Created slice kubepods-burstable-pod96938e21_d672_4b3e_abab_137a982bc520.slice.
Feb  9 19:42:31.251068 kubelet[1972]: I0209 19:42:31.251033    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-xtables-lock\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251068 kubelet[1972]: I0209 19:42:31.251071    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-cgroup\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251094    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-kernel\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251114    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-run\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251134    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-bpf-maps\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251155    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7071b02-83a8-4dfa-8735-de88b9d43554-kube-proxy\") pod \"kube-proxy-rlwk8\" (UID: \"e7071b02-83a8-4dfa-8735-de88b9d43554\") " pod="kube-system/kube-proxy-rlwk8"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251175    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-net\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251270 kubelet[1972]: I0209 19:42:31.251199    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-hubble-tls\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251406 kubelet[1972]: I0209 19:42:31.251254    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62zb\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-kube-api-access-x62zb\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251406 kubelet[1972]: I0209 19:42:31.251319    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cni-path\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251406 kubelet[1972]: I0209 19:42:31.251366    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7071b02-83a8-4dfa-8735-de88b9d43554-lib-modules\") pod \"kube-proxy-rlwk8\" (UID: \"e7071b02-83a8-4dfa-8735-de88b9d43554\") " pod="kube-system/kube-proxy-rlwk8"
Feb  9 19:42:31.251406 kubelet[1972]: I0209 19:42:31.251396    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9x9\" (UniqueName: \"kubernetes.io/projected/e7071b02-83a8-4dfa-8735-de88b9d43554-kube-api-access-zv9x9\") pod \"kube-proxy-rlwk8\" (UID: \"e7071b02-83a8-4dfa-8735-de88b9d43554\") " pod="kube-system/kube-proxy-rlwk8"
Feb  9 19:42:31.251510 kubelet[1972]: I0209 19:42:31.251440    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-etc-cni-netd\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251510 kubelet[1972]: I0209 19:42:31.251475    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96938e21-d672-4b3e-abab-137a982bc520-clustermesh-secrets\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251510 kubelet[1972]: I0209 19:42:31.251500    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96938e21-d672-4b3e-abab-137a982bc520-cilium-config-path\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251575 kubelet[1972]: I0209 19:42:31.251523    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7071b02-83a8-4dfa-8735-de88b9d43554-xtables-lock\") pod \"kube-proxy-rlwk8\" (UID: \"e7071b02-83a8-4dfa-8735-de88b9d43554\") " pod="kube-system/kube-proxy-rlwk8"
Feb  9 19:42:31.251575 kubelet[1972]: I0209 19:42:31.251544    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-hostproc\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.251620 kubelet[1972]: I0209 19:42:31.251575    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-lib-modules\") pod \"cilium-kjcqw\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") " pod="kube-system/cilium-kjcqw"
Feb  9 19:42:31.537042 kubelet[1972]: E0209 19:42:31.536990    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:31.537784 env[1127]: time="2024-02-09T19:42:31.537733373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rlwk8,Uid:e7071b02-83a8-4dfa-8735-de88b9d43554,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:31.543281 kubelet[1972]: E0209 19:42:31.543255    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:31.543746 env[1127]: time="2024-02-09T19:42:31.543698836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjcqw,Uid:96938e21-d672-4b3e-abab-137a982bc520,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:31.714952 kubelet[1972]: I0209 19:42:31.714858    1972 topology_manager.go:215] "Topology Admit Handler" podUID="03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-9hrwp"
Feb  9 19:42:31.719168 env[1127]: time="2024-02-09T19:42:31.719098609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:31.719370 env[1127]: time="2024-02-09T19:42:31.719194130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:31.719370 env[1127]: time="2024-02-09T19:42:31.719218666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:31.719370 env[1127]: time="2024-02-09T19:42:31.719344554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b6a1ad0437e7220f27c31c98dbc69a7e633f0aa1509be212d5d5223b9cf72b1 pid=2079 runtime=io.containerd.runc.v2
Feb  9 19:42:31.721620 systemd[1]: Created slice kubepods-besteffort-pod03c0ab39_2bbf_4c6c_86bb_a6fb161453f7.slice.
Feb  9 19:42:31.728492 env[1127]: time="2024-02-09T19:42:31.728398172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:31.728587 env[1127]: time="2024-02-09T19:42:31.728494454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:31.728587 env[1127]: time="2024-02-09T19:42:31.728518219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:31.728677 env[1127]: time="2024-02-09T19:42:31.728645660Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c pid=2095 runtime=io.containerd.runc.v2
Feb  9 19:42:31.736317 systemd[1]: Started cri-containerd-1b6a1ad0437e7220f27c31c98dbc69a7e633f0aa1509be212d5d5223b9cf72b1.scope.
Feb  9 19:42:31.751510 systemd[1]: Started cri-containerd-29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c.scope.
Feb  9 19:42:31.757966 kubelet[1972]: I0209 19:42:31.755252    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjb2x\" (UniqueName: \"kubernetes.io/projected/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-kube-api-access-gjb2x\") pod \"cilium-operator-6bc8ccdb58-9hrwp\" (UID: \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\") " pod="kube-system/cilium-operator-6bc8ccdb58-9hrwp"
Feb  9 19:42:31.757966 kubelet[1972]: I0209 19:42:31.755296    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-9hrwp\" (UID: \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\") " pod="kube-system/cilium-operator-6bc8ccdb58-9hrwp"
Feb  9 19:42:31.768100 env[1127]: time="2024-02-09T19:42:31.768050328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rlwk8,Uid:e7071b02-83a8-4dfa-8735-de88b9d43554,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b6a1ad0437e7220f27c31c98dbc69a7e633f0aa1509be212d5d5223b9cf72b1\""
Feb  9 19:42:31.768750 kubelet[1972]: E0209 19:42:31.768652    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:31.770348 env[1127]: time="2024-02-09T19:42:31.770316270Z" level=info msg="CreateContainer within sandbox \"1b6a1ad0437e7220f27c31c98dbc69a7e633f0aa1509be212d5d5223b9cf72b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  9 19:42:31.774112 env[1127]: time="2024-02-09T19:42:31.772836302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjcqw,Uid:96938e21-d672-4b3e-abab-137a982bc520,Namespace:kube-system,Attempt:0,} returns sandbox id \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\""
Feb  9 19:42:31.774112 env[1127]: time="2024-02-09T19:42:31.773933044Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  9 19:42:31.774243 kubelet[1972]: E0209 19:42:31.773227    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:31.794657 env[1127]: time="2024-02-09T19:42:31.794530134Z" level=info msg="CreateContainer within sandbox \"1b6a1ad0437e7220f27c31c98dbc69a7e633f0aa1509be212d5d5223b9cf72b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8576d3333cbb11d5093688ab7f33c39802644250a30ca1974dd108581103bcca\""
Feb  9 19:42:31.796637 env[1127]: time="2024-02-09T19:42:31.796600055Z" level=info msg="StartContainer for \"8576d3333cbb11d5093688ab7f33c39802644250a30ca1974dd108581103bcca\""
Feb  9 19:42:31.813031 systemd[1]: Started cri-containerd-8576d3333cbb11d5093688ab7f33c39802644250a30ca1974dd108581103bcca.scope.
Feb  9 19:42:31.838692 env[1127]: time="2024-02-09T19:42:31.838639243Z" level=info msg="StartContainer for \"8576d3333cbb11d5093688ab7f33c39802644250a30ca1974dd108581103bcca\" returns successfully"
Feb  9 19:42:32.023836 kubelet[1972]: E0209 19:42:32.023798    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:32.024279 env[1127]: time="2024-02-09T19:42:32.024230252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9hrwp,Uid:03c0ab39-2bbf-4c6c-86bb-a6fb161453f7,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:32.115327 env[1127]: time="2024-02-09T19:42:32.115192270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:32.115327 env[1127]: time="2024-02-09T19:42:32.115232485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:32.115327 env[1127]: time="2024-02-09T19:42:32.115242484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:32.115519 env[1127]: time="2024-02-09T19:42:32.115371308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157 pid=2326 runtime=io.containerd.runc.v2
Feb  9 19:42:32.125602 systemd[1]: Started cri-containerd-2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157.scope.
Feb  9 19:42:32.163622 env[1127]: time="2024-02-09T19:42:32.163584367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9hrwp,Uid:03c0ab39-2bbf-4c6c-86bb-a6fb161453f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\""
Feb  9 19:42:32.164346 kubelet[1972]: E0209 19:42:32.164325    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:32.696873 kubelet[1972]: E0209 19:42:32.696834    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:41.728942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050255161.mount: Deactivated successfully.
Feb  9 19:42:42.678888 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:54256.service.
Feb  9 19:42:43.074983 sshd[2363]: Accepted publickey for core from 10.0.0.1 port 54256 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:42:43.075798 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:42:43.079285 systemd-logind[1109]: New session 6 of user core.
Feb  9 19:42:43.080001 systemd[1]: Started session-6.scope.
Feb  9 19:42:43.205257 sshd[2363]: pam_unix(sshd:session): session closed for user core
Feb  9 19:42:43.207217 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:54256.service: Deactivated successfully.
Feb  9 19:42:43.207857 systemd[1]: session-6.scope: Deactivated successfully.
Feb  9 19:42:43.208299 systemd-logind[1109]: Session 6 logged out. Waiting for processes to exit.
Feb  9 19:42:43.208916 systemd-logind[1109]: Removed session 6.
Feb  9 19:42:45.610203 env[1127]: time="2024-02-09T19:42:45.610153233Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:45.611936 env[1127]: time="2024-02-09T19:42:45.611896211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:45.613740 env[1127]: time="2024-02-09T19:42:45.613708099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:45.614281 env[1127]: time="2024-02-09T19:42:45.614249347Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb  9 19:42:45.614971 env[1127]: time="2024-02-09T19:42:45.614834208Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  9 19:42:45.615766 env[1127]: time="2024-02-09T19:42:45.615742105Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 19:42:45.627110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197236658.mount: Deactivated successfully.
Feb  9 19:42:45.632538 env[1127]: time="2024-02-09T19:42:45.632501778Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\""
Feb  9 19:42:45.633171 env[1127]: time="2024-02-09T19:42:45.633009233Z" level=info msg="StartContainer for \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\""
Feb  9 19:42:45.648559 systemd[1]: Started cri-containerd-e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2.scope.
Feb  9 19:42:45.762120 systemd[1]: cri-containerd-e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2.scope: Deactivated successfully.
Feb  9 19:42:46.056547 env[1127]: time="2024-02-09T19:42:46.056481956Z" level=info msg="StartContainer for \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\" returns successfully"
Feb  9 19:42:46.221657 env[1127]: time="2024-02-09T19:42:46.221608764Z" level=info msg="shim disconnected" id=e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2
Feb  9 19:42:46.221657 env[1127]: time="2024-02-09T19:42:46.221654380Z" level=warning msg="cleaning up after shim disconnected" id=e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2 namespace=k8s.io
Feb  9 19:42:46.221657 env[1127]: time="2024-02-09T19:42:46.221663006Z" level=info msg="cleaning up dead shim"
Feb  9 19:42:46.228048 env[1127]: time="2024-02-09T19:42:46.228012516Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2425 runtime=io.containerd.runc.v2\n"
Feb  9 19:42:46.625068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2-rootfs.mount: Deactivated successfully.
Feb  9 19:42:47.067578 kubelet[1972]: E0209 19:42:47.067541    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:47.071616 env[1127]: time="2024-02-09T19:42:47.071566129Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 19:42:47.089873 kubelet[1972]: I0209 19:42:47.089831    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rlwk8" podStartSLOduration=16.089785966 podCreationTimestamp="2024-02-09 19:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:32.707285843 +0000 UTC m=+16.126470528" watchObservedRunningTime="2024-02-09 19:42:47.089785966 +0000 UTC m=+30.508970651"
Feb  9 19:42:47.242476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164056993.mount: Deactivated successfully.
Feb  9 19:42:47.245532 env[1127]: time="2024-02-09T19:42:47.245403207Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\""
Feb  9 19:42:47.245992 env[1127]: time="2024-02-09T19:42:47.245968019Z" level=info msg="StartContainer for \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\""
Feb  9 19:42:47.262904 systemd[1]: Started cri-containerd-827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464.scope.
Feb  9 19:42:47.286919 env[1127]: time="2024-02-09T19:42:47.286860775Z" level=info msg="StartContainer for \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\" returns successfully"
Feb  9 19:42:47.295468 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 19:42:47.295737 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 19:42:47.295976 systemd[1]: Stopping systemd-sysctl.service...
Feb  9 19:42:47.297450 systemd[1]: Starting systemd-sysctl.service...
Feb  9 19:42:47.299117 systemd[1]: cri-containerd-827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464.scope: Deactivated successfully.
Feb  9 19:42:47.307479 systemd[1]: Finished systemd-sysctl.service.
Feb  9 19:42:47.322487 env[1127]: time="2024-02-09T19:42:47.322361433Z" level=info msg="shim disconnected" id=827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464
Feb  9 19:42:47.322487 env[1127]: time="2024-02-09T19:42:47.322420154Z" level=warning msg="cleaning up after shim disconnected" id=827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464 namespace=k8s.io
Feb  9 19:42:47.322487 env[1127]: time="2024-02-09T19:42:47.322436505Z" level=info msg="cleaning up dead shim"
Feb  9 19:42:47.328875 env[1127]: time="2024-02-09T19:42:47.328827691Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2488 runtime=io.containerd.runc.v2\n"
Feb  9 19:42:47.625248 systemd[1]: run-containerd-runc-k8s.io-827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464-runc.2Wm0bq.mount: Deactivated successfully.
Feb  9 19:42:47.625357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464-rootfs.mount: Deactivated successfully.
Feb  9 19:42:48.071581 kubelet[1972]: E0209 19:42:48.071497    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:48.072855 env[1127]: time="2024-02-09T19:42:48.072793108Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 19:42:48.086883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208832746.mount: Deactivated successfully.
Feb  9 19:42:48.088697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206121385.mount: Deactivated successfully.
Feb  9 19:42:48.090682 env[1127]: time="2024-02-09T19:42:48.090634938Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\""
Feb  9 19:42:48.091262 env[1127]: time="2024-02-09T19:42:48.091219948Z" level=info msg="StartContainer for \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\""
Feb  9 19:42:48.107340 systemd[1]: Started cri-containerd-5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db.scope.
Feb  9 19:42:48.136300 env[1127]: time="2024-02-09T19:42:48.136235412Z" level=info msg="StartContainer for \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\" returns successfully"
Feb  9 19:42:48.136986 systemd[1]: cri-containerd-5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db.scope: Deactivated successfully.
Feb  9 19:42:48.209748 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:46838.service.
Feb  9 19:42:48.386534 env[1127]: time="2024-02-09T19:42:48.386421468Z" level=info msg="shim disconnected" id=5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db
Feb  9 19:42:48.386737 env[1127]: time="2024-02-09T19:42:48.386714420Z" level=warning msg="cleaning up after shim disconnected" id=5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db namespace=k8s.io
Feb  9 19:42:48.386814 env[1127]: time="2024-02-09T19:42:48.386795201Z" level=info msg="cleaning up dead shim"
Feb  9 19:42:48.397867 env[1127]: time="2024-02-09T19:42:48.397807533Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n"
Feb  9 19:42:48.406476 env[1127]: time="2024-02-09T19:42:48.406415214Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:48.410288 env[1127]: time="2024-02-09T19:42:48.410257968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:48.411959 env[1127]: time="2024-02-09T19:42:48.411916897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 19:42:48.412411 env[1127]: time="2024-02-09T19:42:48.412367854Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb  9 19:42:48.413964 env[1127]: time="2024-02-09T19:42:48.413921235Z" level=info msg="CreateContainer within sandbox \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  9 19:42:48.419191 sshd[2544]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:42:48.420384 sshd[2544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:42:48.424116 systemd-logind[1109]: New session 7 of user core.
Feb  9 19:42:48.424930 systemd[1]: Started session-7.scope.
Feb  9 19:42:48.425217 env[1127]: time="2024-02-09T19:42:48.424927516Z" level=info msg="CreateContainer within sandbox \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\""
Feb  9 19:42:48.425808 env[1127]: time="2024-02-09T19:42:48.425777223Z" level=info msg="StartContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\""
Feb  9 19:42:48.439708 systemd[1]: Started cri-containerd-c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db.scope.
Feb  9 19:42:48.463060 env[1127]: time="2024-02-09T19:42:48.463012059Z" level=info msg="StartContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" returns successfully"
Feb  9 19:42:48.619338 sshd[2544]: pam_unix(sshd:session): session closed for user core
Feb  9 19:42:48.622092 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:46838.service: Deactivated successfully.
Feb  9 19:42:48.622815 systemd[1]: session-7.scope: Deactivated successfully.
Feb  9 19:42:48.623318 systemd-logind[1109]: Session 7 logged out. Waiting for processes to exit.
Feb  9 19:42:48.625914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db-rootfs.mount: Deactivated successfully.
Feb  9 19:42:48.627145 systemd-logind[1109]: Removed session 7.
Feb  9 19:42:49.074068 kubelet[1972]: E0209 19:42:49.074035    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:49.076411 kubelet[1972]: E0209 19:42:49.076398    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:49.077801 env[1127]: time="2024-02-09T19:42:49.077771945Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 19:42:49.090061 kubelet[1972]: I0209 19:42:49.090028    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-9hrwp" podStartSLOduration=1.8425004980000002 podCreationTimestamp="2024-02-09 19:42:31 +0000 UTC" firstStartedPulling="2024-02-09 19:42:32.165143551 +0000 UTC m=+15.584328237" lastFinishedPulling="2024-02-09 19:42:48.412623556 +0000 UTC m=+31.831808231" observedRunningTime="2024-02-09 19:42:49.089694545 +0000 UTC m=+32.508879230" watchObservedRunningTime="2024-02-09 19:42:49.089980492 +0000 UTC m=+32.509165187"
Feb  9 19:42:49.098569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005385793.mount: Deactivated successfully.
Feb  9 19:42:49.104353 env[1127]: time="2024-02-09T19:42:49.104303392Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\""
Feb  9 19:42:49.105217 env[1127]: time="2024-02-09T19:42:49.105164200Z" level=info msg="StartContainer for \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\""
Feb  9 19:42:49.131222 systemd[1]: Started cri-containerd-d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f.scope.
Feb  9 19:42:49.182040 systemd[1]: cri-containerd-d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f.scope: Deactivated successfully.
Feb  9 19:42:49.184270 env[1127]: time="2024-02-09T19:42:49.184214010Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96938e21_d672_4b3e_abab_137a982bc520.slice/cri-containerd-d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f.scope/memory.events\": no such file or directory"
Feb  9 19:42:49.189643 env[1127]: time="2024-02-09T19:42:49.189597829Z" level=info msg="StartContainer for \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\" returns successfully"
Feb  9 19:42:49.220745 env[1127]: time="2024-02-09T19:42:49.220698064Z" level=info msg="shim disconnected" id=d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f
Feb  9 19:42:49.220745 env[1127]: time="2024-02-09T19:42:49.220741265Z" level=warning msg="cleaning up after shim disconnected" id=d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f namespace=k8s.io
Feb  9 19:42:49.220745 env[1127]: time="2024-02-09T19:42:49.220750302Z" level=info msg="cleaning up dead shim"
Feb  9 19:42:49.228011 env[1127]: time="2024-02-09T19:42:49.227966996Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2649 runtime=io.containerd.runc.v2\n"
Feb  9 19:42:49.625432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f-rootfs.mount: Deactivated successfully.
Feb  9 19:42:50.081267 kubelet[1972]: E0209 19:42:50.081241    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:50.081645 kubelet[1972]: E0209 19:42:50.081287    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:50.083264 env[1127]: time="2024-02-09T19:42:50.083219646Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 19:42:50.395612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530117041.mount: Deactivated successfully.
Feb  9 19:42:50.671966 env[1127]: time="2024-02-09T19:42:50.671843827Z" level=info msg="CreateContainer within sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\""
Feb  9 19:42:50.672445 env[1127]: time="2024-02-09T19:42:50.672415972Z" level=info msg="StartContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\""
Feb  9 19:42:50.692029 systemd[1]: Started cri-containerd-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899.scope.
Feb  9 19:42:50.890683 env[1127]: time="2024-02-09T19:42:50.890440020Z" level=info msg="StartContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" returns successfully"
Feb  9 19:42:51.011167 kubelet[1972]: I0209 19:42:51.010964    1972 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  9 19:42:51.086108 kubelet[1972]: E0209 19:42:51.086079    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:51.232333 kubelet[1972]: I0209 19:42:51.232281    1972 topology_manager.go:215] "Topology Admit Handler" podUID="fe88e656-63af-4d01-84a3-f2ae6287f8ce" podNamespace="kube-system" podName="coredns-5dd5756b68-s4d72"
Feb  9 19:42:51.233690 kubelet[1972]: I0209 19:42:51.233654    1972 topology_manager.go:215] "Topology Admit Handler" podUID="b1911326-e7ff-4aae-9a59-19337a552fc2" podNamespace="kube-system" podName="coredns-5dd5756b68-mlmpb"
Feb  9 19:42:51.235130 kubelet[1972]: I0209 19:42:51.235107    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kjcqw" podStartSLOduration=6.394015529 podCreationTimestamp="2024-02-09 19:42:31 +0000 UTC" firstStartedPulling="2024-02-09 19:42:31.773590998 +0000 UTC m=+15.192775673" lastFinishedPulling="2024-02-09 19:42:45.614644802 +0000 UTC m=+29.033829487" observedRunningTime="2024-02-09 19:42:51.234766724 +0000 UTC m=+34.653951419" watchObservedRunningTime="2024-02-09 19:42:51.235069343 +0000 UTC m=+34.654254028"
Feb  9 19:42:51.239540 systemd[1]: Created slice kubepods-burstable-podfe88e656_63af_4d01_84a3_f2ae6287f8ce.slice.
Feb  9 19:42:51.243752 systemd[1]: Created slice kubepods-burstable-podb1911326_e7ff_4aae_9a59_19337a552fc2.slice.
Feb  9 19:42:51.298915 kubelet[1972]: I0209 19:42:51.298763    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88e656-63af-4d01-84a3-f2ae6287f8ce-config-volume\") pod \"coredns-5dd5756b68-s4d72\" (UID: \"fe88e656-63af-4d01-84a3-f2ae6287f8ce\") " pod="kube-system/coredns-5dd5756b68-s4d72"
Feb  9 19:42:51.298915 kubelet[1972]: I0209 19:42:51.298826    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1911326-e7ff-4aae-9a59-19337a552fc2-config-volume\") pod \"coredns-5dd5756b68-mlmpb\" (UID: \"b1911326-e7ff-4aae-9a59-19337a552fc2\") " pod="kube-system/coredns-5dd5756b68-mlmpb"
Feb  9 19:42:51.298915 kubelet[1972]: I0209 19:42:51.298850    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g45g\" (UniqueName: \"kubernetes.io/projected/b1911326-e7ff-4aae-9a59-19337a552fc2-kube-api-access-8g45g\") pod \"coredns-5dd5756b68-mlmpb\" (UID: \"b1911326-e7ff-4aae-9a59-19337a552fc2\") " pod="kube-system/coredns-5dd5756b68-mlmpb"
Feb  9 19:42:51.298915 kubelet[1972]: I0209 19:42:51.298893    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhh89\" (UniqueName: \"kubernetes.io/projected/fe88e656-63af-4d01-84a3-f2ae6287f8ce-kube-api-access-xhh89\") pod \"coredns-5dd5756b68-s4d72\" (UID: \"fe88e656-63af-4d01-84a3-f2ae6287f8ce\") " pod="kube-system/coredns-5dd5756b68-s4d72"
Feb  9 19:42:51.542450 kubelet[1972]: E0209 19:42:51.542409    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:51.543059 env[1127]: time="2024-02-09T19:42:51.543021359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s4d72,Uid:fe88e656-63af-4d01-84a3-f2ae6287f8ce,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:51.545573 kubelet[1972]: E0209 19:42:51.545556    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:51.545926 env[1127]: time="2024-02-09T19:42:51.545888726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mlmpb,Uid:b1911326-e7ff-4aae-9a59-19337a552fc2,Namespace:kube-system,Attempt:0,}"
Feb  9 19:42:51.683691 systemd[1]: run-containerd-runc-k8s.io-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899-runc.25jdGo.mount: Deactivated successfully.
Feb  9 19:42:52.088157 kubelet[1972]: E0209 19:42:52.088114    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:52.691668 systemd-networkd[1023]: cilium_host: Link UP
Feb  9 19:42:52.691772 systemd-networkd[1023]: cilium_net: Link UP
Feb  9 19:42:52.691775 systemd-networkd[1023]: cilium_net: Gained carrier
Feb  9 19:42:52.691896 systemd-networkd[1023]: cilium_host: Gained carrier
Feb  9 19:42:52.695500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  9 19:42:52.695376 systemd-networkd[1023]: cilium_host: Gained IPv6LL
Feb  9 19:42:52.766178 systemd-networkd[1023]: cilium_vxlan: Link UP
Feb  9 19:42:52.766187 systemd-networkd[1023]: cilium_vxlan: Gained carrier
Feb  9 19:42:53.017501 kernel: NET: Registered PF_ALG protocol family
Feb  9 19:42:53.089544 kubelet[1972]: E0209 19:42:53.089508    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:53.139621 systemd-networkd[1023]: cilium_net: Gained IPv6LL
Feb  9 19:42:53.591476 systemd-networkd[1023]: lxc_health: Link UP
Feb  9 19:42:53.601535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 19:42:53.601275 systemd-networkd[1023]: lxc_health: Gained carrier
Feb  9 19:42:53.624134 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:46844.service.
Feb  9 19:42:53.668554 sshd[3180]: Accepted publickey for core from 10.0.0.1 port 46844 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:42:53.669826 sshd[3180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:42:53.674731 systemd[1]: Started session-8.scope.
Feb  9 19:42:53.676343 systemd-logind[1109]: New session 8 of user core.
Feb  9 19:42:53.822256 sshd[3180]: pam_unix(sshd:session): session closed for user core
Feb  9 19:42:53.824787 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:46844.service: Deactivated successfully.
Feb  9 19:42:53.825449 systemd[1]: session-8.scope: Deactivated successfully.
Feb  9 19:42:53.826199 systemd-logind[1109]: Session 8 logged out. Waiting for processes to exit.
Feb  9 19:42:53.826819 systemd-logind[1109]: Removed session 8.
Feb  9 19:42:54.129739 systemd-networkd[1023]: lxc4c7408913bd9: Link UP
Feb  9 19:42:54.138502 kernel: eth0: renamed from tmp8544f
Feb  9 19:42:54.157780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 19:42:54.157900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4c7408913bd9: link becomes ready
Feb  9 19:42:54.159249 systemd-networkd[1023]: lxc4c7408913bd9: Gained carrier
Feb  9 19:42:54.159408 systemd-networkd[1023]: lxcd9db266ffb63: Link UP
Feb  9 19:42:54.165529 kernel: eth0: renamed from tmpacc72
Feb  9 19:42:54.174156 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 19:42:54.174281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd9db266ffb63: link becomes ready
Feb  9 19:42:54.174187 systemd-networkd[1023]: lxcd9db266ffb63: Gained carrier
Feb  9 19:42:54.294740 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL
Feb  9 19:42:54.995915 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Feb  9 19:42:55.443658 systemd-networkd[1023]: lxcd9db266ffb63: Gained IPv6LL
Feb  9 19:42:55.545341 kubelet[1972]: E0209 19:42:55.545313    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:55.955610 systemd-networkd[1023]: lxc4c7408913bd9: Gained IPv6LL
Feb  9 19:42:56.096250 kubelet[1972]: E0209 19:42:56.096213    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:57.504723 env[1127]: time="2024-02-09T19:42:57.504644884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:57.504723 env[1127]: time="2024-02-09T19:42:57.504697573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:57.505100 env[1127]: time="2024-02-09T19:42:57.504706810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:57.505353 env[1127]: time="2024-02-09T19:42:57.505322617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/acc72d1ff04580e869b3d8b68e3b56173e9f67ed556fe4897f0a6f6cd0df8819 pid=3234 runtime=io.containerd.runc.v2
Feb  9 19:42:57.519884 systemd[1]: Started cri-containerd-acc72d1ff04580e869b3d8b68e3b56173e9f67ed556fe4897f0a6f6cd0df8819.scope.
Feb  9 19:42:57.529389 systemd-resolved[1068]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 19:42:57.538769 env[1127]: time="2024-02-09T19:42:57.538679562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:42:57.538769 env[1127]: time="2024-02-09T19:42:57.538732060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:42:57.538937 env[1127]: time="2024-02-09T19:42:57.538744664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:42:57.539186 env[1127]: time="2024-02-09T19:42:57.539139906Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496 pid=3267 runtime=io.containerd.runc.v2
Feb  9 19:42:57.557341 env[1127]: time="2024-02-09T19:42:57.557293504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mlmpb,Uid:b1911326-e7ff-4aae-9a59-19337a552fc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"acc72d1ff04580e869b3d8b68e3b56173e9f67ed556fe4897f0a6f6cd0df8819\""
Feb  9 19:42:57.558786 kubelet[1972]: E0209 19:42:57.558220    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:57.560440 systemd[1]: Started cri-containerd-8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496.scope.
Feb  9 19:42:57.561986 env[1127]: time="2024-02-09T19:42:57.561950980Z" level=info msg="CreateContainer within sandbox \"acc72d1ff04580e869b3d8b68e3b56173e9f67ed556fe4897f0a6f6cd0df8819\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  9 19:42:57.574055 systemd-resolved[1068]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 19:42:57.597717 env[1127]: time="2024-02-09T19:42:57.597663337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-s4d72,Uid:fe88e656-63af-4d01-84a3-f2ae6287f8ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496\""
Feb  9 19:42:57.598203 kubelet[1972]: E0209 19:42:57.598189    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:57.601036 env[1127]: time="2024-02-09T19:42:57.601011644Z" level=info msg="CreateContainer within sandbox \"8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  9 19:42:57.761958 env[1127]: time="2024-02-09T19:42:57.761818405Z" level=info msg="CreateContainer within sandbox \"8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b86e456c75185ccbcc491fb0bcc7b58704cc48d30283d2df4fcae958f658d93b\""
Feb  9 19:42:57.763627 env[1127]: time="2024-02-09T19:42:57.763578059Z" level=info msg="StartContainer for \"b86e456c75185ccbcc491fb0bcc7b58704cc48d30283d2df4fcae958f658d93b\""
Feb  9 19:42:57.763627 env[1127]: time="2024-02-09T19:42:57.763595532Z" level=info msg="CreateContainer within sandbox \"acc72d1ff04580e869b3d8b68e3b56173e9f67ed556fe4897f0a6f6cd0df8819\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23df5eee5d10bd18702d8b0177bfac18eb754513ccc2097c9f8b9f6364308af9\""
Feb  9 19:42:57.766859 env[1127]: time="2024-02-09T19:42:57.766807884Z" level=info msg="StartContainer for \"23df5eee5d10bd18702d8b0177bfac18eb754513ccc2097c9f8b9f6364308af9\""
Feb  9 19:42:57.782207 systemd[1]: Started cri-containerd-b86e456c75185ccbcc491fb0bcc7b58704cc48d30283d2df4fcae958f658d93b.scope.
Feb  9 19:42:57.786934 systemd[1]: Started cri-containerd-23df5eee5d10bd18702d8b0177bfac18eb754513ccc2097c9f8b9f6364308af9.scope.
Feb  9 19:42:57.811199 env[1127]: time="2024-02-09T19:42:57.811137533Z" level=info msg="StartContainer for \"b86e456c75185ccbcc491fb0bcc7b58704cc48d30283d2df4fcae958f658d93b\" returns successfully"
Feb  9 19:42:57.812725 env[1127]: time="2024-02-09T19:42:57.812671543Z" level=info msg="StartContainer for \"23df5eee5d10bd18702d8b0177bfac18eb754513ccc2097c9f8b9f6364308af9\" returns successfully"
Feb  9 19:42:58.100771 kubelet[1972]: E0209 19:42:58.100608    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:58.102234 kubelet[1972]: E0209 19:42:58.102163    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:58.212440 kubelet[1972]: I0209 19:42:58.212362    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mlmpb" podStartSLOduration=27.212324816 podCreationTimestamp="2024-02-09 19:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:58.211892675 +0000 UTC m=+41.631077360" watchObservedRunningTime="2024-02-09 19:42:58.212324816 +0000 UTC m=+41.631509501"
Feb  9 19:42:58.226991 kubelet[1972]: I0209 19:42:58.226951    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-s4d72" podStartSLOduration=27.22690468 podCreationTimestamp="2024-02-09 19:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:58.226299434 +0000 UTC m=+41.645484119" watchObservedRunningTime="2024-02-09 19:42:58.22690468 +0000 UTC m=+41.646089366"
Feb  9 19:42:58.508811 systemd[1]: run-containerd-runc-k8s.io-8544fdab364cb02020a523ed0424c80aa6439f4b91d6f644266bf34bdb11f496-runc.69OR3E.mount: Deactivated successfully.
Feb  9 19:42:58.826525 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:36938.service.
Feb  9 19:42:58.860692 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:42:58.861926 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:42:58.864957 systemd-logind[1109]: New session 9 of user core.
Feb  9 19:42:58.865686 systemd[1]: Started session-9.scope.
Feb  9 19:42:58.991612 sshd[3394]: pam_unix(sshd:session): session closed for user core
Feb  9 19:42:58.993438 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:36938.service: Deactivated successfully.
Feb  9 19:42:58.994160 systemd[1]: session-9.scope: Deactivated successfully.
Feb  9 19:42:58.994650 systemd-logind[1109]: Session 9 logged out. Waiting for processes to exit.
Feb  9 19:42:58.995240 systemd-logind[1109]: Removed session 9.
Feb  9 19:42:59.104090 kubelet[1972]: E0209 19:42:59.103976    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:42:59.104090 kubelet[1972]: E0209 19:42:59.104052    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:00.105453 kubelet[1972]: E0209 19:43:00.105424    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:00.105453 kubelet[1972]: E0209 19:43:00.105431    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:03.996314 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:36944.service.
Feb  9 19:43:04.030761 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 36944 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:04.032105 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:04.035357 systemd-logind[1109]: New session 10 of user core.
Feb  9 19:43:04.036410 systemd[1]: Started session-10.scope.
Feb  9 19:43:04.167255 sshd[3412]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:04.169892 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:36944.service: Deactivated successfully.
Feb  9 19:43:04.170488 systemd[1]: session-10.scope: Deactivated successfully.
Feb  9 19:43:04.170990 systemd-logind[1109]: Session 10 logged out. Waiting for processes to exit.
Feb  9 19:43:04.172046 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:36952.service.
Feb  9 19:43:04.172693 systemd-logind[1109]: Removed session 10.
Feb  9 19:43:04.204988 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 36952 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:04.206103 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:04.208898 systemd-logind[1109]: New session 11 of user core.
Feb  9 19:43:04.209825 systemd[1]: Started session-11.scope.
Feb  9 19:43:04.842100 sshd[3426]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:04.845865 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:36962.service.
Feb  9 19:43:04.848816 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:36952.service: Deactivated successfully.
Feb  9 19:43:04.849658 systemd[1]: session-11.scope: Deactivated successfully.
Feb  9 19:43:04.852200 systemd-logind[1109]: Session 11 logged out. Waiting for processes to exit.
Feb  9 19:43:04.853975 systemd-logind[1109]: Removed session 11.
Feb  9 19:43:04.885261 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:04.886483 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:04.889950 systemd-logind[1109]: New session 12 of user core.
Feb  9 19:43:04.891055 systemd[1]: Started session-12.scope.
Feb  9 19:43:05.147985 sshd[3436]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:05.150113 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:36962.service: Deactivated successfully.
Feb  9 19:43:05.150784 systemd[1]: session-12.scope: Deactivated successfully.
Feb  9 19:43:05.151489 systemd-logind[1109]: Session 12 logged out. Waiting for processes to exit.
Feb  9 19:43:05.152059 systemd-logind[1109]: Removed session 12.
Feb  9 19:43:10.152018 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:36466.service.
Feb  9 19:43:10.184548 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 36466 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:10.185698 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:10.188721 systemd-logind[1109]: New session 13 of user core.
Feb  9 19:43:10.189498 systemd[1]: Started session-13.scope.
Feb  9 19:43:10.292436 sshd[3450]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:10.294733 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:36466.service: Deactivated successfully.
Feb  9 19:43:10.295620 systemd[1]: session-13.scope: Deactivated successfully.
Feb  9 19:43:10.296545 systemd-logind[1109]: Session 13 logged out. Waiting for processes to exit.
Feb  9 19:43:10.297260 systemd-logind[1109]: Removed session 13.
Feb  9 19:43:15.296740 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:36470.service.
Feb  9 19:43:15.329394 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 36470 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:15.330392 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:15.333229 systemd-logind[1109]: New session 14 of user core.
Feb  9 19:43:15.333949 systemd[1]: Started session-14.scope.
Feb  9 19:43:15.431891 sshd[3463]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:15.434718 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:36470.service: Deactivated successfully.
Feb  9 19:43:15.435253 systemd[1]: session-14.scope: Deactivated successfully.
Feb  9 19:43:15.435848 systemd-logind[1109]: Session 14 logged out. Waiting for processes to exit.
Feb  9 19:43:15.436902 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:36480.service.
Feb  9 19:43:15.437711 systemd-logind[1109]: Removed session 14.
Feb  9 19:43:15.469526 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 36480 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:15.470440 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:15.473745 systemd-logind[1109]: New session 15 of user core.
Feb  9 19:43:15.474730 systemd[1]: Started session-15.scope.
Feb  9 19:43:16.051119 sshd[3476]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:16.053850 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:36480.service: Deactivated successfully.
Feb  9 19:43:16.054426 systemd[1]: session-15.scope: Deactivated successfully.
Feb  9 19:43:16.055042 systemd-logind[1109]: Session 15 logged out. Waiting for processes to exit.
Feb  9 19:43:16.056099 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:36484.service.
Feb  9 19:43:16.057095 systemd-logind[1109]: Removed session 15.
Feb  9 19:43:16.090039 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 36484 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:16.090945 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:16.093781 systemd-logind[1109]: New session 16 of user core.
Feb  9 19:43:16.094497 systemd[1]: Started session-16.scope.
Feb  9 19:43:16.896893 sshd[3487]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:16.898979 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:36498.service.
Feb  9 19:43:16.900283 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:36484.service: Deactivated successfully.
Feb  9 19:43:16.900814 systemd[1]: session-16.scope: Deactivated successfully.
Feb  9 19:43:16.903599 systemd-logind[1109]: Session 16 logged out. Waiting for processes to exit.
Feb  9 19:43:16.904527 systemd-logind[1109]: Removed session 16.
Feb  9 19:43:16.932634 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:16.933830 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:16.937281 systemd-logind[1109]: New session 17 of user core.
Feb  9 19:43:16.938051 systemd[1]: Started session-17.scope.
Feb  9 19:43:17.204999 sshd[3505]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:17.208218 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:36506.service.
Feb  9 19:43:17.209213 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:36498.service: Deactivated successfully.
Feb  9 19:43:17.209733 systemd[1]: session-17.scope: Deactivated successfully.
Feb  9 19:43:17.210387 systemd-logind[1109]: Session 17 logged out. Waiting for processes to exit.
Feb  9 19:43:17.211946 systemd-logind[1109]: Removed session 17.
Feb  9 19:43:17.245789 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 36506 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:17.246917 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:17.250323 systemd-logind[1109]: New session 18 of user core.
Feb  9 19:43:17.251115 systemd[1]: Started session-18.scope.
Feb  9 19:43:17.355753 sshd[3517]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:17.358597 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:36506.service: Deactivated successfully.
Feb  9 19:43:17.359291 systemd[1]: session-18.scope: Deactivated successfully.
Feb  9 19:43:17.360153 systemd-logind[1109]: Session 18 logged out. Waiting for processes to exit.
Feb  9 19:43:17.361123 systemd-logind[1109]: Removed session 18.
Feb  9 19:43:22.359857 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:38224.service.
Feb  9 19:43:22.393439 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 38224 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:22.394599 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:22.398195 systemd-logind[1109]: New session 19 of user core.
Feb  9 19:43:22.398901 systemd[1]: Started session-19.scope.
Feb  9 19:43:22.498437 sshd[3531]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:22.500414 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:38224.service: Deactivated successfully.
Feb  9 19:43:22.501303 systemd[1]: session-19.scope: Deactivated successfully.
Feb  9 19:43:22.501871 systemd-logind[1109]: Session 19 logged out. Waiting for processes to exit.
Feb  9 19:43:22.502554 systemd-logind[1109]: Removed session 19.
Feb  9 19:43:27.502155 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:38228.service.
Feb  9 19:43:27.535536 sshd[3549]: Accepted publickey for core from 10.0.0.1 port 38228 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:27.536534 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:27.539507 systemd-logind[1109]: New session 20 of user core.
Feb  9 19:43:27.540272 systemd[1]: Started session-20.scope.
Feb  9 19:43:27.638507 sshd[3549]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:27.640432 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:38228.service: Deactivated successfully.
Feb  9 19:43:27.641093 systemd[1]: session-20.scope: Deactivated successfully.
Feb  9 19:43:27.641621 systemd-logind[1109]: Session 20 logged out. Waiting for processes to exit.
Feb  9 19:43:27.642187 systemd-logind[1109]: Removed session 20.
Feb  9 19:43:32.643245 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:46184.service.
Feb  9 19:43:32.676762 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 46184 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:32.678022 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:32.681557 systemd-logind[1109]: New session 21 of user core.
Feb  9 19:43:32.682299 systemd[1]: Started session-21.scope.
Feb  9 19:43:32.795526 sshd[3564]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:32.797398 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:46184.service: Deactivated successfully.
Feb  9 19:43:32.798145 systemd[1]: session-21.scope: Deactivated successfully.
Feb  9 19:43:32.799019 systemd-logind[1109]: Session 21 logged out. Waiting for processes to exit.
Feb  9 19:43:32.799803 systemd-logind[1109]: Removed session 21.
Feb  9 19:43:37.799566 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:46194.service.
Feb  9 19:43:37.832891 sshd[3578]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:37.833788 sshd[3578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:37.836759 systemd-logind[1109]: New session 22 of user core.
Feb  9 19:43:37.837603 systemd[1]: Started session-22.scope.
Feb  9 19:43:37.934231 sshd[3578]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:37.937140 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:46194.service: Deactivated successfully.
Feb  9 19:43:37.937730 systemd[1]: session-22.scope: Deactivated successfully.
Feb  9 19:43:37.938267 systemd-logind[1109]: Session 22 logged out. Waiting for processes to exit.
Feb  9 19:43:37.939283 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:46198.service.
Feb  9 19:43:37.940080 systemd-logind[1109]: Removed session 22.
Feb  9 19:43:37.972990 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 46198 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:37.973807 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:37.976702 systemd-logind[1109]: New session 23 of user core.
Feb  9 19:43:37.977427 systemd[1]: Started session-23.scope.
Feb  9 19:43:38.665879 kubelet[1972]: E0209 19:43:38.665839    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:39.363085 env[1127]: time="2024-02-09T19:43:39.363022898Z" level=info msg="StopContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" with timeout 30 (s)"
Feb  9 19:43:39.363518 env[1127]: time="2024-02-09T19:43:39.363444281Z" level=info msg="Stop container \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" with signal terminated"
Feb  9 19:43:39.371481 systemd[1]: run-containerd-runc-k8s.io-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899-runc.EwrNBJ.mount: Deactivated successfully.
Feb  9 19:43:39.375738 systemd[1]: cri-containerd-c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db.scope: Deactivated successfully.
Feb  9 19:43:39.391714 env[1127]: time="2024-02-09T19:43:39.391635514Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 19:43:39.395255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db-rootfs.mount: Deactivated successfully.
Feb  9 19:43:39.398070 env[1127]: time="2024-02-09T19:43:39.398043791Z" level=info msg="StopContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" with timeout 2 (s)"
Feb  9 19:43:39.398291 env[1127]: time="2024-02-09T19:43:39.398265654Z" level=info msg="Stop container \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" with signal terminated"
Feb  9 19:43:39.402266 env[1127]: time="2024-02-09T19:43:39.402201667Z" level=info msg="shim disconnected" id=c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db
Feb  9 19:43:39.402266 env[1127]: time="2024-02-09T19:43:39.402257994Z" level=warning msg="cleaning up after shim disconnected" id=c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db namespace=k8s.io
Feb  9 19:43:39.402266 env[1127]: time="2024-02-09T19:43:39.402267622Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:39.406074 systemd-networkd[1023]: lxc_health: Link DOWN
Feb  9 19:43:39.406081 systemd-networkd[1023]: lxc_health: Lost carrier
Feb  9 19:43:39.412108 env[1127]: time="2024-02-09T19:43:39.412067746Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3644 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:39.414803 env[1127]: time="2024-02-09T19:43:39.414768073Z" level=info msg="StopContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" returns successfully"
Feb  9 19:43:39.415582 env[1127]: time="2024-02-09T19:43:39.415544884Z" level=info msg="StopPodSandbox for \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\""
Feb  9 19:43:39.415649 env[1127]: time="2024-02-09T19:43:39.415624576Z" level=info msg="Container to stop \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.417444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157-shm.mount: Deactivated successfully.
Feb  9 19:43:39.422474 systemd[1]: cri-containerd-2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157.scope: Deactivated successfully.
Feb  9 19:43:39.438728 systemd[1]: cri-containerd-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899.scope: Deactivated successfully.
Feb  9 19:43:39.439000 systemd[1]: cri-containerd-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899.scope: Consumed 6.144s CPU time.
Feb  9 19:43:39.450066 env[1127]: time="2024-02-09T19:43:39.450004347Z" level=info msg="shim disconnected" id=2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157
Feb  9 19:43:39.450066 env[1127]: time="2024-02-09T19:43:39.450061466Z" level=warning msg="cleaning up after shim disconnected" id=2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157 namespace=k8s.io
Feb  9 19:43:39.450206 env[1127]: time="2024-02-09T19:43:39.450071054Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:39.457384 env[1127]: time="2024-02-09T19:43:39.457326926Z" level=info msg="shim disconnected" id=25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899
Feb  9 19:43:39.457555 env[1127]: time="2024-02-09T19:43:39.457397231Z" level=warning msg="cleaning up after shim disconnected" id=25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899 namespace=k8s.io
Feb  9 19:43:39.457555 env[1127]: time="2024-02-09T19:43:39.457411548Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:39.458182 env[1127]: time="2024-02-09T19:43:39.458148773Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:39.458539 env[1127]: time="2024-02-09T19:43:39.458515733Z" level=info msg="TearDown network for sandbox \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\" successfully"
Feb  9 19:43:39.458640 env[1127]: time="2024-02-09T19:43:39.458619631Z" level=info msg="StopPodSandbox for \"2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157\" returns successfully"
Feb  9 19:43:39.464100 env[1127]: time="2024-02-09T19:43:39.463894086Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3702 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:39.466474 env[1127]: time="2024-02-09T19:43:39.466396777Z" level=info msg="StopContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" returns successfully"
Feb  9 19:43:39.467885 env[1127]: time="2024-02-09T19:43:39.467857221Z" level=info msg="StopPodSandbox for \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\""
Feb  9 19:43:39.467950 env[1127]: time="2024-02-09T19:43:39.467931754Z" level=info msg="Container to stop \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.467982 env[1127]: time="2024-02-09T19:43:39.467948515Z" level=info msg="Container to stop \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.467982 env[1127]: time="2024-02-09T19:43:39.467960098Z" level=info msg="Container to stop \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.467982 env[1127]: time="2024-02-09T19:43:39.467972731Z" level=info msg="Container to stop \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.468137 env[1127]: time="2024-02-09T19:43:39.467983201Z" level=info msg="Container to stop \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 19:43:39.474114 systemd[1]: cri-containerd-29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c.scope: Deactivated successfully.
Feb  9 19:43:39.500422 env[1127]: time="2024-02-09T19:43:39.500364542Z" level=info msg="shim disconnected" id=29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c
Feb  9 19:43:39.500422 env[1127]: time="2024-02-09T19:43:39.500422743Z" level=warning msg="cleaning up after shim disconnected" id=29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c namespace=k8s.io
Feb  9 19:43:39.500422 env[1127]: time="2024-02-09T19:43:39.500432001Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:39.508253 env[1127]: time="2024-02-09T19:43:39.508191994Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3731 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:39.508540 env[1127]: time="2024-02-09T19:43:39.508512245Z" level=info msg="TearDown network for sandbox \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" successfully"
Feb  9 19:43:39.508540 env[1127]: time="2024-02-09T19:43:39.508537583Z" level=info msg="StopPodSandbox for \"29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c\" returns successfully"
Feb  9 19:43:39.564801 kubelet[1972]: I0209 19:43:39.564749    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-hubble-tls\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.564801 kubelet[1972]: I0209 19:43:39.564801    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cni-path\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.564801 kubelet[1972]: I0209 19:43:39.564826    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-cilium-config-path\") pod \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\" (UID: \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\") "
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564846    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-kernel\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564865    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-run\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564884    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96938e21-d672-4b3e-abab-137a982bc520-clustermesh-secrets\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564902    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96938e21-d672-4b3e-abab-137a982bc520-cilium-config-path\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564906    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cni-path" (OuterVolumeSpecName: "cni-path") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565033 kubelet[1972]: I0209 19:43:39.564920    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-cgroup\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565168 kubelet[1972]: I0209 19:43:39.564965    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565168 kubelet[1972]: I0209 19:43:39.564993    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565168 kubelet[1972]: I0209 19:43:39.564995    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjb2x\" (UniqueName: \"kubernetes.io/projected/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-kube-api-access-gjb2x\") pod \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\" (UID: \"03c0ab39-2bbf-4c6c-86bb-a6fb161453f7\") "
Feb  9 19:43:39.565168 kubelet[1972]: I0209 19:43:39.565020    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-xtables-lock\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565168 kubelet[1972]: I0209 19:43:39.565039    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-hostproc\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565054    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-lib-modules\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565069    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-etc-cni-netd\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565089    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62zb\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-kube-api-access-x62zb\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565107    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-net\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565140    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-bpf-maps\") pod \"96938e21-d672-4b3e-abab-137a982bc520\" (UID: \"96938e21-d672-4b3e-abab-137a982bc520\") "
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565185    1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cni-path\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.565281 kubelet[1972]: I0209 19:43:39.565198    1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.565435 kubelet[1972]: I0209 19:43:39.565207    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.565435 kubelet[1972]: I0209 19:43:39.565226    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565435 kubelet[1972]: I0209 19:43:39.564953    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565435 kubelet[1972]: I0209 19:43:39.565410    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565435 kubelet[1972]: I0209 19:43:39.565428    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-hostproc" (OuterVolumeSpecName: "hostproc") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565567 kubelet[1972]: I0209 19:43:39.565441    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565567 kubelet[1972]: I0209 19:43:39.565474    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.565640 kubelet[1972]: I0209 19:43:39.565621    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:39.566669 kubelet[1972]: I0209 19:43:39.566642    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" (UID: "03c0ab39-2bbf-4c6c-86bb-a6fb161453f7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 19:43:39.567576 kubelet[1972]: I0209 19:43:39.567547    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96938e21-d672-4b3e-abab-137a982bc520-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 19:43:39.568126 kubelet[1972]: I0209 19:43:39.568072    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96938e21-d672-4b3e-abab-137a982bc520-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 19:43:39.568981 kubelet[1972]: I0209 19:43:39.568957    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-kube-api-access-gjb2x" (OuterVolumeSpecName: "kube-api-access-gjb2x") pod "03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" (UID: "03c0ab39-2bbf-4c6c-86bb-a6fb161453f7"). InnerVolumeSpecName "kube-api-access-gjb2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 19:43:39.569410 kubelet[1972]: I0209 19:43:39.569368    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 19:43:39.570081 kubelet[1972]: I0209 19:43:39.570046    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-kube-api-access-x62zb" (OuterVolumeSpecName: "kube-api-access-x62zb") pod "96938e21-d672-4b3e-abab-137a982bc520" (UID: "96938e21-d672-4b3e-abab-137a982bc520"). InnerVolumeSpecName "kube-api-access-x62zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666305    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666332    1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-hostproc\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666343    1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gjb2x\" (UniqueName: \"kubernetes.io/projected/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-kube-api-access-gjb2x\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666353    1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666362    1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666370    1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666379    1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x62zb\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-kube-api-access-x62zb\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666416 kubelet[1972]: I0209 19:43:39.666387    1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666959 kubelet[1972]: I0209 19:43:39.666395    1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/96938e21-d672-4b3e-abab-137a982bc520-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666959 kubelet[1972]: I0209 19:43:39.666406    1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/96938e21-d672-4b3e-abab-137a982bc520-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666959 kubelet[1972]: I0209 19:43:39.666414    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/96938e21-d672-4b3e-abab-137a982bc520-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666959 kubelet[1972]: I0209 19:43:39.666423    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:39.666959 kubelet[1972]: I0209 19:43:39.666433    1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/96938e21-d672-4b3e-abab-137a982bc520-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:40.179005 kubelet[1972]: I0209 19:43:40.178962    1972 scope.go:117] "RemoveContainer" containerID="25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899"
Feb  9 19:43:40.181558 env[1127]: time="2024-02-09T19:43:40.181467788Z" level=info msg="RemoveContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\""
Feb  9 19:43:40.182303 systemd[1]: Removed slice kubepods-burstable-pod96938e21_d672_4b3e_abab_137a982bc520.slice.
Feb  9 19:43:40.182374 systemd[1]: kubepods-burstable-pod96938e21_d672_4b3e_abab_137a982bc520.slice: Consumed 6.233s CPU time.
Feb  9 19:43:40.183891 systemd[1]: Removed slice kubepods-besteffort-pod03c0ab39_2bbf_4c6c_86bb_a6fb161453f7.slice.
Feb  9 19:43:40.184985 env[1127]: time="2024-02-09T19:43:40.184936247Z" level=info msg="RemoveContainer for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" returns successfully"
Feb  9 19:43:40.185209 kubelet[1972]: I0209 19:43:40.185184    1972 scope.go:117] "RemoveContainer" containerID="d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f"
Feb  9 19:43:40.186707 env[1127]: time="2024-02-09T19:43:40.186572024Z" level=info msg="RemoveContainer for \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\""
Feb  9 19:43:40.189629 env[1127]: time="2024-02-09T19:43:40.189585527Z" level=info msg="RemoveContainer for \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\" returns successfully"
Feb  9 19:43:40.190238 kubelet[1972]: I0209 19:43:40.189770    1972 scope.go:117] "RemoveContainer" containerID="5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db"
Feb  9 19:43:40.192080 env[1127]: time="2024-02-09T19:43:40.191855442Z" level=info msg="RemoveContainer for \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\""
Feb  9 19:43:40.194699 env[1127]: time="2024-02-09T19:43:40.194649857Z" level=info msg="RemoveContainer for \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\" returns successfully"
Feb  9 19:43:40.194960 kubelet[1972]: I0209 19:43:40.194839    1972 scope.go:117] "RemoveContainer" containerID="827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464"
Feb  9 19:43:40.196661 env[1127]: time="2024-02-09T19:43:40.196186445Z" level=info msg="RemoveContainer for \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\""
Feb  9 19:43:40.199124 env[1127]: time="2024-02-09T19:43:40.199082434Z" level=info msg="RemoveContainer for \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\" returns successfully"
Feb  9 19:43:40.199265 kubelet[1972]: I0209 19:43:40.199229    1972 scope.go:117] "RemoveContainer" containerID="e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2"
Feb  9 19:43:40.200619 env[1127]: time="2024-02-09T19:43:40.200593363Z" level=info msg="RemoveContainer for \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\""
Feb  9 19:43:40.204249 env[1127]: time="2024-02-09T19:43:40.204218701Z" level=info msg="RemoveContainer for \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\" returns successfully"
Feb  9 19:43:40.204477 kubelet[1972]: I0209 19:43:40.204442    1972 scope.go:117] "RemoveContainer" containerID="25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899"
Feb  9 19:43:40.204748 env[1127]: time="2024-02-09T19:43:40.204654732Z" level=error msg="ContainerStatus for \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\": not found"
Feb  9 19:43:40.204909 kubelet[1972]: E0209 19:43:40.204886    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\": not found" containerID="25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899"
Feb  9 19:43:40.204996 kubelet[1972]: I0209 19:43:40.204977    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899"} err="failed to get container status \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\": rpc error: code = NotFound desc = an error occurred when try to find container \"25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899\": not found"
Feb  9 19:43:40.205023 kubelet[1972]: I0209 19:43:40.205004    1972 scope.go:117] "RemoveContainer" containerID="d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f"
Feb  9 19:43:40.205256 env[1127]: time="2024-02-09T19:43:40.205188530Z" level=error msg="ContainerStatus for \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\": not found"
Feb  9 19:43:40.205432 kubelet[1972]: E0209 19:43:40.205355    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\": not found" containerID="d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f"
Feb  9 19:43:40.205432 kubelet[1972]: I0209 19:43:40.205394    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f"} err="failed to get container status \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d05da038c886fc3294b1ec5fd2c50d330ff7a4543ffff684d16482d95dc7df0f\": not found"
Feb  9 19:43:40.205432 kubelet[1972]: I0209 19:43:40.205404    1972 scope.go:117] "RemoveContainer" containerID="5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db"
Feb  9 19:43:40.205622 env[1127]: time="2024-02-09T19:43:40.205543635Z" level=error msg="ContainerStatus for \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\": not found"
Feb  9 19:43:40.205689 kubelet[1972]: E0209 19:43:40.205663    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\": not found" containerID="5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db"
Feb  9 19:43:40.205738 kubelet[1972]: I0209 19:43:40.205695    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db"} err="failed to get container status \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d7fdd40939721af4d426e46d0889fd79ad23a2a198d12154e3e88b0135e81db\": not found"
Feb  9 19:43:40.205738 kubelet[1972]: I0209 19:43:40.205705    1972 scope.go:117] "RemoveContainer" containerID="827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464"
Feb  9 19:43:40.205884 env[1127]: time="2024-02-09T19:43:40.205838899Z" level=error msg="ContainerStatus for \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\": not found"
Feb  9 19:43:40.205976 kubelet[1972]: E0209 19:43:40.205961    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\": not found" containerID="827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464"
Feb  9 19:43:40.206024 kubelet[1972]: I0209 19:43:40.205984    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464"} err="failed to get container status \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\": rpc error: code = NotFound desc = an error occurred when try to find container \"827f91a9a468804b3b1b70e389722fdd3a017a1e5e36757d8685a5e2f276f464\": not found"
Feb  9 19:43:40.206024 kubelet[1972]: I0209 19:43:40.205993    1972 scope.go:117] "RemoveContainer" containerID="e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2"
Feb  9 19:43:40.206184 env[1127]: time="2024-02-09T19:43:40.206138029Z" level=error msg="ContainerStatus for \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\": not found"
Feb  9 19:43:40.206393 kubelet[1972]: E0209 19:43:40.206361    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\": not found" containerID="e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2"
Feb  9 19:43:40.206450 kubelet[1972]: I0209 19:43:40.206415    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2"} err="failed to get container status \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9b7e501c7befcff9203b71de23ee9c03f8de34d21cbe6c34554237d3be7e5f2\": not found"
Feb  9 19:43:40.206450 kubelet[1972]: I0209 19:43:40.206434    1972 scope.go:117] "RemoveContainer" containerID="c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db"
Feb  9 19:43:40.207507 env[1127]: time="2024-02-09T19:43:40.207475107Z" level=info msg="RemoveContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\""
Feb  9 19:43:40.209937 env[1127]: time="2024-02-09T19:43:40.209906870Z" level=info msg="RemoveContainer for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" returns successfully"
Feb  9 19:43:40.210077 kubelet[1972]: I0209 19:43:40.210057    1972 scope.go:117] "RemoveContainer" containerID="c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db"
Feb  9 19:43:40.210238 env[1127]: time="2024-02-09T19:43:40.210196102Z" level=error msg="ContainerStatus for \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\": not found"
Feb  9 19:43:40.210347 kubelet[1972]: E0209 19:43:40.210309    1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\": not found" containerID="c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db"
Feb  9 19:43:40.210347 kubelet[1972]: I0209 19:43:40.210330    1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db"} err="failed to get container status \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\": rpc error: code = NotFound desc = an error occurred when try to find container \"c06063aeeecd74b8629b20b060b3b76f1c2710edabfdf8b1168d2a9cbb81a7db\": not found"
Feb  9 19:43:40.368465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ea5ecbc26f6f4509ecb6f4d29641fc158ba36b0708089df5e7258fe7db5899-rootfs.mount: Deactivated successfully.
Feb  9 19:43:40.368587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2924aa8927a8a66c39f6e4f1160703b683bf4ed94dcaf44b0fdb58ae593f1157-rootfs.mount: Deactivated successfully.
Feb  9 19:43:40.368655 systemd[1]: var-lib-kubelet-pods-03c0ab39\x2d2bbf\x2d4c6c\x2d86bb\x2da6fb161453f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjb2x.mount: Deactivated successfully.
Feb  9 19:43:40.368735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c-rootfs.mount: Deactivated successfully.
Feb  9 19:43:40.368833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29579b5908be98695f9d5ba0b087acd8ad33ab42c117bf4cf64dfbe67c32405c-shm.mount: Deactivated successfully.
Feb  9 19:43:40.368902 systemd[1]: var-lib-kubelet-pods-96938e21\x2dd672\x2d4b3e\x2dabab\x2d137a982bc520-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx62zb.mount: Deactivated successfully.
Feb  9 19:43:40.368969 systemd[1]: var-lib-kubelet-pods-96938e21\x2dd672\x2d4b3e\x2dabab\x2d137a982bc520-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 19:43:40.369035 systemd[1]: var-lib-kubelet-pods-96938e21\x2dd672\x2d4b3e\x2dabab\x2d137a982bc520-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 19:43:40.667476 kubelet[1972]: I0209 19:43:40.667428    1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" path="/var/lib/kubelet/pods/03c0ab39-2bbf-4c6c-86bb-a6fb161453f7/volumes"
Feb  9 19:43:40.667806 kubelet[1972]: I0209 19:43:40.667796    1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="96938e21-d672-4b3e-abab-137a982bc520" path="/var/lib/kubelet/pods/96938e21-d672-4b3e-abab-137a982bc520/volumes"
Feb  9 19:43:41.331578 sshd[3591]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:41.334063 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:46198.service: Deactivated successfully.
Feb  9 19:43:41.334601 systemd[1]: session-23.scope: Deactivated successfully.
Feb  9 19:43:41.335098 systemd-logind[1109]: Session 23 logged out. Waiting for processes to exit.
Feb  9 19:43:41.336147 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:43336.service.
Feb  9 19:43:41.336834 systemd-logind[1109]: Removed session 23.
Feb  9 19:43:41.371259 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 43336 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:41.372361 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:41.375674 systemd-logind[1109]: New session 24 of user core.
Feb  9 19:43:41.376358 systemd[1]: Started session-24.scope.
Feb  9 19:43:41.715818 kubelet[1972]: E0209 19:43:41.715716    1972 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 19:43:41.990691 sshd[3751]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:41.995273 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:43342.service.
Feb  9 19:43:41.995972 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:43336.service: Deactivated successfully.
Feb  9 19:43:42.006578 kubelet[1972]: I0209 19:43:42.003603    1972 topology_manager.go:215] "Topology Admit Handler" podUID="747be12f-55b9-48fd-b4c9-bf5ab26fcae1" podNamespace="kube-system" podName="cilium-b6gk4"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003679    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="mount-bpf-fs"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003690    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" containerName="cilium-operator"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003709    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="mount-cgroup"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003717    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="apply-sysctl-overwrites"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003724    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="clean-cilium-state"
Feb  9 19:43:42.006578 kubelet[1972]: E0209 19:43:42.003732    1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="cilium-agent"
Feb  9 19:43:42.006578 kubelet[1972]: I0209 19:43:42.003761    1972 memory_manager.go:346] "RemoveStaleState removing state" podUID="96938e21-d672-4b3e-abab-137a982bc520" containerName="cilium-agent"
Feb  9 19:43:42.006578 kubelet[1972]: I0209 19:43:42.003769    1972 memory_manager.go:346] "RemoveStaleState removing state" podUID="03c0ab39-2bbf-4c6c-86bb-a6fb161453f7" containerName="cilium-operator"
Feb  9 19:43:42.005864 systemd[1]: session-24.scope: Deactivated successfully.
Feb  9 19:43:42.006728 systemd-logind[1109]: Session 24 logged out. Waiting for processes to exit.
Feb  9 19:43:42.008688 systemd-logind[1109]: Removed session 24.
Feb  9 19:43:42.010717 systemd[1]: Created slice kubepods-burstable-pod747be12f_55b9_48fd_b4c9_bf5ab26fcae1.slice.
Feb  9 19:43:42.042994 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 43342 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:42.044183 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:42.051045 systemd[1]: Started session-25.scope.
Feb  9 19:43:42.051372 systemd-logind[1109]: New session 25 of user core.
Feb  9 19:43:42.078076 kubelet[1972]: I0209 19:43:42.078032    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hostproc\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078382 kubelet[1972]: I0209 19:43:42.078366    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-cgroup\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078517 kubelet[1972]: I0209 19:43:42.078501    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-lib-modules\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078621 kubelet[1972]: I0209 19:43:42.078605    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-ipsec-secrets\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078753 kubelet[1972]: I0209 19:43:42.078736    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-clustermesh-secrets\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078865 kubelet[1972]: I0209 19:43:42.078849    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-kernel\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.078975 kubelet[1972]: I0209 19:43:42.078959    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-run\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079089 kubelet[1972]: I0209 19:43:42.079069    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cni-path\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079195 kubelet[1972]: I0209 19:43:42.079179    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmqr\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-kube-api-access-lsmqr\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079306 kubelet[1972]: I0209 19:43:42.079290    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-bpf-maps\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079417 kubelet[1972]: I0209 19:43:42.079401    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-config-path\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079543 kubelet[1972]: I0209 19:43:42.079527    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-net\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079651 kubelet[1972]: I0209 19:43:42.079632    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hubble-tls\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079765 kubelet[1972]: I0209 19:43:42.079749    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-etc-cni-netd\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.079881 kubelet[1972]: I0209 19:43:42.079864    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-xtables-lock\") pod \"cilium-b6gk4\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") " pod="kube-system/cilium-b6gk4"
Feb  9 19:43:42.167354 sshd[3762]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:42.170217 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:43342.service: Deactivated successfully.
Feb  9 19:43:42.170930 systemd[1]: session-25.scope: Deactivated successfully.
Feb  9 19:43:42.173252 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:43352.service.
Feb  9 19:43:42.179533 systemd-logind[1109]: Session 25 logged out. Waiting for processes to exit.
Feb  9 19:43:42.181361 kubelet[1972]: E0209 19:43:42.181334    1972 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-lsmqr xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-b6gk4" podUID="747be12f-55b9-48fd-b4c9-bf5ab26fcae1"
Feb  9 19:43:42.182914 systemd-logind[1109]: Removed session 25.
Feb  9 19:43:42.218612 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 43352 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb  9 19:43:42.219787 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 19:43:42.222853 systemd-logind[1109]: New session 26 of user core.
Feb  9 19:43:42.223578 systemd[1]: Started session-26.scope.
Feb  9 19:43:42.281831 kubelet[1972]: I0209 19:43:42.281655    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-xtables-lock\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.281831 kubelet[1972]: I0209 19:43:42.281763    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-bpf-maps\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282014 kubelet[1972]: I0209 19:43:42.281915    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-net\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282014 kubelet[1972]: I0209 19:43:42.281974    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hostproc\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282090 kubelet[1972]: I0209 19:43:42.282042    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-lib-modules\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282187 kubelet[1972]: I0209 19:43:42.282069    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-clustermesh-secrets\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282230 kubelet[1972]: I0209 19:43:42.282191    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-etc-cni-netd\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282292 kubelet[1972]: I0209 19:43:42.282280    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-kernel\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282331 kubelet[1972]: I0209 19:43:42.282326    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-config-path\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282503 kubelet[1972]: I0209 19:43:42.282379    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsmqr\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-kube-api-access-lsmqr\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282643 kubelet[1972]: I0209 19:43:42.282517    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-ipsec-secrets\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282810 kubelet[1972]: I0209 19:43:42.282648    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cni-path\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282894 kubelet[1972]: I0209 19:43:42.282817    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hubble-tls\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282947 kubelet[1972]: I0209 19:43:42.282902    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-cgroup\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.282984 kubelet[1972]: I0209 19:43:42.282957    1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-run\") pod \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\" (UID: \"747be12f-55b9-48fd-b4c9-bf5ab26fcae1\") "
Feb  9 19:43:42.288581 kubelet[1972]: I0209 19:43:42.288543    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.288647 kubelet[1972]: I0209 19:43:42.288597    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hostproc" (OuterVolumeSpecName: "hostproc") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.288647 kubelet[1972]: I0209 19:43:42.288620    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.288826 kubelet[1972]: I0209 19:43:42.288784    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.288941 kubelet[1972]: I0209 19:43:42.288908    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.289079 kubelet[1972]: I0209 19:43:42.289056    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.290453 kubelet[1972]: I0209 19:43:42.290418    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 19:43:42.290453 kubelet[1972]: I0209 19:43:42.290452    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.290941 kubelet[1972]: I0209 19:43:42.290915    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.291148 kubelet[1972]: I0209 19:43:42.291131    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 19:43:42.291257 kubelet[1972]: I0209 19:43:42.291241    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cni-path" (OuterVolumeSpecName: "cni-path") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.291421 kubelet[1972]: I0209 19:43:42.291407    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 19:43:42.292932 systemd[1]: var-lib-kubelet-pods-747be12f\x2d55b9\x2d48fd\x2db4c9\x2dbf5ab26fcae1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 19:43:42.293094 systemd[1]: var-lib-kubelet-pods-747be12f\x2d55b9\x2d48fd\x2db4c9\x2dbf5ab26fcae1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  9 19:43:42.294791 kubelet[1972]: I0209 19:43:42.294766    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 19:43:42.294868 kubelet[1972]: I0209 19:43:42.294845    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-kube-api-access-lsmqr" (OuterVolumeSpecName: "kube-api-access-lsmqr") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "kube-api-access-lsmqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 19:43:42.295517 kubelet[1972]: I0209 19:43:42.295336    1972 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "747be12f-55b9-48fd-b4c9-bf5ab26fcae1" (UID: "747be12f-55b9-48fd-b4c9-bf5ab26fcae1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 19:43:42.383658 kubelet[1972]: I0209 19:43:42.383609    1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383658 kubelet[1972]: I0209 19:43:42.383647    1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383658 kubelet[1972]: I0209 19:43:42.383658    1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hostproc\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383658 kubelet[1972]: I0209 19:43:42.383670    1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383679    1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383689    1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383707    1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383719    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383728    1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383738    1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lsmqr\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-kube-api-access-lsmqr\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383747    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.383908 kubelet[1972]: I0209 19:43:42.383756    1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cni-path\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.384076 kubelet[1972]: I0209 19:43:42.383765    1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.384076 kubelet[1972]: I0209 19:43:42.383773    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.384076 kubelet[1972]: I0209 19:43:42.383781    1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/747be12f-55b9-48fd-b4c9-bf5ab26fcae1-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb  9 19:43:42.670782 systemd[1]: Removed slice kubepods-burstable-pod747be12f_55b9_48fd_b4c9_bf5ab26fcae1.slice.
Feb  9 19:43:43.185517 systemd[1]: var-lib-kubelet-pods-747be12f\x2d55b9\x2d48fd\x2db4c9\x2dbf5ab26fcae1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlsmqr.mount: Deactivated successfully.
Feb  9 19:43:43.185610 systemd[1]: var-lib-kubelet-pods-747be12f\x2d55b9\x2d48fd\x2db4c9\x2dbf5ab26fcae1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 19:43:43.568568 kubelet[1972]: I0209 19:43:43.568524    1972 topology_manager.go:215] "Topology Admit Handler" podUID="d3ac7289-6ef9-4aa7-a325-1522c41e019a" podNamespace="kube-system" podName="cilium-wktqd"
Feb  9 19:43:43.574985 systemd[1]: Created slice kubepods-burstable-podd3ac7289_6ef9_4aa7_a325_1522c41e019a.slice.
Feb  9 19:43:43.690642 kubelet[1972]: I0209 19:43:43.690585    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-cilium-run\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690642 kubelet[1972]: I0209 19:43:43.690627    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-bpf-maps\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690642 kubelet[1972]: I0209 19:43:43.690649    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmj9\" (UniqueName: \"kubernetes.io/projected/d3ac7289-6ef9-4aa7-a325-1522c41e019a-kube-api-access-9lmj9\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690920 kubelet[1972]: I0209 19:43:43.690669    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-xtables-lock\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690920 kubelet[1972]: I0209 19:43:43.690688    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3ac7289-6ef9-4aa7-a325-1522c41e019a-cilium-ipsec-secrets\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690920 kubelet[1972]: I0209 19:43:43.690714    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-cni-path\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.690920 kubelet[1972]: I0209 19:43:43.690834    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-host-proc-sys-kernel\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691038 kubelet[1972]: I0209 19:43:43.690949    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3ac7289-6ef9-4aa7-a325-1522c41e019a-clustermesh-secrets\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691038 kubelet[1972]: I0209 19:43:43.690992    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-lib-modules\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691095 kubelet[1972]: I0209 19:43:43.691046    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-hostproc\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691095 kubelet[1972]: I0209 19:43:43.691069    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-etc-cni-netd\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691095 kubelet[1972]: I0209 19:43:43.691087    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3ac7289-6ef9-4aa7-a325-1522c41e019a-cilium-config-path\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691179 kubelet[1972]: I0209 19:43:43.691119    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3ac7289-6ef9-4aa7-a325-1522c41e019a-hubble-tls\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691179 kubelet[1972]: I0209 19:43:43.691139    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-cilium-cgroup\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.691179 kubelet[1972]: I0209 19:43:43.691166    1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3ac7289-6ef9-4aa7-a325-1522c41e019a-host-proc-sys-net\") pod \"cilium-wktqd\" (UID: \"d3ac7289-6ef9-4aa7-a325-1522c41e019a\") " pod="kube-system/cilium-wktqd"
Feb  9 19:43:43.878764 kubelet[1972]: E0209 19:43:43.878643    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:43.879498 env[1127]: time="2024-02-09T19:43:43.879393575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wktqd,Uid:d3ac7289-6ef9-4aa7-a325-1522c41e019a,Namespace:kube-system,Attempt:0,}"
Feb  9 19:43:44.151022 env[1127]: time="2024-02-09T19:43:44.150866028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 19:43:44.151022 env[1127]: time="2024-02-09T19:43:44.150918498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 19:43:44.151022 env[1127]: time="2024-02-09T19:43:44.150932945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 19:43:44.151223 env[1127]: time="2024-02-09T19:43:44.151140731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a pid=3808 runtime=io.containerd.runc.v2
Feb  9 19:43:44.161661 systemd[1]: Started cri-containerd-22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a.scope.
Feb  9 19:43:44.180892 env[1127]: time="2024-02-09T19:43:44.180843495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wktqd,Uid:d3ac7289-6ef9-4aa7-a325-1522c41e019a,Namespace:kube-system,Attempt:0,} returns sandbox id \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\""
Feb  9 19:43:44.181445 kubelet[1972]: E0209 19:43:44.181426    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:44.183884 env[1127]: time="2024-02-09T19:43:44.183844394Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 19:43:44.240825 env[1127]: time="2024-02-09T19:43:44.240741650Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e\""
Feb  9 19:43:44.241471 env[1127]: time="2024-02-09T19:43:44.241399291Z" level=info msg="StartContainer for \"af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e\""
Feb  9 19:43:44.257713 systemd[1]: Started cri-containerd-af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e.scope.
Feb  9 19:43:44.281833 env[1127]: time="2024-02-09T19:43:44.281773790Z" level=info msg="StartContainer for \"af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e\" returns successfully"
Feb  9 19:43:44.288647 systemd[1]: cri-containerd-af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e.scope: Deactivated successfully.
Feb  9 19:43:44.325330 env[1127]: time="2024-02-09T19:43:44.325118308Z" level=info msg="shim disconnected" id=af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e
Feb  9 19:43:44.325578 env[1127]: time="2024-02-09T19:43:44.325328819Z" level=warning msg="cleaning up after shim disconnected" id=af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e namespace=k8s.io
Feb  9 19:43:44.325578 env[1127]: time="2024-02-09T19:43:44.325356893Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:44.333118 env[1127]: time="2024-02-09T19:43:44.333079476Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:44.667989 kubelet[1972]: I0209 19:43:44.667944    1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="747be12f-55b9-48fd-b4c9-bf5ab26fcae1" path="/var/lib/kubelet/pods/747be12f-55b9-48fd-b4c9-bf5ab26fcae1/volumes"
Feb  9 19:43:45.185823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af13054a2eae05e9b000680ba4b63727413f8f4396c451d28cb438e00da8f87e-rootfs.mount: Deactivated successfully.
Feb  9 19:43:45.204572 kubelet[1972]: E0209 19:43:45.204543    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:45.206547 env[1127]: time="2024-02-09T19:43:45.206497775Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 19:43:45.219520 env[1127]: time="2024-02-09T19:43:45.219443848Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b\""
Feb  9 19:43:45.220748 env[1127]: time="2024-02-09T19:43:45.220724173Z" level=info msg="StartContainer for \"49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b\""
Feb  9 19:43:45.239013 systemd[1]: Started cri-containerd-49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b.scope.
Feb  9 19:43:45.263958 env[1127]: time="2024-02-09T19:43:45.263896065Z" level=info msg="StartContainer for \"49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b\" returns successfully"
Feb  9 19:43:45.267452 systemd[1]: cri-containerd-49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b.scope: Deactivated successfully.
Feb  9 19:43:45.293229 env[1127]: time="2024-02-09T19:43:45.293168621Z" level=info msg="shim disconnected" id=49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b
Feb  9 19:43:45.293229 env[1127]: time="2024-02-09T19:43:45.293220049Z" level=warning msg="cleaning up after shim disconnected" id=49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b namespace=k8s.io
Feb  9 19:43:45.293229 env[1127]: time="2024-02-09T19:43:45.293230449Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:45.300118 env[1127]: time="2024-02-09T19:43:45.300069226Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:46.185844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49169dc73c1fe807d388733229582783297dcc755f3d2c61d78fd126ea56109b-rootfs.mount: Deactivated successfully.
Feb  9 19:43:46.207818 kubelet[1972]: E0209 19:43:46.207773    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:46.209602 env[1127]: time="2024-02-09T19:43:46.209557901Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 19:43:46.224696 env[1127]: time="2024-02-09T19:43:46.224644406Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34\""
Feb  9 19:43:46.225831 env[1127]: time="2024-02-09T19:43:46.225782489Z" level=info msg="StartContainer for \"688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34\""
Feb  9 19:43:46.244491 systemd[1]: Started cri-containerd-688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34.scope.
Feb  9 19:43:46.270192 systemd[1]: cri-containerd-688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34.scope: Deactivated successfully.
Feb  9 19:43:46.270983 env[1127]: time="2024-02-09T19:43:46.270948324Z" level=info msg="StartContainer for \"688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34\" returns successfully"
Feb  9 19:43:46.290823 env[1127]: time="2024-02-09T19:43:46.290769791Z" level=info msg="shim disconnected" id=688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34
Feb  9 19:43:46.290823 env[1127]: time="2024-02-09T19:43:46.290820427Z" level=warning msg="cleaning up after shim disconnected" id=688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34 namespace=k8s.io
Feb  9 19:43:46.291028 env[1127]: time="2024-02-09T19:43:46.290830356Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:46.297877 env[1127]: time="2024-02-09T19:43:46.297850083Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:46.717763 kubelet[1972]: E0209 19:43:46.717731    1972 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 19:43:47.185967 systemd[1]: run-containerd-runc-k8s.io-688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34-runc.qnGzES.mount: Deactivated successfully.
Feb  9 19:43:47.186075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-688b4c9aaa7fc3e868843df11882228733ed3a9e6f5aa20693235ff9f8207a34-rootfs.mount: Deactivated successfully.
Feb  9 19:43:47.211678 kubelet[1972]: E0209 19:43:47.211642    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:47.214170 env[1127]: time="2024-02-09T19:43:47.214110739Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 19:43:47.224047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231093181.mount: Deactivated successfully.
Feb  9 19:43:47.226644 env[1127]: time="2024-02-09T19:43:47.226595158Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636\""
Feb  9 19:43:47.227216 env[1127]: time="2024-02-09T19:43:47.227186221Z" level=info msg="StartContainer for \"0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636\""
Feb  9 19:43:47.241398 systemd[1]: Started cri-containerd-0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636.scope.
Feb  9 19:43:47.261997 systemd[1]: cri-containerd-0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636.scope: Deactivated successfully.
Feb  9 19:43:47.263170 env[1127]: time="2024-02-09T19:43:47.263096074Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3ac7289_6ef9_4aa7_a325_1522c41e019a.slice/cri-containerd-0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636.scope/memory.events\": no such file or directory"
Feb  9 19:43:47.265801 env[1127]: time="2024-02-09T19:43:47.265748626Z" level=info msg="StartContainer for \"0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636\" returns successfully"
Feb  9 19:43:47.286522 env[1127]: time="2024-02-09T19:43:47.286447498Z" level=info msg="shim disconnected" id=0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636
Feb  9 19:43:47.286522 env[1127]: time="2024-02-09T19:43:47.286521218Z" level=warning msg="cleaning up after shim disconnected" id=0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636 namespace=k8s.io
Feb  9 19:43:47.286522 env[1127]: time="2024-02-09T19:43:47.286530526Z" level=info msg="cleaning up dead shim"
Feb  9 19:43:47.293975 env[1127]: time="2024-02-09T19:43:47.293943596Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4064 runtime=io.containerd.runc.v2\n"
Feb  9 19:43:48.185988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c81080589f9ba1dc87cec17a5d4087a834a9161f69a9baeffb3dd6a6fd81636-rootfs.mount: Deactivated successfully.
Feb  9 19:43:48.216647 kubelet[1972]: E0209 19:43:48.216438    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:48.219663 env[1127]: time="2024-02-09T19:43:48.219401601Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 19:43:48.237086 env[1127]: time="2024-02-09T19:43:48.237021671Z" level=info msg="CreateContainer within sandbox \"22e123fd4faffea0a26c58d3d61a21152454e4107c25fd766ce17cf7ca7bbe4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58ae07ab81c643b61cea84769ad540e0e7daaee50fcf0b0dfab1d4bb86696262\""
Feb  9 19:43:48.237632 env[1127]: time="2024-02-09T19:43:48.237593797Z" level=info msg="StartContainer for \"58ae07ab81c643b61cea84769ad540e0e7daaee50fcf0b0dfab1d4bb86696262\""
Feb  9 19:43:48.258934 systemd[1]: Started cri-containerd-58ae07ab81c643b61cea84769ad540e0e7daaee50fcf0b0dfab1d4bb86696262.scope.
Feb  9 19:43:48.369107 env[1127]: time="2024-02-09T19:43:48.369026308Z" level=info msg="StartContainer for \"58ae07ab81c643b61cea84769ad540e0e7daaee50fcf0b0dfab1d4bb86696262\" returns successfully"
Feb  9 19:43:48.532485 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb  9 19:43:48.665985 kubelet[1972]: E0209 19:43:48.665943    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:49.202208 kubelet[1972]: I0209 19:43:49.202172    1972 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:43:49Z","lastTransitionTime":"2024-02-09T19:43:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb  9 19:43:49.221118 kubelet[1972]: E0209 19:43:49.221076    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:50.223013 kubelet[1972]: E0209 19:43:50.222987    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:50.642256 systemd[1]: run-containerd-runc-k8s.io-58ae07ab81c643b61cea84769ad540e0e7daaee50fcf0b0dfab1d4bb86696262-runc.FoNwde.mount: Deactivated successfully.
Feb  9 19:43:51.225141 kubelet[1972]: E0209 19:43:51.225117    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:51.310667 systemd-networkd[1023]: lxc_health: Link UP
Feb  9 19:43:51.319660 systemd-networkd[1023]: lxc_health: Gained carrier
Feb  9 19:43:51.320485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 19:43:51.894784 kubelet[1972]: I0209 19:43:51.894316    1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wktqd" podStartSLOduration=8.894271347 podCreationTimestamp="2024-02-09 19:43:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:43:49.957869955 +0000 UTC m=+93.377054670" watchObservedRunningTime="2024-02-09 19:43:51.894271347 +0000 UTC m=+95.313456022"
Feb  9 19:43:52.226620 kubelet[1972]: E0209 19:43:52.226512    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:52.723775 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Feb  9 19:43:53.228498 kubelet[1972]: E0209 19:43:53.228446    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:54.230073 kubelet[1972]: E0209 19:43:54.230039    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 19:43:56.982158 sshd[3777]: pam_unix(sshd:session): session closed for user core
Feb  9 19:43:56.985311 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:43352.service: Deactivated successfully.
Feb  9 19:43:56.986085 systemd[1]: session-26.scope: Deactivated successfully.
Feb  9 19:43:56.986587 systemd-logind[1109]: Session 26 logged out. Waiting for processes to exit.
Feb  9 19:43:56.987220 systemd-logind[1109]: Removed session 26.
Feb  9 19:43:57.666248 kubelet[1972]: E0209 19:43:57.666214    1972 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"