Feb  8 23:41:33.961311 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb  8 23:41:33.961333 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  8 23:41:33.961345 kernel: BIOS-provided physical RAM map:
Feb  8 23:41:33.961352 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  8 23:41:33.961358 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  8 23:41:33.961365 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  8 23:41:33.961373 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb  8 23:41:33.961380 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb  8 23:41:33.961387 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  8 23:41:33.961394 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  8 23:41:33.961401 kernel: NX (Execute Disable) protection: active
Feb  8 23:41:33.961407 kernel: SMBIOS 2.8 present.
Feb  8 23:41:33.961414 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb  8 23:41:33.961420 kernel: Hypervisor detected: KVM
Feb  8 23:41:33.961428 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  8 23:41:33.961437 kernel: kvm-clock: cpu 0, msr 17faa001, primary cpu clock
Feb  8 23:41:33.961444 kernel: kvm-clock: using sched offset of 8304250598 cycles
Feb  8 23:41:33.961452 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  8 23:41:33.961459 kernel: tsc: Detected 1996.249 MHz processor
Feb  8 23:41:33.961467 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb  8 23:41:33.961474 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb  8 23:41:33.961482 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb  8 23:41:33.961489 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  8 23:41:33.961498 kernel: ACPI: Early table checksum verification disabled
Feb  8 23:41:33.961505 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb  8 23:41:33.961513 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  8 23:41:33.961520 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  8 23:41:33.961528 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  8 23:41:33.961535 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb  8 23:41:33.961543 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  8 23:41:33.961550 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  8 23:41:33.961557 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb  8 23:41:33.961566 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb  8 23:41:33.961573 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb  8 23:41:33.961580 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb  8 23:41:33.961588 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb  8 23:41:33.961595 kernel: No NUMA configuration found
Feb  8 23:41:33.961602 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb  8 23:41:33.961609 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb  8 23:41:33.961617 kernel: Zone ranges:
Feb  8 23:41:33.961643 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  8 23:41:33.961652 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdcfff]
Feb  8 23:41:33.961659 kernel:   Normal   empty
Feb  8 23:41:33.961667 kernel: Movable zone start for each node
Feb  8 23:41:33.961675 kernel: Early memory node ranges
Feb  8 23:41:33.961682 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  8 23:41:33.961692 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb  8 23:41:33.961699 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb  8 23:41:33.961707 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  8 23:41:33.961714 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  8 23:41:33.961722 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb  8 23:41:33.961729 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  8 23:41:33.961737 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  8 23:41:33.961744 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  8 23:41:33.961752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  8 23:41:33.961761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  8 23:41:33.961768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  8 23:41:33.961776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  8 23:41:33.961784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  8 23:41:33.961792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  8 23:41:33.961799 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb  8 23:41:33.961807 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb  8 23:41:33.961815 kernel: Booting paravirtualized kernel on KVM
Feb  8 23:41:33.961823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  8 23:41:33.961831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb  8 23:41:33.961840 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb  8 23:41:33.961848 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb  8 23:41:33.961855 kernel: pcpu-alloc: [0] 0 1 
Feb  8 23:41:33.961863 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb  8 23:41:33.961870 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  8 23:41:33.961878 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515805
Feb  8 23:41:33.961885 kernel: Policy zone: DMA32
Feb  8 23:41:33.961894 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  8 23:41:33.961904 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  8 23:41:33.961912 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb  8 23:41:33.961920 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  8 23:41:33.961928 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  8 23:41:33.961935 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb  8 23:41:33.961943 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb  8 23:41:33.961951 kernel: ftrace: allocating 34475 entries in 135 pages
Feb  8 23:41:33.961958 kernel: ftrace: allocated 135 pages with 4 groups
Feb  8 23:41:33.961967 kernel: rcu: Hierarchical RCU implementation.
Feb  8 23:41:33.961976 kernel: rcu:         RCU event tracing is enabled.
Feb  8 23:41:33.961983 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb  8 23:41:33.961991 kernel:         Rude variant of Tasks RCU enabled.
Feb  8 23:41:33.961999 kernel:         Tracing variant of Tasks RCU enabled.
Feb  8 23:41:33.962007 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  8 23:41:33.962014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb  8 23:41:33.962022 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb  8 23:41:33.962030 kernel: Console: colour VGA+ 80x25
Feb  8 23:41:33.962039 kernel: printk: console [tty0] enabled
Feb  8 23:41:33.962047 kernel: printk: console [ttyS0] enabled
Feb  8 23:41:33.962054 kernel: ACPI: Core revision 20210730
Feb  8 23:41:33.962062 kernel: APIC: Switch to symmetric I/O mode setup
Feb  8 23:41:33.962069 kernel: x2apic enabled
Feb  8 23:41:33.962077 kernel: Switched APIC routing to physical x2apic.
Feb  8 23:41:33.962085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb  8 23:41:33.962093 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  8 23:41:33.962100 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb  8 23:41:33.962108 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb  8 23:41:33.962118 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb  8 23:41:33.962126 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  8 23:41:33.962134 kernel: Spectre V2 : Mitigation: Retpolines
Feb  8 23:41:33.962141 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb  8 23:41:33.962149 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb  8 23:41:33.962157 kernel: Speculative Store Bypass: Vulnerable
Feb  8 23:41:33.962164 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb  8 23:41:33.962172 kernel: Freeing SMP alternatives memory: 32K
Feb  8 23:41:33.962179 kernel: pid_max: default: 32768 minimum: 301
Feb  8 23:41:33.962188 kernel: LSM: Security Framework initializing
Feb  8 23:41:33.962196 kernel: SELinux:  Initializing.
Feb  8 23:41:33.962204 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb  8 23:41:33.962212 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb  8 23:41:33.962220 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb  8 23:41:33.962228 kernel: Performance Events: AMD PMU driver.
Feb  8 23:41:33.962235 kernel: ... version:                0
Feb  8 23:41:33.962242 kernel: ... bit width:              48
Feb  8 23:41:33.962250 kernel: ... generic registers:      4
Feb  8 23:41:33.962265 kernel: ... value mask:             0000ffffffffffff
Feb  8 23:41:33.962273 kernel: ... max period:             00007fffffffffff
Feb  8 23:41:33.962283 kernel: ... fixed-purpose events:   0
Feb  8 23:41:33.962290 kernel: ... event mask:             000000000000000f
Feb  8 23:41:33.962298 kernel: signal: max sigframe size: 1440
Feb  8 23:41:33.962306 kernel: rcu: Hierarchical SRCU implementation.
Feb  8 23:41:33.962314 kernel: smp: Bringing up secondary CPUs ...
Feb  8 23:41:33.962322 kernel: x86: Booting SMP configuration:
Feb  8 23:41:33.962331 kernel: .... node  #0, CPUs:      #1
Feb  8 23:41:33.962339 kernel: kvm-clock: cpu 1, msr 17faa041, secondary cpu clock
Feb  8 23:41:33.962347 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb  8 23:41:33.962355 kernel: smp: Brought up 1 node, 2 CPUs
Feb  8 23:41:33.962363 kernel: smpboot: Max logical packages: 2
Feb  8 23:41:33.962371 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb  8 23:41:33.962379 kernel: devtmpfs: initialized
Feb  8 23:41:33.962386 kernel: x86/mm: Memory block size: 128MB
Feb  8 23:41:33.962395 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  8 23:41:33.962404 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb  8 23:41:33.962412 kernel: pinctrl core: initialized pinctrl subsystem
Feb  8 23:41:33.962420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  8 23:41:33.962428 kernel: audit: initializing netlink subsys (disabled)
Feb  8 23:41:33.962436 kernel: audit: type=2000 audit(1707435693.347:1): state=initialized audit_enabled=0 res=1
Feb  8 23:41:33.962444 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  8 23:41:33.962452 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  8 23:41:33.962460 kernel: cpuidle: using governor menu
Feb  8 23:41:33.962467 kernel: ACPI: bus type PCI registered
Feb  8 23:41:33.962477 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  8 23:41:33.962485 kernel: dca service started, version 1.12.1
Feb  8 23:41:33.962493 kernel: PCI: Using configuration type 1 for base access
Feb  8 23:41:33.962501 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  8 23:41:33.962510 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  8 23:41:33.962519 kernel: ACPI: Added _OSI(Module Device)
Feb  8 23:41:33.962526 kernel: ACPI: Added _OSI(Processor Device)
Feb  8 23:41:33.962535 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  8 23:41:33.962542 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  8 23:41:33.962552 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  8 23:41:33.962560 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  8 23:41:33.962568 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  8 23:41:33.962576 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  8 23:41:33.962584 kernel: ACPI: Interpreter enabled
Feb  8 23:41:33.962593 kernel: ACPI: PM: (supports S0 S3 S5)
Feb  8 23:41:33.962600 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  8 23:41:33.962609 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  8 23:41:33.962617 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  8 23:41:33.963711 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  8 23:41:33.963898 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb  8 23:41:33.963995 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb  8 23:41:33.964009 kernel: acpiphp: Slot [3] registered
Feb  8 23:41:33.964018 kernel: acpiphp: Slot [4] registered
Feb  8 23:41:33.964026 kernel: acpiphp: Slot [5] registered
Feb  8 23:41:33.964036 kernel: acpiphp: Slot [6] registered
Feb  8 23:41:33.964049 kernel: acpiphp: Slot [7] registered
Feb  8 23:41:33.964058 kernel: acpiphp: Slot [8] registered
Feb  8 23:41:33.964066 kernel: acpiphp: Slot [9] registered
Feb  8 23:41:33.964075 kernel: acpiphp: Slot [10] registered
Feb  8 23:41:33.964083 kernel: acpiphp: Slot [11] registered
Feb  8 23:41:33.964092 kernel: acpiphp: Slot [12] registered
Feb  8 23:41:33.964100 kernel: acpiphp: Slot [13] registered
Feb  8 23:41:33.964109 kernel: acpiphp: Slot [14] registered
Feb  8 23:41:33.964117 kernel: acpiphp: Slot [15] registered
Feb  8 23:41:33.964126 kernel: acpiphp: Slot [16] registered
Feb  8 23:41:33.964136 kernel: acpiphp: Slot [17] registered
Feb  8 23:41:33.964145 kernel: acpiphp: Slot [18] registered
Feb  8 23:41:33.964153 kernel: acpiphp: Slot [19] registered
Feb  8 23:41:33.964161 kernel: acpiphp: Slot [20] registered
Feb  8 23:41:33.964170 kernel: acpiphp: Slot [21] registered
Feb  8 23:41:33.964178 kernel: acpiphp: Slot [22] registered
Feb  8 23:41:33.964187 kernel: acpiphp: Slot [23] registered
Feb  8 23:41:33.964195 kernel: acpiphp: Slot [24] registered
Feb  8 23:41:33.964203 kernel: acpiphp: Slot [25] registered
Feb  8 23:41:33.964213 kernel: acpiphp: Slot [26] registered
Feb  8 23:41:33.964222 kernel: acpiphp: Slot [27] registered
Feb  8 23:41:33.964230 kernel: acpiphp: Slot [28] registered
Feb  8 23:41:33.964239 kernel: acpiphp: Slot [29] registered
Feb  8 23:41:33.964247 kernel: acpiphp: Slot [30] registered
Feb  8 23:41:33.964256 kernel: acpiphp: Slot [31] registered
Feb  8 23:41:33.964264 kernel: PCI host bridge to bus 0000:00
Feb  8 23:41:33.964370 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  8 23:41:33.964452 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  8 23:41:33.964535 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  8 23:41:33.964615 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb  8 23:41:33.964743 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb  8 23:41:33.964828 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  8 23:41:33.964936 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb  8 23:41:33.965038 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb  8 23:41:33.965151 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb  8 23:41:33.965245 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc120-0xc12f]
Feb  8 23:41:33.965334 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb  8 23:41:33.965425 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb  8 23:41:33.965515 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb  8 23:41:33.965606 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb  8 23:41:33.965741 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb  8 23:41:33.965842 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  8 23:41:33.965935 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  8 23:41:33.966042 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb  8 23:41:33.966136 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb  8 23:41:33.966226 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  8 23:41:33.966315 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb  8 23:41:33.966410 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb  8 23:41:33.966509 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  8 23:41:33.966653 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb  8 23:41:33.966750 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Feb  8 23:41:33.966837 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb  8 23:41:33.966925 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  8 23:41:33.967013 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb  8 23:41:33.967118 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb  8 23:41:33.967208 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Feb  8 23:41:33.967296 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb  8 23:41:33.967384 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  8 23:41:33.967480 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb  8 23:41:33.967579 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Feb  8 23:41:33.971781 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  8 23:41:33.971924 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb  8 23:41:33.972025 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc100-0xc11f]
Feb  8 23:41:33.972116 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  8 23:41:33.972129 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  8 23:41:33.972139 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  8 23:41:33.972148 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  8 23:41:33.972156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  8 23:41:33.972165 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  8 23:41:33.972177 kernel: iommu: Default domain type: Translated 
Feb  8 23:41:33.972186 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb  8 23:41:33.972282 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  8 23:41:33.972371 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  8 23:41:33.972460 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  8 23:41:33.972472 kernel: vgaarb: loaded
Feb  8 23:41:33.972481 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  8 23:41:33.972490 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  8 23:41:33.972499 kernel: PTP clock support registered
Feb  8 23:41:33.972511 kernel: PCI: Using ACPI for IRQ routing
Feb  8 23:41:33.972519 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb  8 23:41:33.972528 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb  8 23:41:33.972537 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb  8 23:41:33.972545 kernel: clocksource: Switched to clocksource kvm-clock
Feb  8 23:41:33.972554 kernel: VFS: Disk quotas dquot_6.6.0
Feb  8 23:41:33.972563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  8 23:41:33.972572 kernel: pnp: PnP ACPI init
Feb  8 23:41:33.972690 kernel: pnp 00:03: [dma 2]
Feb  8 23:41:33.972720 kernel: pnp: PnP ACPI: found 5 devices
Feb  8 23:41:33.972729 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  8 23:41:33.972738 kernel: NET: Registered PF_INET protocol family
Feb  8 23:41:33.972747 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb  8 23:41:33.972756 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb  8 23:41:33.972765 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  8 23:41:33.972773 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  8 23:41:33.972782 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb  8 23:41:33.972793 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb  8 23:41:33.972802 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb  8 23:41:33.972811 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb  8 23:41:33.972819 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  8 23:41:33.972828 kernel: NET: Registered PF_XDP protocol family
Feb  8 23:41:33.972919 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  8 23:41:33.973002 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  8 23:41:33.973081 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  8 23:41:33.973159 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb  8 23:41:33.973242 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb  8 23:41:33.973355 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  8 23:41:33.973454 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  8 23:41:33.973543 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb  8 23:41:33.973555 kernel: PCI: CLS 0 bytes, default 64
Feb  8 23:41:33.973565 kernel: Initialise system trusted keyrings
Feb  8 23:41:33.973574 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb  8 23:41:33.973586 kernel: Key type asymmetric registered
Feb  8 23:41:33.973594 kernel: Asymmetric key parser 'x509' registered
Feb  8 23:41:33.973603 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  8 23:41:33.973612 kernel: io scheduler mq-deadline registered
Feb  8 23:41:33.973620 kernel: io scheduler kyber registered
Feb  8 23:41:33.973646 kernel: io scheduler bfq registered
Feb  8 23:41:33.973655 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb  8 23:41:33.973665 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  8 23:41:33.973674 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  8 23:41:33.973683 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  8 23:41:33.973694 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  8 23:41:33.973703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  8 23:41:33.973712 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  8 23:41:33.973720 kernel: random: crng init done
Feb  8 23:41:33.973729 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  8 23:41:33.973737 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  8 23:41:33.973746 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  8 23:41:33.973840 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb  8 23:41:33.973857 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb  8 23:41:33.973937 kernel: rtc_cmos 00:04: registered as rtc0
Feb  8 23:41:33.974016 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:41:33 UTC (1707435693)
Feb  8 23:41:33.974095 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb  8 23:41:33.974108 kernel: NET: Registered PF_INET6 protocol family
Feb  8 23:41:33.974116 kernel: Segment Routing with IPv6
Feb  8 23:41:33.974125 kernel: In-situ OAM (IOAM) with IPv6
Feb  8 23:41:33.974134 kernel: NET: Registered PF_PACKET protocol family
Feb  8 23:41:33.974142 kernel: Key type dns_resolver registered
Feb  8 23:41:33.974154 kernel: IPI shorthand broadcast: enabled
Feb  8 23:41:33.974163 kernel: sched_clock: Marking stable (742560566, 128617449)->(926650242, -55472227)
Feb  8 23:41:33.974172 kernel: registered taskstats version 1
Feb  8 23:41:33.974180 kernel: Loading compiled-in X.509 certificates
Feb  8 23:41:33.974189 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb  8 23:41:33.974198 kernel: Key type .fscrypt registered
Feb  8 23:41:33.974206 kernel: Key type fscrypt-provisioning registered
Feb  8 23:41:33.974216 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  8 23:41:33.974226 kernel: ima: Allocated hash algorithm: sha1
Feb  8 23:41:33.974236 kernel: ima: No architecture policies found
Feb  8 23:41:33.974244 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb  8 23:41:33.974253 kernel: Write protecting the kernel read-only data: 28672k
Feb  8 23:41:33.974262 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb  8 23:41:33.974271 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb  8 23:41:33.974279 kernel: Run /init as init process
Feb  8 23:41:33.974288 kernel:   with arguments:
Feb  8 23:41:33.974297 kernel:     /init
Feb  8 23:41:33.974307 kernel:   with environment:
Feb  8 23:41:33.974315 kernel:     HOME=/
Feb  8 23:41:33.974324 kernel:     TERM=linux
Feb  8 23:41:33.974332 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  8 23:41:33.974344 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  8 23:41:33.974356 systemd[1]: Detected virtualization kvm.
Feb  8 23:41:33.974366 systemd[1]: Detected architecture x86-64.
Feb  8 23:41:33.974375 systemd[1]: Running in initrd.
Feb  8 23:41:33.974387 systemd[1]: No hostname configured, using default hostname.
Feb  8 23:41:33.974396 systemd[1]: Hostname set to <localhost>.
Feb  8 23:41:33.974406 systemd[1]: Initializing machine ID from VM UUID.
Feb  8 23:41:33.974415 systemd[1]: Queued start job for default target initrd.target.
Feb  8 23:41:33.974425 systemd[1]: Started systemd-ask-password-console.path.
Feb  8 23:41:33.974434 systemd[1]: Reached target cryptsetup.target.
Feb  8 23:41:33.974443 systemd[1]: Reached target paths.target.
Feb  8 23:41:33.974452 systemd[1]: Reached target slices.target.
Feb  8 23:41:33.974463 systemd[1]: Reached target swap.target.
Feb  8 23:41:33.974473 systemd[1]: Reached target timers.target.
Feb  8 23:41:33.974482 systemd[1]: Listening on iscsid.socket.
Feb  8 23:41:33.974492 systemd[1]: Listening on iscsiuio.socket.
Feb  8 23:41:33.974501 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  8 23:41:33.974510 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  8 23:41:33.974520 systemd[1]: Listening on systemd-journald.socket.
Feb  8 23:41:33.974531 systemd[1]: Listening on systemd-networkd.socket.
Feb  8 23:41:33.974540 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  8 23:41:33.974549 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  8 23:41:33.974559 systemd[1]: Reached target sockets.target.
Feb  8 23:41:33.974568 systemd[1]: Starting kmod-static-nodes.service...
Feb  8 23:41:33.974586 systemd[1]: Finished network-cleanup.service.
Feb  8 23:41:33.974597 systemd[1]: Starting systemd-fsck-usr.service...
Feb  8 23:41:33.974608 systemd[1]: Starting systemd-journald.service...
Feb  8 23:41:33.974618 systemd[1]: Starting systemd-modules-load.service...
Feb  8 23:41:33.979725 systemd[1]: Starting systemd-resolved.service...
Feb  8 23:41:33.979760 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  8 23:41:33.979771 systemd[1]: Finished kmod-static-nodes.service.
Feb  8 23:41:33.979781 systemd[1]: Finished systemd-fsck-usr.service.
Feb  8 23:41:33.979795 systemd-journald[185]: Journal started
Feb  8 23:41:33.979876 systemd-journald[185]: Runtime Journal (/run/log/journal/a594cb76176b4c88b655d1e2558ded20) is 4.9M, max 39.5M, 34.5M free.
Feb  8 23:41:33.969291 systemd-modules-load[186]: Inserted module 'overlay'
Feb  8 23:41:34.027711 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  8 23:41:34.027742 kernel: Bridge firewalling registered
Feb  8 23:41:34.027774 systemd[1]: Started systemd-journald.service.
Feb  8 23:41:34.027791 kernel: audit: type=1130 audit(1707435694.021:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.002459 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb  8 23:41:34.031730 kernel: audit: type=1130 audit(1707435694.027:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.004343 systemd-resolved[187]: Positive Trust Anchors:
Feb  8 23:41:34.044913 kernel: SCSI subsystem initialized
Feb  8 23:41:34.044937 kernel: audit: type=1130 audit(1707435694.032:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.044950 kernel: audit: type=1130 audit(1707435694.036:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.004353 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  8 23:41:34.004391 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  8 23:41:34.017300 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb  8 23:41:34.028261 systemd[1]: Started systemd-resolved.service.
Feb  8 23:41:34.033510 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  8 23:41:34.054676 kernel: audit: type=1130 audit(1707435694.050:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.037720 systemd[1]: Reached target nss-lookup.target.
Feb  8 23:41:34.038989 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  8 23:41:34.040127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  8 23:41:34.050597 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  8 23:41:34.062657 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  8 23:41:34.066598 kernel: device-mapper: uevent: version 1.0.3
Feb  8 23:41:34.066643 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  8 23:41:34.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.068390 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  8 23:41:34.069808 systemd[1]: Starting dracut-cmdline.service...
Feb  8 23:41:34.075274 kernel: audit: type=1130 audit(1707435694.068:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.078747 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb  8 23:41:34.079762 systemd[1]: Finished systemd-modules-load.service.
Feb  8 23:41:34.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.081524 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:41:34.087019 kernel: audit: type=1130 audit(1707435694.080:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.090248 dracut-cmdline[202]: dracut-dracut-053
Feb  8 23:41:34.092818 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  8 23:41:34.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.094861 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:41:34.099665 kernel: audit: type=1130 audit(1707435694.094:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.165662 kernel: Loading iSCSI transport class v2.0-870.
Feb  8 23:41:34.179655 kernel: iscsi: registered transport (tcp)
Feb  8 23:41:34.205704 kernel: iscsi: registered transport (qla4xxx)
Feb  8 23:41:34.205771 kernel: QLogic iSCSI HBA Driver
Feb  8 23:41:34.261342 systemd[1]: Finished dracut-cmdline.service.
Feb  8 23:41:34.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.271674 kernel: audit: type=1130 audit(1707435694.262:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.271750 systemd[1]: Starting dracut-pre-udev.service...
Feb  8 23:41:34.358743 kernel: raid6: sse2x4   gen() 12117 MB/s
Feb  8 23:41:34.375689 kernel: raid6: sse2x4   xor()  4787 MB/s
Feb  8 23:41:34.392753 kernel: raid6: sse2x2   gen() 13822 MB/s
Feb  8 23:41:34.409741 kernel: raid6: sse2x2   xor()  8270 MB/s
Feb  8 23:41:34.426739 kernel: raid6: sse2x1   gen() 10592 MB/s
Feb  8 23:41:34.444410 kernel: raid6: sse2x1   xor()  6874 MB/s
Feb  8 23:41:34.444486 kernel: raid6: using algorithm sse2x2 gen() 13822 MB/s
Feb  8 23:41:34.444513 kernel: raid6: .... xor() 8270 MB/s, rmw enabled
Feb  8 23:41:34.445420 kernel: raid6: using ssse3x2 recovery algorithm
Feb  8 23:41:34.461324 kernel: xor: measuring software checksum speed
Feb  8 23:41:34.461383 kernel:    prefetch64-sse  : 18464 MB/sec
Feb  8 23:41:34.463772 kernel:    generic_sse     : 16713 MB/sec
Feb  8 23:41:34.463817 kernel: xor: using function: prefetch64-sse (18464 MB/sec)
Feb  8 23:41:34.579704 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb  8 23:41:34.594951 systemd[1]: Finished dracut-pre-udev.service.
Feb  8 23:41:34.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.596000 audit: BPF prog-id=7 op=LOAD
Feb  8 23:41:34.596000 audit: BPF prog-id=8 op=LOAD
Feb  8 23:41:34.598825 systemd[1]: Starting systemd-udevd.service...
Feb  8 23:41:34.614669 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb  8 23:41:34.620042 systemd[1]: Started systemd-udevd.service.
Feb  8 23:41:34.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.633394 systemd[1]: Starting dracut-pre-trigger.service...
Feb  8 23:41:34.646454 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Feb  8 23:41:34.686324 systemd[1]: Finished dracut-pre-trigger.service.
Feb  8 23:41:34.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.688863 systemd[1]: Starting systemd-udev-trigger.service...
Feb  8 23:41:34.743764 systemd[1]: Finished systemd-udev-trigger.service.
Feb  8 23:41:34.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:34.791681 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Feb  8 23:41:34.807041 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb  8 23:41:34.807091 kernel: GPT:17805311 != 41943039
Feb  8 23:41:34.807104 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb  8 23:41:34.807117 kernel: GPT:17805311 != 41943039
Feb  8 23:41:34.807130 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb  8 23:41:34.807142 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  8 23:41:34.845656 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443)
Feb  8 23:41:34.863670 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  8 23:41:34.915575 kernel: libata version 3.00 loaded.
Feb  8 23:41:34.915599 kernel: ata_piix 0000:00:01.1: version 2.13
Feb  8 23:41:34.915796 kernel: scsi host0: ata_piix
Feb  8 23:41:34.915927 kernel: scsi host1: ata_piix
Feb  8 23:41:34.916043 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Feb  8 23:41:34.916057 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Feb  8 23:41:34.914935 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  8 23:41:34.919822 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  8 23:41:34.924339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  8 23:41:34.928378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  8 23:41:34.930858 systemd[1]: Starting disk-uuid.service...
Feb  8 23:41:34.939938 disk-uuid[460]: Primary Header is updated.
Feb  8 23:41:34.939938 disk-uuid[460]: Secondary Entries is updated.
Feb  8 23:41:34.939938 disk-uuid[460]: Secondary Header is updated.
Feb  8 23:41:34.951655 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  8 23:41:34.955675 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  8 23:41:35.967682 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  8 23:41:35.970809 disk-uuid[461]: The operation has completed successfully.
Feb  8 23:41:36.044893 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  8 23:41:36.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:36.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:36.045145 systemd[1]: Finished disk-uuid.service.
Feb  8 23:41:36.054868 systemd[1]: Starting verity-setup.service...
Feb  8 23:41:36.094672 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Feb  8 23:41:36.759342 systemd[1]: Found device dev-mapper-usr.device.
Feb  8 23:41:36.763196 systemd[1]: Mounting sysusr-usr.mount...
Feb  8 23:41:36.765150 systemd[1]: Finished verity-setup.service.
Feb  8 23:41:36.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:36.964819 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  8 23:41:36.963849 systemd[1]: Mounted sysusr-usr.mount.
Feb  8 23:41:36.964505 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  8 23:41:36.965370 systemd[1]: Starting ignition-setup.service...
Feb  8 23:41:36.970201 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  8 23:41:37.004319 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:41:37.004381 kernel: BTRFS info (device vda6): using free space tree
Feb  8 23:41:37.004404 kernel: BTRFS info (device vda6): has skinny extents
Feb  8 23:41:37.032955 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  8 23:41:37.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.069958 systemd[1]: Finished ignition-setup.service.
Feb  8 23:41:37.071378 systemd[1]: Starting ignition-fetch-offline.service...
Feb  8 23:41:37.114458 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  8 23:41:37.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.115000 audit: BPF prog-id=9 op=LOAD
Feb  8 23:41:37.116959 systemd[1]: Starting systemd-networkd.service...
Feb  8 23:41:37.143625 systemd-networkd[631]: lo: Link UP
Feb  8 23:41:37.144496 systemd-networkd[631]: lo: Gained carrier
Feb  8 23:41:37.145786 systemd-networkd[631]: Enumeration completed
Feb  8 23:41:37.146474 systemd[1]: Started systemd-networkd.service.
Feb  8 23:41:37.147327 systemd-networkd[631]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  8 23:41:37.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.147606 systemd[1]: Reached target network.target.
Feb  8 23:41:37.150995 systemd-networkd[631]: eth0: Link UP
Feb  8 23:41:37.151521 systemd-networkd[631]: eth0: Gained carrier
Feb  8 23:41:37.153105 systemd[1]: Starting iscsiuio.service...
Feb  8 23:41:37.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.158780 systemd[1]: Started iscsiuio.service.
Feb  8 23:41:37.160469 systemd[1]: Starting iscsid.service...
Feb  8 23:41:37.165769 iscsid[636]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  8 23:41:37.165769 iscsid[636]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  8 23:41:37.165769 iscsid[636]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  8 23:41:37.165769 iscsid[636]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  8 23:41:37.165769 iscsid[636]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  8 23:41:37.165769 iscsid[636]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  8 23:41:37.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.166754 systemd[1]: Started iscsid.service.
Feb  8 23:41:37.171257 systemd[1]: Starting dracut-initqueue.service...
Feb  8 23:41:37.171743 systemd-networkd[631]: eth0: DHCPv4 address 172.24.4.77/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb  8 23:41:37.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.187046 systemd[1]: Finished dracut-initqueue.service.
Feb  8 23:41:37.187686 systemd[1]: Reached target remote-fs-pre.target.
Feb  8 23:41:37.188233 systemd[1]: Reached target remote-cryptsetup.target.
Feb  8 23:41:37.189253 systemd[1]: Reached target remote-fs.target.
Feb  8 23:41:37.191510 systemd[1]: Starting dracut-pre-mount.service...
Feb  8 23:41:37.201872 systemd[1]: Finished dracut-pre-mount.service.
Feb  8 23:41:37.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.494060 ignition[603]: Ignition 2.14.0
Feb  8 23:41:37.495810 ignition[603]: Stage: fetch-offline
Feb  8 23:41:37.495975 ignition[603]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:37.496019 ignition[603]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:37.498309 ignition[603]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:37.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.501125 systemd[1]: Finished ignition-fetch-offline.service.
Feb  8 23:41:37.498543 ignition[603]: parsed url from cmdline: ""
Feb  8 23:41:37.505269 systemd[1]: Starting ignition-fetch.service...
Feb  8 23:41:37.498552 ignition[603]: no config URL provided
Feb  8 23:41:37.498566 ignition[603]: reading system config file "/usr/lib/ignition/user.ign"
Feb  8 23:41:37.498585 ignition[603]: no config at "/usr/lib/ignition/user.ign"
Feb  8 23:41:37.498600 ignition[603]: failed to fetch config: resource requires networking
Feb  8 23:41:37.499069 ignition[603]: Ignition finished successfully
Feb  8 23:41:37.525269 ignition[655]: Ignition 2.14.0
Feb  8 23:41:37.525297 ignition[655]: Stage: fetch
Feb  8 23:41:37.525542 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:37.525588 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:37.527668 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:37.527909 ignition[655]: parsed url from cmdline: ""
Feb  8 23:41:37.527918 ignition[655]: no config URL provided
Feb  8 23:41:37.527932 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Feb  8 23:41:37.527951 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Feb  8 23:41:37.536796 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb  8 23:41:37.536854 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb  8 23:41:37.537442 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb  8 23:41:37.749962 ignition[655]: GET result: OK
Feb  8 23:41:37.750272 ignition[655]: parsing config with SHA512: cbc666ffb8354eac321f64c0e22ebf6d52a642e0b463b69cfe261b797057548a5019cc813d311d173720dd22f1489e2fb636724e29fc723a9969035d08a15997
Feb  8 23:41:37.815690 unknown[655]: fetched base config from "system"
Feb  8 23:41:37.815721 unknown[655]: fetched base config from "system"
Feb  8 23:41:37.817127 ignition[655]: fetch: fetch complete
Feb  8 23:41:37.815738 unknown[655]: fetched user config from "openstack"
Feb  8 23:41:37.817141 ignition[655]: fetch: fetch passed
Feb  8 23:41:37.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.822866 systemd[1]: Finished ignition-fetch.service.
Feb  8 23:41:37.817231 ignition[655]: Ignition finished successfully
Feb  8 23:41:37.827136 systemd[1]: Starting ignition-kargs.service...
Feb  8 23:41:37.859442 ignition[661]: Ignition 2.14.0
Feb  8 23:41:37.859487 ignition[661]: Stage: kargs
Feb  8 23:41:37.859950 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:37.860019 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:37.863517 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:37.866933 ignition[661]: kargs: kargs passed
Feb  8 23:41:37.867125 ignition[661]: Ignition finished successfully
Feb  8 23:41:37.869556 systemd[1]: Finished ignition-kargs.service.
Feb  8 23:41:37.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.878113 systemd[1]: Starting ignition-disks.service...
Feb  8 23:41:37.891659 ignition[667]: Ignition 2.14.0
Feb  8 23:41:37.891677 ignition[667]: Stage: disks
Feb  8 23:41:37.891851 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:37.891890 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:37.893473 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:37.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.896428 systemd[1]: Finished ignition-disks.service.
Feb  8 23:41:37.895464 ignition[667]: disks: disks passed
Feb  8 23:41:37.897097 systemd[1]: Reached target initrd-root-device.target.
Feb  8 23:41:37.895536 ignition[667]: Ignition finished successfully
Feb  8 23:41:37.897596 systemd[1]: Reached target local-fs-pre.target.
Feb  8 23:41:37.898212 systemd[1]: Reached target local-fs.target.
Feb  8 23:41:37.899266 systemd[1]: Reached target sysinit.target.
Feb  8 23:41:37.900273 systemd[1]: Reached target basic.target.
Feb  8 23:41:37.902601 systemd[1]: Starting systemd-fsck-root.service...
Feb  8 23:41:37.926967 systemd-fsck[674]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks
Feb  8 23:41:37.945412 systemd[1]: Finished systemd-fsck-root.service.
Feb  8 23:41:37.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:37.947592 systemd[1]: Mounting sysroot.mount...
Feb  8 23:41:37.982034 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  8 23:41:37.983439 systemd[1]: Mounted sysroot.mount.
Feb  8 23:41:37.986073 systemd[1]: Reached target initrd-root-fs.target.
Feb  8 23:41:37.992406 systemd[1]: Mounting sysroot-usr.mount...
Feb  8 23:41:37.995892 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb  8 23:41:37.998200 systemd[1]: Starting flatcar-openstack-hostname.service...
Feb  8 23:41:37.999684 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  8 23:41:37.999782 systemd[1]: Reached target ignition-diskful.target.
Feb  8 23:41:38.007686 systemd[1]: Mounted sysroot-usr.mount.
Feb  8 23:41:38.018958 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  8 23:41:38.025255 systemd[1]: Starting initrd-setup-root.service...
Feb  8 23:41:38.041031 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory
Feb  8 23:41:38.055466 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681)
Feb  8 23:41:38.056515 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory
Feb  8 23:41:38.066929 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:41:38.067024 kernel: BTRFS info (device vda6): using free space tree
Feb  8 23:41:38.067037 kernel: BTRFS info (device vda6): has skinny extents
Feb  8 23:41:38.073488 initrd-setup-root[718]: cut: /sysroot/etc/shadow: No such file or directory
Feb  8 23:41:38.086551 initrd-setup-root[726]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  8 23:41:38.101255 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  8 23:41:38.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.195871 systemd[1]: Finished initrd-setup-root.service.
Feb  8 23:41:38.211083 kernel: kauditd_printk_skb: 22 callbacks suppressed
Feb  8 23:41:38.211115 kernel: audit: type=1130 audit(1707435698.195:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.197565 systemd[1]: Starting ignition-mount.service...
Feb  8 23:41:38.213545 systemd[1]: Starting sysroot-boot.service...
Feb  8 23:41:38.220740 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb  8 23:41:38.220872 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb  8 23:41:38.269001 ignition[748]: INFO     : Ignition 2.14.0
Feb  8 23:41:38.269001 ignition[748]: INFO     : Stage: mount
Feb  8 23:41:38.270796 ignition[748]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:38.270796 ignition[748]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:38.273660 ignition[748]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:38.276437 ignition[748]: INFO     : mount: mount passed
Feb  8 23:41:38.277002 ignition[748]: INFO     : Ignition finished successfully
Feb  8 23:41:38.279787 systemd[1]: Finished ignition-mount.service.
Feb  8 23:41:38.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.285681 kernel: audit: type=1130 audit(1707435698.279:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.289486 systemd[1]: Finished sysroot-boot.service.
Feb  8 23:41:38.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.294664 kernel: audit: type=1130 audit(1707435698.289:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.313056 coreos-metadata[680]: Feb 08 23:41:38.312 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb  8 23:41:38.328590 coreos-metadata[680]: Feb 08 23:41:38.328 INFO Fetch successful
Feb  8 23:41:38.328590 coreos-metadata[680]: Feb 08 23:41:38.328 INFO wrote hostname ci-3510-3-2-4-0b571dfe90.novalocal to /sysroot/etc/hostname
Feb  8 23:41:38.333221 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb  8 23:41:38.333410 systemd[1]: Finished flatcar-openstack-hostname.service.
Feb  8 23:41:38.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.337534 systemd[1]: Starting ignition-files.service...
Feb  8 23:41:38.353321 kernel: audit: type=1130 audit(1707435698.335:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.353364 kernel: audit: type=1131 audit(1707435698.335:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:38.360568 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  8 23:41:38.376694 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758)
Feb  8 23:41:38.384448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  8 23:41:38.384507 kernel: BTRFS info (device vda6): using free space tree
Feb  8 23:41:38.384519 kernel: BTRFS info (device vda6): has skinny extents
Feb  8 23:41:38.395499 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  8 23:41:38.417060 ignition[777]: INFO     : Ignition 2.14.0
Feb  8 23:41:38.417060 ignition[777]: INFO     : Stage: files
Feb  8 23:41:38.419575 ignition[777]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:38.419575 ignition[777]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:38.419575 ignition[777]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:38.426138 ignition[777]: DEBUG    : files: compiled without relabeling support, skipping
Feb  8 23:41:38.426138 ignition[777]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  8 23:41:38.426138 ignition[777]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  8 23:41:38.433376 ignition[777]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  8 23:41:38.435877 ignition[777]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  8 23:41:38.439583 ignition[777]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  8 23:41:38.437489 unknown[777]: wrote ssh authorized keys file for user: core
Feb  8 23:41:38.443943 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  8 23:41:38.443943 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  8 23:41:38.443943 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  8 23:41:38.443943 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb  8 23:41:38.858110 systemd-networkd[631]: eth0: Gained IPv6LL
Feb  8 23:41:39.002072 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  8 23:41:39.732162 ignition[777]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb  8 23:41:39.732162 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  8 23:41:39.739688 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  8 23:41:39.739688 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb  8 23:41:40.213890 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  8 23:41:40.714495 ignition[777]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb  8 23:41:40.718541 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  8 23:41:40.718541 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  8 23:41:40.718541 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb  8 23:41:40.853682 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb  8 23:41:41.770196 ignition[777]: DEBUG    : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb  8 23:41:41.770196 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  8 23:41:41.770196 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  8 23:41:41.778399 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb  8 23:41:41.881292 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  8 23:41:44.220926 ignition[777]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb  8 23:41:44.222765 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  8 23:41:44.223757 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/install.sh"
Feb  8 23:41:44.224957 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb  8 23:41:44.226074 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  8 23:41:44.226074 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  8 23:41:44.572690 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  8 23:41:44.575218 ignition[777]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  8 23:41:44.575218 ignition[777]: INFO     : files: op(b): [started]  processing unit "coreos-metadata-sshkeys@.service"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(b): op(c): [started]  writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(b): op(c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(f): [started]  processing unit "containerd.service"
Feb  8 23:41:44.617741 ignition[777]: INFO     : files: op(f): op(10): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(f): op(10): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(f): [finished] processing unit "containerd.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(11): [started]  processing unit "prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(11): op(12): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(11): [finished] processing unit "prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(13): [started]  processing unit "prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(13): op(14): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(13): [finished] processing unit "prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(15): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(16): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(17): [started]  setting preset to enabled for "prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: op(17): [finished] setting preset to enabled for "prepare-critools.service"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: createResultFile: createFiles: op(18): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  8 23:41:44.641091 ignition[777]: INFO     : files: files passed
Feb  8 23:41:44.641091 ignition[777]: INFO     : Ignition finished successfully
Feb  8 23:41:44.713169 kernel: audit: type=1130 audit(1707435704.646:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.713197 kernel: audit: type=1130 audit(1707435704.673:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.713216 kernel: audit: type=1131 audit(1707435704.673:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.713229 kernel: audit: type=1130 audit(1707435704.688:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.641864 systemd[1]: Finished ignition-files.service.
Feb  8 23:41:44.650689 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  8 23:41:44.714838 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  8 23:41:44.660299 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  8 23:41:44.662591 systemd[1]: Starting ignition-quench.service...
Feb  8 23:41:44.670947 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  8 23:41:44.671231 systemd[1]: Finished ignition-quench.service.
Feb  8 23:41:44.675279 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  8 23:41:44.689170 systemd[1]: Reached target ignition-complete.target.
Feb  8 23:41:44.699840 systemd[1]: Starting initrd-parse-etc.service...
Feb  8 23:41:44.731320 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  8 23:41:44.749942 kernel: audit: type=1130 audit(1707435704.731:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.749988 kernel: audit: type=1131 audit(1707435704.731:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.731437 systemd[1]: Finished initrd-parse-etc.service.
Feb  8 23:41:44.732126 systemd[1]: Reached target initrd-fs.target.
Feb  8 23:41:44.750309 systemd[1]: Reached target initrd.target.
Feb  8 23:41:44.751593 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  8 23:41:44.752530 systemd[1]: Starting dracut-pre-pivot.service...
Feb  8 23:41:44.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.765409 systemd[1]: Finished dracut-pre-pivot.service.
Feb  8 23:41:44.770222 kernel: audit: type=1130 audit(1707435704.765:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.769897 systemd[1]: Starting initrd-cleanup.service...
Feb  8 23:41:44.780929 systemd[1]: Stopped target nss-lookup.target.
Feb  8 23:41:44.781979 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  8 23:41:44.783042 systemd[1]: Stopped target timers.target.
Feb  8 23:41:44.784050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  8 23:41:44.784736 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  8 23:41:44.793031 kernel: audit: type=1131 audit(1707435704.788:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.788911 systemd[1]: Stopped target initrd.target.
Feb  8 23:41:44.793577 systemd[1]: Stopped target basic.target.
Feb  8 23:41:44.794502 systemd[1]: Stopped target ignition-complete.target.
Feb  8 23:41:44.795466 systemd[1]: Stopped target ignition-diskful.target.
Feb  8 23:41:44.796432 systemd[1]: Stopped target initrd-root-device.target.
Feb  8 23:41:44.797387 systemd[1]: Stopped target remote-fs.target.
Feb  8 23:41:44.798306 systemd[1]: Stopped target remote-fs-pre.target.
Feb  8 23:41:44.799235 systemd[1]: Stopped target sysinit.target.
Feb  8 23:41:44.800145 systemd[1]: Stopped target local-fs.target.
Feb  8 23:41:44.801065 systemd[1]: Stopped target local-fs-pre.target.
Feb  8 23:41:44.801994 systemd[1]: Stopped target swap.target.
Feb  8 23:41:44.802853 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  8 23:41:44.807692 kernel: audit: type=1131 audit(1707435704.803:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.803008 systemd[1]: Stopped dracut-pre-mount.service.
Feb  8 23:41:44.803909 systemd[1]: Stopped target cryptsetup.target.
Feb  8 23:41:44.812754 kernel: audit: type=1131 audit(1707435704.808:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.808170 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  8 23:41:44.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.808310 systemd[1]: Stopped dracut-initqueue.service.
Feb  8 23:41:44.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.809176 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  8 23:41:44.809320 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  8 23:41:44.813314 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  8 23:41:44.813455 systemd[1]: Stopped ignition-files.service.
Feb  8 23:41:44.815133 systemd[1]: Stopping ignition-mount.service...
Feb  8 23:41:44.815680 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  8 23:41:44.815835 systemd[1]: Stopped kmod-static-nodes.service.
Feb  8 23:41:44.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.823075 systemd[1]: Stopping sysroot-boot.service...
Feb  8 23:41:44.827531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  8 23:41:44.827734 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  8 23:41:44.828285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  8 23:41:44.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.828386 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  8 23:41:44.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.831346 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  8 23:41:44.831442 systemd[1]: Finished initrd-cleanup.service.
Feb  8 23:41:44.834603 ignition[815]: INFO     : Ignition 2.14.0
Feb  8 23:41:44.834603 ignition[815]: INFO     : Stage: umount
Feb  8 23:41:44.835752 ignition[815]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  8 23:41:44.835752 ignition[815]: DEBUG    : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb  8 23:41:44.838369 ignition[815]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb  8 23:41:44.838369 ignition[815]: INFO     : umount: umount passed
Feb  8 23:41:44.838369 ignition[815]: INFO     : Ignition finished successfully
Feb  8 23:41:44.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.838303 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  8 23:41:44.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.838392 systemd[1]: Stopped ignition-mount.service.
Feb  8 23:41:44.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.839673 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  8 23:41:44.839767 systemd[1]: Stopped ignition-disks.service.
Feb  8 23:41:44.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.840514 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  8 23:41:44.840549 systemd[1]: Stopped ignition-kargs.service.
Feb  8 23:41:44.841685 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb  8 23:41:44.841725 systemd[1]: Stopped ignition-fetch.service.
Feb  8 23:41:44.842667 systemd[1]: Stopped target network.target.
Feb  8 23:41:44.843511 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  8 23:41:44.843550 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  8 23:41:44.844528 systemd[1]: Stopped target paths.target.
Feb  8 23:41:44.845534 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  8 23:41:44.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.848670 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  8 23:41:44.850838 systemd[1]: Stopped target slices.target.
Feb  8 23:41:44.855918 systemd[1]: Stopped target sockets.target.
Feb  8 23:41:44.856874 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  8 23:41:44.856899 systemd[1]: Closed iscsid.socket.
Feb  8 23:41:44.858053 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  8 23:41:44.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.858078 systemd[1]: Closed iscsiuio.socket.
Feb  8 23:41:44.858939 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  8 23:41:44.858977 systemd[1]: Stopped ignition-setup.service.
Feb  8 23:41:44.859982 systemd[1]: Stopping systemd-networkd.service...
Feb  8 23:41:44.861411 systemd[1]: Stopping systemd-resolved.service...
Feb  8 23:41:44.863742 systemd-networkd[631]: eth0: DHCPv6 lease lost
Feb  8 23:41:44.874000 audit: BPF prog-id=9 op=UNLOAD
Feb  8 23:41:44.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.864807 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  8 23:41:44.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.864905 systemd[1]: Stopped systemd-networkd.service.
Feb  8 23:41:44.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.867533 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  8 23:41:44.867566 systemd[1]: Closed systemd-networkd.socket.
Feb  8 23:41:44.872070 systemd[1]: Stopping network-cleanup.service...
Feb  8 23:41:44.874651 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  8 23:41:44.874734 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  8 23:41:44.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.875803 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  8 23:41:44.887000 audit: BPF prog-id=6 op=UNLOAD
Feb  8 23:41:44.875855 systemd[1]: Stopped systemd-sysctl.service.
Feb  8 23:41:44.877233 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  8 23:41:44.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.877274 systemd[1]: Stopped systemd-modules-load.service.
Feb  8 23:41:44.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.878069 systemd[1]: Stopping systemd-udevd.service...
Feb  8 23:41:44.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.880532 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  8 23:41:44.880653 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  8 23:41:44.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.881431 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  8 23:41:44.881558 systemd[1]: Stopped systemd-resolved.service.
Feb  8 23:41:44.885317 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  8 23:41:44.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:44.885484 systemd[1]: Stopped systemd-udevd.service.
Feb  8 23:41:44.887566 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  8 23:41:44.887605 systemd[1]: Closed systemd-udevd-control.socket.
Feb  8 23:41:44.889041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  8 23:41:44.889076 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  8 23:41:44.889678 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  8 23:41:44.889727 systemd[1]: Stopped dracut-pre-udev.service.
Feb  8 23:41:44.890739 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  8 23:41:44.890787 systemd[1]: Stopped dracut-cmdline.service.
Feb  8 23:41:44.891864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  8 23:41:44.891901 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  8 23:41:44.893594 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  8 23:41:44.894408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  8 23:41:44.894466 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  8 23:41:44.896944 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  8 23:41:44.897050 systemd[1]: Stopped sysroot-boot.service.
Feb  8 23:41:44.897720 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  8 23:41:44.897815 systemd[1]: Stopped network-cleanup.service.
Feb  8 23:41:44.898361 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  8 23:41:44.898402 systemd[1]: Stopped initrd-setup-root.service.
Feb  8 23:41:44.900825 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  8 23:41:44.900941 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  8 23:41:44.901605 systemd[1]: Reached target initrd-switch-root.target.
Feb  8 23:41:44.902895 systemd[1]: Starting initrd-switch-root.service...
Feb  8 23:41:44.920000 audit: BPF prog-id=8 op=UNLOAD
Feb  8 23:41:44.920000 audit: BPF prog-id=7 op=UNLOAD
Feb  8 23:41:44.918758 systemd[1]: Switching root.
Feb  8 23:41:44.924000 audit: BPF prog-id=5 op=UNLOAD
Feb  8 23:41:44.924000 audit: BPF prog-id=4 op=UNLOAD
Feb  8 23:41:44.924000 audit: BPF prog-id=3 op=UNLOAD
Feb  8 23:41:44.939807 iscsid[636]: iscsid shutting down.
Feb  8 23:41:44.940397 systemd-journald[185]: Journal stopped
Feb  8 23:41:50.977223 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Feb  8 23:41:50.977292 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  8 23:41:50.977308 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  8 23:41:50.977321 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  8 23:41:50.977332 kernel: SELinux:  policy capability network_peer_controls=1
Feb  8 23:41:50.977348 kernel: SELinux:  policy capability open_perms=1
Feb  8 23:41:50.977360 kernel: SELinux:  policy capability extended_socket_class=1
Feb  8 23:41:50.977376 kernel: SELinux:  policy capability always_check_network=0
Feb  8 23:41:50.977387 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  8 23:41:50.977401 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  8 23:41:50.977415 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  8 23:41:50.977426 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  8 23:41:50.977439 systemd[1]: Successfully loaded SELinux policy in 101.443ms.
Feb  8 23:41:50.977456 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.310ms.
Feb  8 23:41:50.977471 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  8 23:41:50.977485 systemd[1]: Detected virtualization kvm.
Feb  8 23:41:50.977498 systemd[1]: Detected architecture x86-64.
Feb  8 23:41:50.977510 systemd[1]: Detected first boot.
Feb  8 23:41:50.977523 systemd[1]: Hostname set to <ci-3510-3-2-4-0b571dfe90.novalocal>.
Feb  8 23:41:50.977536 systemd[1]: Initializing machine ID from VM UUID.
Feb  8 23:41:50.977549 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  8 23:41:50.977565 systemd[1]: Populated /etc with preset unit settings.
Feb  8 23:41:50.977580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:41:50.977596 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:41:50.977610 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:41:50.977623 systemd[1]: Queued start job for default target multi-user.target.
Feb  8 23:41:50.977666 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb  8 23:41:50.977680 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  8 23:41:50.977693 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  8 23:41:50.977706 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb  8 23:41:50.977726 systemd[1]: Created slice system-getty.slice.
Feb  8 23:41:50.977739 systemd[1]: Created slice system-modprobe.slice.
Feb  8 23:41:50.977751 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  8 23:41:50.977764 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  8 23:41:50.977776 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  8 23:41:50.977789 systemd[1]: Created slice user.slice.
Feb  8 23:41:50.977801 systemd[1]: Started systemd-ask-password-console.path.
Feb  8 23:41:50.977813 systemd[1]: Started systemd-ask-password-wall.path.
Feb  8 23:41:50.977825 systemd[1]: Set up automount boot.automount.
Feb  8 23:41:50.977838 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  8 23:41:50.977852 systemd[1]: Reached target integritysetup.target.
Feb  8 23:41:50.977866 systemd[1]: Reached target remote-cryptsetup.target.
Feb  8 23:41:50.977879 systemd[1]: Reached target remote-fs.target.
Feb  8 23:41:50.977891 systemd[1]: Reached target slices.target.
Feb  8 23:41:50.977903 systemd[1]: Reached target swap.target.
Feb  8 23:41:50.977915 systemd[1]: Reached target torcx.target.
Feb  8 23:41:50.977928 systemd[1]: Reached target veritysetup.target.
Feb  8 23:41:50.977941 systemd[1]: Listening on systemd-coredump.socket.
Feb  8 23:41:50.977954 systemd[1]: Listening on systemd-initctl.socket.
Feb  8 23:41:50.977966 kernel: kauditd_printk_skb: 46 callbacks suppressed
Feb  8 23:41:50.977979 kernel: audit: type=1400 audit(1707435710.731:87): avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  8 23:41:50.977992 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  8 23:41:50.978005 kernel: audit: type=1335 audit(1707435710.731:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb  8 23:41:50.978017 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  8 23:41:50.978030 systemd[1]: Listening on systemd-journald.socket.
Feb  8 23:41:50.978045 systemd[1]: Listening on systemd-networkd.socket.
Feb  8 23:41:50.978057 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  8 23:41:50.978069 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  8 23:41:50.978081 systemd[1]: Listening on systemd-userdbd.socket.
Feb  8 23:41:50.978093 systemd[1]: Mounting dev-hugepages.mount...
Feb  8 23:41:50.978105 systemd[1]: Mounting dev-mqueue.mount...
Feb  8 23:41:50.978118 systemd[1]: Mounting media.mount...
Feb  8 23:41:50.978130 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  8 23:41:50.978143 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  8 23:41:50.978157 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  8 23:41:50.978170 systemd[1]: Mounting tmp.mount...
Feb  8 23:41:50.978182 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  8 23:41:50.978194 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  8 23:41:50.978207 systemd[1]: Starting kmod-static-nodes.service...
Feb  8 23:41:50.978219 systemd[1]: Starting modprobe@configfs.service...
Feb  8 23:41:50.978231 systemd[1]: Starting modprobe@dm_mod.service...
Feb  8 23:41:50.978244 systemd[1]: Starting modprobe@drm.service...
Feb  8 23:41:50.978257 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  8 23:41:50.978271 systemd[1]: Starting modprobe@fuse.service...
Feb  8 23:41:50.978283 systemd[1]: Starting modprobe@loop.service...
Feb  8 23:41:50.978297 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  8 23:41:50.978310 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb  8 23:41:50.978322 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb  8 23:41:50.978334 systemd[1]: Starting systemd-journald.service...
Feb  8 23:41:50.978346 systemd[1]: Starting systemd-modules-load.service...
Feb  8 23:41:50.978359 systemd[1]: Starting systemd-network-generator.service...
Feb  8 23:41:50.978371 systemd[1]: Starting systemd-remount-fs.service...
Feb  8 23:41:50.978385 systemd[1]: Starting systemd-udev-trigger.service...
Feb  8 23:41:50.978398 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  8 23:41:50.978410 kernel: loop: module loaded
Feb  8 23:41:50.978422 systemd[1]: Mounted dev-hugepages.mount.
Feb  8 23:41:50.978434 systemd[1]: Mounted dev-mqueue.mount.
Feb  8 23:41:50.978446 systemd[1]: Mounted media.mount.
Feb  8 23:41:50.978458 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  8 23:41:50.978470 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  8 23:41:50.978482 systemd[1]: Mounted tmp.mount.
Feb  8 23:41:50.978496 systemd[1]: Finished kmod-static-nodes.service.
Feb  8 23:41:50.978508 kernel: audit: type=1130 audit(1707435710.892:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978522 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  8 23:41:50.978535 systemd[1]: Finished modprobe@configfs.service.
Feb  8 23:41:50.978548 kernel: audit: type=1130 audit(1707435710.899:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  8 23:41:50.978572 kernel: audit: type=1131 audit(1707435710.899:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978584 systemd[1]: Finished modprobe@dm_mod.service.
Feb  8 23:41:50.978598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  8 23:41:50.978610 systemd[1]: Finished modprobe@drm.service.
Feb  8 23:41:50.978623 kernel: audit: type=1130 audit(1707435710.911:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978663 kernel: audit: type=1131 audit(1707435710.911:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  8 23:41:50.978691 kernel: audit: type=1130 audit(1707435710.922:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978703 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  8 23:41:50.978715 kernel: audit: type=1131 audit(1707435710.922:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978729 kernel: fuse: init (API version 7.34)
Feb  8 23:41:50.978741 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  8 23:41:50.978754 systemd[1]: Finished modprobe@loop.service.
Feb  8 23:41:50.978767 kernel: audit: type=1130 audit(1707435710.935:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.978781 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  8 23:41:50.978793 systemd[1]: Finished modprobe@fuse.service.
Feb  8 23:41:50.978805 systemd[1]: Finished systemd-network-generator.service.
Feb  8 23:41:50.978819 systemd[1]: Finished systemd-remount-fs.service.
Feb  8 23:41:50.978836 systemd[1]: Reached target network-pre.target.
Feb  8 23:41:50.978848 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  8 23:41:50.978861 systemd[1]: Mounting sys-kernel-config.mount...
Feb  8 23:41:50.978877 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  8 23:41:50.978893 systemd-journald[950]: Journal started
Feb  8 23:41:50.978942 systemd-journald[950]: Runtime Journal (/run/log/journal/a594cb76176b4c88b655d1e2558ded20) is 4.9M, max 39.5M, 34.5M free.
Feb  8 23:41:50.731000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  8 23:41:50.731000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb  8 23:41:50.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:50.975000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  8 23:41:50.975000 audit[950]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff05533fe0 a2=4000 a3=7fff0553407c items=0 ppid=1 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:41:50.975000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  8 23:41:50.997576 systemd[1]: Starting systemd-hwdb-update.service...
Feb  8 23:41:50.999819 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  8 23:41:50.999860 systemd[1]: Starting systemd-random-seed.service...
Feb  8 23:41:51.002426 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  8 23:41:51.007653 systemd[1]: Started systemd-journald.service.
Feb  8 23:41:51.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.010964 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  8 23:41:51.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.012265 systemd[1]: Finished systemd-modules-load.service.
Feb  8 23:41:51.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.012945 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  8 23:41:51.013582 systemd[1]: Mounted sys-kernel-config.mount.
Feb  8 23:41:51.015592 systemd[1]: Starting systemd-journal-flush.service...
Feb  8 23:41:51.017543 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:41:51.020293 systemd[1]: Starting systemd-sysusers.service...
Feb  8 23:41:51.046692 systemd[1]: Finished systemd-udev-trigger.service.
Feb  8 23:41:51.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.060201 systemd-journald[950]: Time spent on flushing to /var/log/journal/a594cb76176b4c88b655d1e2558ded20 is 49.541ms for 1062 entries.
Feb  8 23:41:51.060201 systemd-journald[950]: System Journal (/var/log/journal/a594cb76176b4c88b655d1e2558ded20) is 8.0M, max 584.8M, 576.8M free.
Feb  8 23:41:51.205462 systemd-journald[950]: Received client request to flush runtime journal.
Feb  8 23:41:51.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.048838 systemd[1]: Starting systemd-udev-settle.service...
Feb  8 23:41:51.109171 systemd[1]: Finished systemd-random-seed.service.
Feb  8 23:41:51.206248 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb  8 23:41:51.110846 systemd[1]: Reached target first-boot-complete.target.
Feb  8 23:41:51.163534 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:41:51.192979 systemd[1]: Finished systemd-sysusers.service.
Feb  8 23:41:51.197877 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  8 23:41:51.206731 systemd[1]: Finished systemd-journal-flush.service.
Feb  8 23:41:51.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.245452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  8 23:41:51.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.990856 systemd[1]: Finished systemd-hwdb-update.service.
Feb  8 23:41:51.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:51.994940 systemd[1]: Starting systemd-udevd.service...
Feb  8 23:41:52.043012 systemd-udevd[1016]: Using default interface naming scheme 'v252'.
Feb  8 23:41:52.493875 systemd[1]: Started systemd-udevd.service.
Feb  8 23:41:52.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:52.500474 systemd[1]: Starting systemd-networkd.service...
Feb  8 23:41:52.538402 systemd[1]: Starting systemd-userdbd.service...
Feb  8 23:41:52.592843 systemd[1]: Found device dev-ttyS0.device.
Feb  8 23:41:52.623256 systemd[1]: Started systemd-userdbd.service.
Feb  8 23:41:52.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:52.679692 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb  8 23:41:52.696987 kernel: ACPI: button: Power Button [PWRF]
Feb  8 23:41:52.710992 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  8 23:41:52.722000 audit[1025]: AVC avc:  denied  { confidentiality } for  pid=1025 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  8 23:41:52.734977 systemd-networkd[1022]: lo: Link UP
Feb  8 23:41:52.734988 systemd-networkd[1022]: lo: Gained carrier
Feb  8 23:41:52.735480 systemd-networkd[1022]: Enumeration completed
Feb  8 23:41:52.735639 systemd[1]: Started systemd-networkd.service.
Feb  8 23:41:52.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:52.736327 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  8 23:41:52.738124 systemd-networkd[1022]: eth0: Link UP
Feb  8 23:41:52.738135 systemd-networkd[1022]: eth0: Gained carrier
Feb  8 23:41:52.748880 systemd-networkd[1022]: eth0: DHCPv4 address 172.24.4.77/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb  8 23:41:52.722000 audit[1025]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5647fb4a60b0 a1=32194 a2=7f9e67359bc5 a3=5 items=108 ppid=1016 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:41:52.722000 audit: CWD cwd="/"
Feb  8 23:41:52.722000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=1 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=2 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=3 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=4 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=5 name=(null) inode=14370 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=6 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=7 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=8 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=9 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=10 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=11 name=(null) inode=14373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=12 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=13 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=14 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=15 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=16 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=17 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=18 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=19 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=20 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=21 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=22 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=23 name=(null) inode=14379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=24 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=25 name=(null) inode=14380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=26 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=27 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=28 name=(null) inode=14377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=29 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=30 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=31 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=32 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=33 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=34 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=35 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=36 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=37 name=(null) inode=14386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=38 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=39 name=(null) inode=14387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=40 name=(null) inode=14383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=41 name=(null) inode=14388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=42 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=43 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=44 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=45 name=(null) inode=14390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=46 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=47 name=(null) inode=14391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=48 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=49 name=(null) inode=14392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=50 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=51 name=(null) inode=14393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=52 name=(null) inode=14389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=53 name=(null) inode=14394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=55 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=56 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=57 name=(null) inode=14396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=58 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=59 name=(null) inode=14397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=60 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=61 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=62 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=63 name=(null) inode=14399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=64 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=65 name=(null) inode=14400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=66 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=67 name=(null) inode=14401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=68 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=69 name=(null) inode=14402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=70 name=(null) inode=14398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=71 name=(null) inode=14403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=72 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=73 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=74 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=75 name=(null) inode=14405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=76 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=77 name=(null) inode=14406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=78 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=79 name=(null) inode=14407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=80 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=81 name=(null) inode=14408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=82 name=(null) inode=14404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=83 name=(null) inode=14409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=84 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=85 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=86 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=87 name=(null) inode=14411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=88 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=89 name=(null) inode=14412 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=90 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=91 name=(null) inode=14413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=92 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=93 name=(null) inode=14414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=94 name=(null) inode=14410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=95 name=(null) inode=14415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=96 name=(null) inode=14395 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=97 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=98 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=99 name=(null) inode=14417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=100 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=101 name=(null) inode=14418 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=102 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=103 name=(null) inode=14419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=104 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=105 name=(null) inode=14420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=106 name=(null) inode=14416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PATH item=107 name=(null) inode=14421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  8 23:41:52.722000 audit: PROCTITLE proctitle="(udev-worker)"
Feb  8 23:41:52.767652 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  8 23:41:52.781650 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb  8 23:41:52.786654 kernel: mousedev: PS/2 mouse device common for all mice
Feb  8 23:41:52.839285 systemd[1]: Finished systemd-udev-settle.service.
Feb  8 23:41:52.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:52.842385 systemd[1]: Starting lvm2-activation-early.service...
Feb  8 23:41:52.900543 lvm[1046]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  8 23:41:52.946114 systemd[1]: Finished lvm2-activation-early.service.
Feb  8 23:41:52.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:52.947605 systemd[1]: Reached target cryptsetup.target.
Feb  8 23:41:52.951545 systemd[1]: Starting lvm2-activation.service...
Feb  8 23:41:52.964236 lvm[1048]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  8 23:41:53.007967 systemd[1]: Finished lvm2-activation.service.
Feb  8 23:41:53.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.009435 systemd[1]: Reached target local-fs-pre.target.
Feb  8 23:41:53.010542 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  8 23:41:53.010600 systemd[1]: Reached target local-fs.target.
Feb  8 23:41:53.011910 systemd[1]: Reached target machines.target.
Feb  8 23:41:53.016000 systemd[1]: Starting ldconfig.service...
Feb  8 23:41:53.019057 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  8 23:41:53.019261 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:41:53.022602 systemd[1]: Starting systemd-boot-update.service...
Feb  8 23:41:53.026453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  8 23:41:53.031312 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  8 23:41:53.033554 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  8 23:41:53.034676 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  8 23:41:53.041854 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  8 23:41:53.062852 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl)
Feb  8 23:41:53.064202 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  8 23:41:53.075612 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  8 23:41:53.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.077190 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  8 23:41:53.081381 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  8 23:41:53.086261 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb  8 23:41:53.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.485278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  8 23:41:53.487887 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  8 23:41:53.617768 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31)
Feb  8 23:41:53.617768 systemd-fsck[1060]: /dev/vda1: 789 files, 115332/258078 clusters
Feb  8 23:41:53.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.625785 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  8 23:41:53.630456 systemd[1]: Mounting boot.mount...
Feb  8 23:41:53.658756 systemd[1]: Mounted boot.mount.
Feb  8 23:41:53.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.687139 systemd[1]: Finished systemd-boot-update.service.
Feb  8 23:41:53.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.763464 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  8 23:41:53.766311 systemd[1]: Starting audit-rules.service...
Feb  8 23:41:53.768554 systemd[1]: Starting clean-ca-certificates.service...
Feb  8 23:41:53.771171 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  8 23:41:53.774320 systemd[1]: Starting systemd-resolved.service...
Feb  8 23:41:53.782835 systemd[1]: Starting systemd-timesyncd.service...
Feb  8 23:41:53.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.787285 systemd[1]: Starting systemd-update-utmp.service...
Feb  8 23:41:53.788419 systemd[1]: Finished clean-ca-certificates.service.
Feb  8 23:41:53.789662 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  8 23:41:53.815000 audit[1075]: SYSTEM_BOOT pid=1075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.821886 systemd[1]: Finished systemd-update-utmp.service.
Feb  8 23:41:53.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.929545 systemd[1]: Started systemd-timesyncd.service.
Feb  8 23:41:53.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  8 23:41:53.930284 systemd[1]: Reached target time-set.target.
Feb  8 23:41:53.968522 systemd-resolved[1071]: Positive Trust Anchors:
Feb  8 23:41:53.968563 systemd-resolved[1071]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  8 23:41:53.968723 systemd-resolved[1071]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  8 23:41:54.633043 systemd-timesyncd[1073]: Contacted time server 51.15.182.163:123 (0.flatcar.pool.ntp.org).
Feb  8 23:41:54.634197 systemd-timesyncd[1073]: Initial clock synchronization to Thu 2024-02-08 23:41:54.632694 UTC.
Feb  8 23:41:54.680439 augenrules[1091]: No rules
Feb  8 23:41:54.679000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  8 23:41:54.679000 audit[1091]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc01729d0 a2=420 a3=0 items=0 ppid=1068 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  8 23:41:54.679000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  8 23:41:54.682518 systemd[1]: Finished audit-rules.service.
Feb  8 23:41:54.692984 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  8 23:41:54.739611 systemd-resolved[1071]: Using system hostname 'ci-3510-3-2-4-0b571dfe90.novalocal'.
Feb  8 23:41:54.743760 systemd[1]: Started systemd-resolved.service.
Feb  8 23:41:54.745112 systemd[1]: Reached target network.target.
Feb  8 23:41:54.746146 systemd[1]: Reached target nss-lookup.target.
Feb  8 23:41:54.758115 systemd-networkd[1022]: eth0: Gained IPv6LL
Feb  8 23:41:56.499511 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  8 23:41:56.638354 systemd[1]: Finished ldconfig.service.
Feb  8 23:41:56.642222 systemd[1]: Starting systemd-update-done.service...
Feb  8 23:41:56.662414 systemd[1]: Finished systemd-update-done.service.
Feb  8 23:41:56.663578 systemd[1]: Reached target sysinit.target.
Feb  8 23:41:56.664664 systemd[1]: Started motdgen.path.
Feb  8 23:41:56.665584 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  8 23:41:56.666918 systemd[1]: Started logrotate.timer.
Feb  8 23:41:56.667951 systemd[1]: Started mdadm.timer.
Feb  8 23:41:56.668750 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  8 23:41:56.669986 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  8 23:41:56.670064 systemd[1]: Reached target paths.target.
Feb  8 23:41:56.671126 systemd[1]: Reached target timers.target.
Feb  8 23:41:56.673959 systemd[1]: Listening on dbus.socket.
Feb  8 23:41:56.678389 systemd[1]: Starting docker.socket...
Feb  8 23:41:56.691720 systemd[1]: Listening on sshd.socket.
Feb  8 23:41:56.693131 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:41:56.694213 systemd[1]: Listening on docker.socket.
Feb  8 23:41:56.695349 systemd[1]: Reached target sockets.target.
Feb  8 23:41:56.696411 systemd[1]: Reached target basic.target.
Feb  8 23:41:56.697767 systemd[1]: System is tainted: cgroupsv1
Feb  8 23:41:56.697915 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  8 23:41:56.697966 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  8 23:41:56.700371 systemd[1]: Starting containerd.service...
Feb  8 23:41:56.703741 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb  8 23:41:56.707442 systemd[1]: Starting dbus.service...
Feb  8 23:41:56.714012 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  8 23:41:56.720709 systemd[1]: Starting extend-filesystems.service...
Feb  8 23:41:56.722939 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  8 23:41:56.726033 systemd[1]: Starting motdgen.service...
Feb  8 23:41:56.730253 systemd[1]: Starting prepare-cni-plugins.service...
Feb  8 23:41:56.734577 systemd[1]: Starting prepare-critools.service...
Feb  8 23:41:56.747310 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  8 23:41:56.751163 systemd[1]: Starting sshd-keygen.service...
Feb  8 23:41:56.757326 systemd[1]: Starting systemd-logind.service...
Feb  8 23:41:56.758397 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  8 23:41:56.758510 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  8 23:41:56.759899 systemd[1]: Starting update-engine.service...
Feb  8 23:41:56.761392 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  8 23:41:56.769243 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  8 23:41:56.769605 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  8 23:41:56.853321 jq[1120]: true
Feb  8 23:41:56.858262 jq[1107]: false
Feb  8 23:41:56.865317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  8 23:41:56.865658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  8 23:41:56.889894 extend-filesystems[1110]: Found vda
Feb  8 23:41:56.894889 coreos-metadata[1104]: Feb 08 23:41:56.890 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb  8 23:41:56.895909 extend-filesystems[1110]: Found vda1
Feb  8 23:41:56.896586 extend-filesystems[1110]: Found vda2
Feb  8 23:41:56.897285 extend-filesystems[1110]: Found vda3
Feb  8 23:41:56.897521 systemd[1]: motdgen.service: Deactivated successfully.
Feb  8 23:41:56.897844 systemd[1]: Finished motdgen.service.
Feb  8 23:41:56.898025 extend-filesystems[1110]: Found usr
Feb  8 23:41:56.898870 extend-filesystems[1110]: Found vda4
Feb  8 23:41:56.898870 extend-filesystems[1110]: Found vda6
Feb  8 23:41:56.898870 extend-filesystems[1110]: Found vda7
Feb  8 23:41:56.898870 extend-filesystems[1110]: Found vda9
Feb  8 23:41:56.898870 extend-filesystems[1110]: Checking size of /dev/vda9
Feb  8 23:41:56.922037 env[1133]: time="2024-02-08T23:41:56.921986320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  8 23:41:56.924321 jq[1139]: true
Feb  8 23:41:56.963396 systemd-logind[1118]: Watching system buttons on /dev/input/event1 (Power Button)
Feb  8 23:41:56.963424 systemd-logind[1118]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb  8 23:41:56.963645 systemd-logind[1118]: New seat seat0.
Feb  8 23:41:56.976771 env[1133]: time="2024-02-08T23:41:56.976730773Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  8 23:41:56.977140 env[1133]: time="2024-02-08T23:41:56.977118781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.979806 env[1133]: time="2024-02-08T23:41:56.979640350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:41:56.979806 env[1133]: time="2024-02-08T23:41:56.979692267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980010564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980039489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980056881Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980070667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980149956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980405114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980543985Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980562710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980615479Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  8 23:41:56.980798 env[1133]: time="2024-02-08T23:41:56.980630637Z" level=info msg="metadata content store policy set" policy=shared
Feb  8 23:41:56.999289 tar[1123]: ./
Feb  8 23:41:56.999289 tar[1123]: ./macvlan
Feb  8 23:41:57.030117 tar[1125]: crictl
Feb  8 23:41:57.061990 extend-filesystems[1110]: Resized partition /dev/vda9
Feb  8 23:41:57.131298 coreos-metadata[1104]: Feb 08 23:41:57.100 INFO Fetch successful
Feb  8 23:41:57.131298 coreos-metadata[1104]: Feb 08 23:41:57.100 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb  8 23:41:57.131298 coreos-metadata[1104]: Feb 08 23:41:57.116 INFO Fetch successful
Feb  8 23:41:57.209032 tar[1123]: ./static
Feb  8 23:41:57.261604 tar[1123]: ./vlan
Feb  8 23:41:57.267888 extend-filesystems[1174]: resize2fs 1.46.5 (30-Dec-2021)
Feb  8 23:41:57.341570 tar[1123]: ./portmap
Feb  8 23:41:57.600798 tar[1123]: ./host-local
Feb  8 23:41:57.651267 tar[1123]: ./vrf
Feb  8 23:41:57.688594 tar[1123]: ./bridge
Feb  8 23:41:57.748629 tar[1123]: ./tuning
Feb  8 23:41:57.813167 tar[1123]: ./firewall
Feb  8 23:41:57.844797 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Feb  8 23:41:57.853543 unknown[1104]: wrote ssh authorized keys file for user: core
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901275827Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901365576Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901389090Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901459953Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901480892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901502753Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901519755Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901537177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901553658Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901571081Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901590708Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901606868Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  8 23:41:57.901815 env[1133]: time="2024-02-08T23:41:57.901799339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  8 23:41:57.902263 env[1133]: time="2024-02-08T23:41:57.901902402Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  8 23:41:57.902372 env[1133]: time="2024-02-08T23:41:57.902289138Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  8 23:41:57.902372 env[1133]: time="2024-02-08T23:41:57.902328121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902443 env[1133]: time="2024-02-08T23:41:57.902380309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  8 23:41:57.902443 env[1133]: time="2024-02-08T23:41:57.902433058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902498 env[1133]: time="2024-02-08T23:41:57.902448557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902498 env[1133]: time="2024-02-08T23:41:57.902465128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902498 env[1133]: time="2024-02-08T23:41:57.902478443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902498 env[1133]: time="2024-02-08T23:41:57.902495936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902597 env[1133]: time="2024-02-08T23:41:57.902512256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902597 env[1133]: time="2024-02-08T23:41:57.902527104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902597 env[1133]: time="2024-02-08T23:41:57.902540289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902597 env[1133]: time="2024-02-08T23:41:57.902557511Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  8 23:41:57.902800 env[1133]: time="2024-02-08T23:41:57.902736016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902800 env[1133]: time="2024-02-08T23:41:57.902762105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902800 env[1133]: time="2024-02-08T23:41:57.902790668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.902883 env[1133]: time="2024-02-08T23:41:57.902806268Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  8 23:41:57.902883 env[1133]: time="2024-02-08T23:41:57.902825494Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  8 23:41:57.902883 env[1133]: time="2024-02-08T23:41:57.902840051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  8 23:41:57.902883 env[1133]: time="2024-02-08T23:41:57.902862182Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  8 23:41:57.902990 env[1133]: time="2024-02-08T23:41:57.902927715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  8 23:41:57.903228 env[1133]: time="2024-02-08T23:41:57.903155392Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.903231705Z" level=info msg="Connect containerd service"
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.903272873Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.904068274Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.904447335Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.904489645Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  8 23:41:57.914124 env[1133]: time="2024-02-08T23:41:57.906315359Z" level=info msg="containerd successfully booted in 0.985844s"
Feb  8 23:41:57.904657 systemd[1]: Started containerd.service.
Feb  8 23:41:58.416576 update_engine[1119]: I0208 23:41:58.051350  1119 main.cc:92] Flatcar Update Engine starting
Feb  8 23:41:58.416576 update_engine[1119]: I0208 23:41:58.123618  1119 update_check_scheduler.cc:74] Next update check in 5m55s
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924218556Z" level=info msg="Start subscribing containerd event"
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924317101Z" level=info msg="Start recovering state"
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924412009Z" level=info msg="Start event monitor"
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924445302Z" level=info msg="Start snapshots syncer"
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924459869Z" level=info msg="Start cni network conf syncer for default"
Feb  8 23:41:58.417221 env[1133]: time="2024-02-08T23:41:57.924471411Z" level=info msg="Start streaming server"
Feb  8 23:41:57.992311 dbus-daemon[1106]: [system] SELinux support is enabled
Feb  8 23:41:57.992699 systemd[1]: Started dbus.service.
Feb  8 23:41:58.012009 dbus-daemon[1106]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb  8 23:41:58.433108 tar[1123]: ./host-device
Feb  8 23:41:57.999179 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  8 23:41:57.999324 systemd[1]: Reached target system-config.target.
Feb  8 23:41:58.002661 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  8 23:41:58.002704 systemd[1]: Reached target user-config.target.
Feb  8 23:41:58.010096 systemd[1]: Started systemd-logind.service.
Feb  8 23:41:58.124061 systemd[1]: Started update-engine.service.
Feb  8 23:41:58.129482 systemd[1]: Started locksmithd.service.
Feb  8 23:41:58.442815 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Feb  8 23:41:58.563550 locksmithd[1178]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  8 23:41:59.541189 extend-filesystems[1174]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb  8 23:41:59.541189 extend-filesystems[1174]: old_desc_blocks = 1, new_desc_blocks = 3
Feb  8 23:41:59.541189 extend-filesystems[1174]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Feb  8 23:41:59.569001 extend-filesystems[1110]: Resized filesystem in /dev/vda9
Feb  8 23:41:59.571887 bash[1170]: Updated "/home/core/.ssh/authorized_keys"
Feb  8 23:41:59.543617 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  8 23:41:59.547045 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  8 23:41:59.547592 systemd[1]: Finished extend-filesystems.service.
Feb  8 23:41:59.580908 sshd_keygen[1130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  8 23:41:59.584244 update-ssh-keys[1176]: Updated "/home/core/.ssh/authorized_keys"
Feb  8 23:41:59.585415 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb  8 23:41:59.651541 tar[1123]: ./sbr
Feb  8 23:41:59.668617 systemd[1]: Finished sshd-keygen.service.
Feb  8 23:41:59.671400 systemd[1]: Starting issuegen.service...
Feb  8 23:41:59.682285 systemd[1]: issuegen.service: Deactivated successfully.
Feb  8 23:41:59.682594 systemd[1]: Finished issuegen.service.
Feb  8 23:41:59.685376 systemd[1]: Starting systemd-user-sessions.service...
Feb  8 23:41:59.698036 systemd[1]: Finished systemd-user-sessions.service.
Feb  8 23:41:59.700644 systemd[1]: Started getty@tty1.service.
Feb  8 23:41:59.703317 systemd[1]: Started serial-getty@ttyS0.service.
Feb  8 23:41:59.704062 systemd[1]: Reached target getty.target.
Feb  8 23:41:59.722656 tar[1123]: ./loopback
Feb  8 23:41:59.757066 tar[1123]: ./dhcp
Feb  8 23:41:59.788282 systemd[1]: Finished prepare-critools.service.
Feb  8 23:41:59.861817 tar[1123]: ./ptp
Feb  8 23:41:59.901995 tar[1123]: ./ipvlan
Feb  8 23:41:59.941024 tar[1123]: ./bandwidth
Feb  8 23:42:00.068154 systemd[1]: Finished prepare-cni-plugins.service.
Feb  8 23:42:00.070317 systemd[1]: Reached target multi-user.target.
Feb  8 23:42:00.076769 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  8 23:42:00.095387 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  8 23:42:00.096310 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  8 23:42:00.105410 systemd[1]: Startup finished in 12.586s (kernel) + 14.210s (userspace) = 26.796s.
Feb  8 23:42:05.997864 systemd[1]: Created slice system-sshd.slice.
Feb  8 23:42:06.001494 systemd[1]: Started sshd@0-172.24.4.77:22-172.24.4.1:42578.service.
Feb  8 23:42:07.266427 sshd[1215]: Accepted publickey for core from 172.24.4.1 port 42578 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb  8 23:42:07.292310 sshd[1215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:07.322707 systemd[1]: Created slice user-500.slice.
Feb  8 23:42:07.323840 systemd[1]: Starting user-runtime-dir@500.service...
Feb  8 23:42:07.333807 systemd-logind[1118]: New session 1 of user core.
Feb  8 23:42:07.342427 systemd[1]: Finished user-runtime-dir@500.service.
Feb  8 23:42:07.345133 systemd[1]: Starting user@500.service...
Feb  8 23:42:07.360628 (systemd)[1220]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:07.545908 systemd[1220]: Queued start job for default target default.target.
Feb  8 23:42:07.547146 systemd[1220]: Reached target paths.target.
Feb  8 23:42:07.547401 systemd[1220]: Reached target sockets.target.
Feb  8 23:42:07.547569 systemd[1220]: Reached target timers.target.
Feb  8 23:42:07.547766 systemd[1220]: Reached target basic.target.
Feb  8 23:42:07.548268 systemd[1]: Started user@500.service.
Feb  8 23:42:07.550576 systemd[1]: Started session-1.scope.
Feb  8 23:42:07.551177 systemd[1220]: Reached target default.target.
Feb  8 23:42:07.551620 systemd[1220]: Startup finished in 177ms.
Feb  8 23:42:07.982814 systemd[1]: Started sshd@1-172.24.4.77:22-172.24.4.1:42594.service.
Feb  8 23:42:09.083030 sshd[1229]: Accepted publickey for core from 172.24.4.1 port 42594 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb  8 23:42:09.086466 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:09.099959 systemd-logind[1118]: New session 2 of user core.
Feb  8 23:42:09.101329 systemd[1]: Started session-2.scope.
Feb  8 23:42:09.675256 sshd[1229]: pam_unix(sshd:session): session closed for user core
Feb  8 23:42:09.680446 systemd[1]: Started sshd@2-172.24.4.77:22-172.24.4.1:42606.service.
Feb  8 23:42:09.686368 systemd[1]: sshd@1-172.24.4.77:22-172.24.4.1:42594.service: Deactivated successfully.
Feb  8 23:42:09.689643 systemd[1]: session-2.scope: Deactivated successfully.
Feb  8 23:42:09.691168 systemd-logind[1118]: Session 2 logged out. Waiting for processes to exit.
Feb  8 23:42:09.693729 systemd-logind[1118]: Removed session 2.
Feb  8 23:42:11.088770 sshd[1234]: Accepted publickey for core from 172.24.4.1 port 42606 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb  8 23:42:11.092643 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:11.103093 systemd-logind[1118]: New session 3 of user core.
Feb  8 23:42:11.103864 systemd[1]: Started session-3.scope.
Feb  8 23:42:11.740098 sshd[1234]: pam_unix(sshd:session): session closed for user core
Feb  8 23:42:11.746154 systemd[1]: Started sshd@3-172.24.4.77:22-172.24.4.1:42618.service.
Feb  8 23:42:11.747321 systemd[1]: sshd@2-172.24.4.77:22-172.24.4.1:42606.service: Deactivated successfully.
Feb  8 23:42:11.749891 systemd-logind[1118]: Session 3 logged out. Waiting for processes to exit.
Feb  8 23:42:11.750154 systemd[1]: session-3.scope: Deactivated successfully.
Feb  8 23:42:11.754112 systemd-logind[1118]: Removed session 3.
Feb  8 23:42:13.013835 sshd[1242]: Accepted publickey for core from 172.24.4.1 port 42618 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb  8 23:42:13.017456 sshd[1242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:13.028246 systemd-logind[1118]: New session 4 of user core.
Feb  8 23:42:13.028681 systemd[1]: Started session-4.scope.
Feb  8 23:42:13.676178 sshd[1242]: pam_unix(sshd:session): session closed for user core
Feb  8 23:42:13.679060 systemd[1]: Started sshd@4-172.24.4.77:22-172.24.4.1:42632.service.
Feb  8 23:42:13.687582 systemd[1]: sshd@3-172.24.4.77:22-172.24.4.1:42618.service: Deactivated successfully.
Feb  8 23:42:13.693698 systemd[1]: session-4.scope: Deactivated successfully.
Feb  8 23:42:13.694894 systemd-logind[1118]: Session 4 logged out. Waiting for processes to exit.
Feb  8 23:42:13.697662 systemd-logind[1118]: Removed session 4.
Feb  8 23:42:14.908527 sshd[1248]: Accepted publickey for core from 172.24.4.1 port 42632 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb  8 23:42:14.911485 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  8 23:42:14.923596 systemd-logind[1118]: New session 5 of user core.
Feb  8 23:42:14.924418 systemd[1]: Started session-5.scope.
Feb  8 23:42:15.359830 sudo[1254]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  8 23:42:15.360315 sudo[1254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  8 23:42:16.019998 systemd[1]: Reloading.
Feb  8 23:42:16.104959 /usr/lib/systemd/system-generators/torcx-generator[1283]: time="2024-02-08T23:42:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:42:16.104991 /usr/lib/systemd/system-generators/torcx-generator[1283]: time="2024-02-08T23:42:16Z" level=info msg="torcx already run"
Feb  8 23:42:16.225972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:42:16.225997 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:42:16.250171 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:42:16.339336 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  8 23:42:16.478483 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  8 23:42:16.480109 systemd[1]: Reached target network-online.target.
Feb  8 23:42:16.484213 systemd[1]: Started kubelet.service.
Feb  8 23:42:16.524377 systemd[1]: Starting coreos-metadata.service...
Feb  8 23:42:16.600426 coreos-metadata[1345]: Feb 08 23:42:16.600 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb  8 23:42:16.641413 kubelet[1337]: E0208 23:42:16.641355    1337 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  8 23:42:16.643625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  8 23:42:16.643789 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  8 23:42:16.810771 coreos-metadata[1345]: Feb 08 23:42:16.810 INFO Fetch successful
Feb  8 23:42:16.810771 coreos-metadata[1345]: Feb 08 23:42:16.810 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Feb  8 23:42:16.826895 coreos-metadata[1345]: Feb 08 23:42:16.826 INFO Fetch successful
Feb  8 23:42:16.826895 coreos-metadata[1345]: Feb 08 23:42:16.826 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Feb  8 23:42:16.843434 coreos-metadata[1345]: Feb 08 23:42:16.843 INFO Fetch successful
Feb  8 23:42:16.843434 coreos-metadata[1345]: Feb 08 23:42:16.843 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Feb  8 23:42:16.861045 coreos-metadata[1345]: Feb 08 23:42:16.860 INFO Fetch successful
Feb  8 23:42:16.861045 coreos-metadata[1345]: Feb 08 23:42:16.860 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Feb  8 23:42:16.875982 coreos-metadata[1345]: Feb 08 23:42:16.875 INFO Fetch successful
Feb  8 23:42:16.893402 systemd[1]: Finished coreos-metadata.service.
Feb  8 23:42:17.546698 systemd[1]: Stopped kubelet.service.
Feb  8 23:42:17.584748 systemd[1]: Reloading.
Feb  8 23:42:17.741764 /usr/lib/systemd/system-generators/torcx-generator[1419]: time="2024-02-08T23:42:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  8 23:42:17.743873 /usr/lib/systemd/system-generators/torcx-generator[1419]: time="2024-02-08T23:42:17Z" level=info msg="torcx already run"
Feb  8 23:42:17.801315 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  8 23:42:17.801336 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  8 23:42:17.825126 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  8 23:42:17.923265 systemd[1]: Started kubelet.service.
Feb  8 23:42:17.996037 kubelet[1457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  8 23:42:17.996424 kubelet[1457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:42:17.996551 kubelet[1457]: I0208 23:42:17.996525    1457 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  8 23:42:17.998547 kubelet[1457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  8 23:42:17.998631 kubelet[1457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  8 23:42:18.616184 kubelet[1457]: I0208 23:42:18.616070    1457 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb  8 23:42:18.616184 kubelet[1457]: I0208 23:42:18.616100    1457 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  8 23:42:18.616711 kubelet[1457]: I0208 23:42:18.616343    1457 server.go:836] "Client rotation is on, will bootstrap in background"
Feb  8 23:42:18.620663 kubelet[1457]: I0208 23:42:18.620617    1457 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  8 23:42:18.621475 kubelet[1457]: I0208 23:42:18.621423    1457 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  8 23:42:18.621837 kubelet[1457]: I0208 23:42:18.621790    1457 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  8 23:42:18.621979 kubelet[1457]: I0208 23:42:18.621869    1457 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb  8 23:42:18.621979 kubelet[1457]: I0208 23:42:18.621890    1457 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb  8 23:42:18.621979 kubelet[1457]: I0208 23:42:18.621903    1457 container_manager_linux.go:308] "Creating device plugin manager"
Feb  8 23:42:18.622574 kubelet[1457]: I0208 23:42:18.621992    1457 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:42:18.630161 kubelet[1457]: I0208 23:42:18.630077    1457 kubelet.go:398] "Attempting to sync node with API server"
Feb  8 23:42:18.630161 kubelet[1457]: I0208 23:42:18.630105    1457 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  8 23:42:18.630161 kubelet[1457]: I0208 23:42:18.630128    1457 kubelet.go:297] "Adding apiserver pod source"
Feb  8 23:42:18.630161 kubelet[1457]: I0208 23:42:18.630142    1457 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  8 23:42:18.630672 kubelet[1457]: E0208 23:42:18.630633    1457 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:18.630672 kubelet[1457]: E0208 23:42:18.630671    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:18.631212 kubelet[1457]: I0208 23:42:18.631170    1457 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  8 23:42:18.631475 kubelet[1457]: W0208 23:42:18.631420    1457 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  8 23:42:18.631886 kubelet[1457]: I0208 23:42:18.631852    1457 server.go:1186] "Started kubelet"
Feb  8 23:42:18.635200 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  8 23:42:18.635309 kubelet[1457]: I0208 23:42:18.635270    1457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  8 23:42:18.636852 kubelet[1457]: E0208 23:42:18.636735    1457 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  8 23:42:18.637043 kubelet[1457]: E0208 23:42:18.637017    1457 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  8 23:42:18.641078 kubelet[1457]: I0208 23:42:18.641012    1457 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb  8 23:42:18.643359 kubelet[1457]: I0208 23:42:18.643304    1457 server.go:451] "Adding debug handlers to kubelet server"
Feb  8 23:42:18.648413 kubelet[1457]: I0208 23:42:18.648366    1457 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb  8 23:42:18.649410 kubelet[1457]: I0208 23:42:18.649368    1457 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  8 23:42:18.670245 kubelet[1457]: W0208 23:42:18.670204    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:18.670245 kubelet[1457]: E0208 23:42:18.670251    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:18.679707 kubelet[1457]: E0208 23:42:18.679565    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c205f503f1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 631832561, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 631832561, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.680229 kubelet[1457]: E0208 23:42:18.680207    1457 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:18.680399 kubelet[1457]: W0208 23:42:18.680381    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:18.680506 kubelet[1457]: E0208 23:42:18.680491    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:18.680708 kubelet[1457]: W0208 23:42:18.680689    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:18.680839 kubelet[1457]: E0208 23:42:18.680824    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:18.700255 kubelet[1457]: E0208 23:42:18.700130    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20643a54c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 636985676, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 636985676, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.720765 kubelet[1457]: I0208 23:42:18.720737    1457 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  8 23:42:18.720765 kubelet[1457]: I0208 23:42:18.720758    1457 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  8 23:42:18.720993 kubelet[1457]: I0208 23:42:18.720839    1457 state_mem.go:36] "Initialized new in-memory state store"
Feb  8 23:42:18.723566 kubelet[1457]: E0208 23:42:18.723466    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.724764 kubelet[1457]: E0208 23:42:18.724700    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.725235 kubelet[1457]: I0208 23:42:18.725218    1457 policy_none.go:49] "None policy: Start"
Feb  8 23:42:18.725896 kubelet[1457]: I0208 23:42:18.725879    1457 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  8 23:42:18.725972 kubelet[1457]: I0208 23:42:18.725904    1457 state_mem.go:35] "Initializing new in-memory state store"
Feb  8 23:42:18.726821 kubelet[1457]: E0208 23:42:18.726722    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.736710 kubelet[1457]: I0208 23:42:18.736678    1457 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  8 23:42:18.737029 kubelet[1457]: I0208 23:42:18.737011    1457 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  8 23:42:18.743911 kubelet[1457]: E0208 23:42:18.743888    1457 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.77\" not found"
Feb  8 23:42:18.744365 kubelet[1457]: E0208 23:42:18.744277    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20c5b3ff3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 739195891, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 739195891, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.750677 kubelet[1457]: I0208 23:42:18.750596    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:18.753355 kubelet[1457]: E0208 23:42:18.753329    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:18.755005 kubelet[1457]: E0208 23:42:18.754903    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 750477650, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.757019 kubelet[1457]: E0208 23:42:18.756926    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 750515962, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.759444 kubelet[1457]: E0208 23:42:18.759287    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 750521472, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.855916 kubelet[1457]: I0208 23:42:18.855869    1457 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb  8 23:42:18.886676 kubelet[1457]: I0208 23:42:18.881646    1457 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb  8 23:42:18.886919 kubelet[1457]: I0208 23:42:18.886830    1457 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb  8 23:42:18.887049 kubelet[1457]: I0208 23:42:18.887002    1457 kubelet.go:2113] "Starting kubelet main sync loop"
Feb  8 23:42:18.887141 kubelet[1457]: E0208 23:42:18.887058    1457 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb  8 23:42:18.888491 kubelet[1457]: E0208 23:42:18.882975    1457 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:18.888975 kubelet[1457]: W0208 23:42:18.888937    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:18.888975 kubelet[1457]: E0208 23:42:18.888969    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:18.955358 kubelet[1457]: I0208 23:42:18.955300    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:18.957333 kubelet[1457]: E0208 23:42:18.957210    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 955211578, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:18.958157 kubelet[1457]: E0208 23:42:18.958125    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:18.959332 kubelet[1457]: E0208 23:42:18.959223    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 955235974, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:19.048995 kubelet[1457]: E0208 23:42:19.048751    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 955248277, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:19.292402 kubelet[1457]: E0208 23:42:19.291505    1457 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:19.360897 kubelet[1457]: I0208 23:42:19.360855    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:19.362555 kubelet[1457]: E0208 23:42:19.362400    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 19, 360019956, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:19.363053 kubelet[1457]: E0208 23:42:19.363005    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:19.449710 kubelet[1457]: E0208 23:42:19.449511    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 19, 360064239, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:19.515976 kubelet[1457]: W0208 23:42:19.515927    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:19.515976 kubelet[1457]: E0208 23:42:19.515985    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:19.631192 kubelet[1457]: E0208 23:42:19.631135    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:19.650187 kubelet[1457]: E0208 23:42:19.650029    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 19, 360071042, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:19.673574 kubelet[1457]: W0208 23:42:19.673522    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:19.673957 kubelet[1457]: E0208 23:42:19.673931    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:19.713034 kubelet[1457]: W0208 23:42:19.712986    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:19.713292 kubelet[1457]: E0208 23:42:19.713267    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:20.095194 kubelet[1457]: E0208 23:42:20.095087    1457 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:20.165226 kubelet[1457]: I0208 23:42:20.165182    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:20.167841 kubelet[1457]: E0208 23:42:20.167758    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:20.168171 kubelet[1457]: E0208 23:42:20.168025    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 20, 165095515, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:20.170906 kubelet[1457]: W0208 23:42:20.170871    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:20.171134 kubelet[1457]: E0208 23:42:20.171112    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:20.171291 kubelet[1457]: E0208 23:42:20.170830    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 20, 165120713, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:20.250025 kubelet[1457]: E0208 23:42:20.249847    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 20, 165127395, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:20.632292 kubelet[1457]: E0208 23:42:20.632224    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:21.477810 kubelet[1457]: W0208 23:42:21.477698    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:21.478679 kubelet[1457]: E0208 23:42:21.478649    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:21.633201 kubelet[1457]: E0208 23:42:21.633092    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:21.699520 kubelet[1457]: E0208 23:42:21.699469    1457 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:21.770115 kubelet[1457]: I0208 23:42:21.769927    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:21.773570 kubelet[1457]: E0208 23:42:21.773533    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:21.773862 kubelet[1457]: E0208 23:42:21.773505    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 21, 769847828, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:21.776499 kubelet[1457]: E0208 23:42:21.776372    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 21, 769865461, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:21.778581 kubelet[1457]: E0208 23:42:21.778452    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 21, 769871903, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:21.802712 kubelet[1457]: W0208 23:42:21.802656    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:21.803143 kubelet[1457]: E0208 23:42:21.803105    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:22.438129 kubelet[1457]: W0208 23:42:22.436596    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:22.438129 kubelet[1457]: E0208 23:42:22.436748    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:22.633934 kubelet[1457]: E0208 23:42:22.633852    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:23.282600 kubelet[1457]: W0208 23:42:23.282514    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:23.283098 kubelet[1457]: E0208 23:42:23.283069    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:23.635110 kubelet[1457]: E0208 23:42:23.635054    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:24.636727 kubelet[1457]: E0208 23:42:24.636567    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:24.903547 kubelet[1457]: E0208 23:42:24.903112    1457 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.24.4.77" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  8 23:42:24.976633 kubelet[1457]: I0208 23:42:24.976576    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:24.979261 kubelet[1457]: E0208 23:42:24.979212    1457 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.77"
Feb  8 23:42:24.979549 kubelet[1457]: E0208 23:42:24.979386    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b371a76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.77 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720049782, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 24, 976496807, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b371a76" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:24.982095 kubelet[1457]: E0208 23:42:24.981901    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3734af", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.77 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720056495, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 24, 976508429, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3734af" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:24.985809 kubelet[1457]: E0208 23:42:24.985603    1457 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.77.17b207c20b3740bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.77", UID:"172.24.4.77", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.77 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.77"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 18, 720059581, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 24, 976517336, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.77.17b207c20b3740bd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  8 23:42:25.637958 kubelet[1457]: E0208 23:42:25.637905    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:26.113154 kubelet[1457]: W0208 23:42:26.113033    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:26.113154 kubelet[1457]: E0208 23:42:26.113147    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.77" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  8 23:42:26.639569 kubelet[1457]: E0208 23:42:26.639475    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:26.863432 kubelet[1457]: W0208 23:42:26.863293    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:26.863432 kubelet[1457]: E0208 23:42:26.863420    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  8 23:42:27.259551 kubelet[1457]: W0208 23:42:27.259466    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:27.259551 kubelet[1457]: E0208 23:42:27.259534    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  8 23:42:27.639942 kubelet[1457]: E0208 23:42:27.639837    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:27.641189 kubelet[1457]: W0208 23:42:27.641152    1457 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  8 23:42:27.641396 kubelet[1457]: E0208 23:42:27.641370    1457 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
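The burst of 'User "system:anonymous" cannot ...' rejections above is the kubelet talking to the API server before its TLS bootstrap has produced usable client credentials: every list/watch, event patch, node registration and node-lease lookup is refused. A minimal client-go sketch of the same lease lookup the kubelet keeps retrying (the admin kubeconfig path is an illustrative assumption, not taken from this log; the lease name, namespace and node IP are from the controller.go:146 lines above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at an illustrative path; at this point in
	// the log the kubelet itself still has no credentials the server accepts.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The object the kubelet retries every few seconds: Lease "172.24.4.77"
	// in the kube-node-lease namespace.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "172.24.4.77", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lease lookup failed:", err) // anonymous callers get Forbidden, as logged above
		return
	}
	fmt.Println("lease exists, resourceVersion:", lease.ResourceVersion)
}
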
Feb  8 23:42:28.619159 kubelet[1457]: I0208 23:42:28.618883    1457 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb  8 23:42:28.641324 kubelet[1457]: E0208 23:42:28.641184    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:28.744912 kubelet[1457]: E0208 23:42:28.744860    1457 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.77\" not found"
Feb  8 23:42:29.069654 kubelet[1457]: E0208 23:42:29.068965    1457 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.77" not found
Feb  8 23:42:29.642498 kubelet[1457]: E0208 23:42:29.642368    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:30.079054 kubelet[1457]: E0208 23:42:30.078514    1457 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.77" not found
Feb  8 23:42:30.643586 kubelet[1457]: E0208 23:42:30.643506    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:31.313108 kubelet[1457]: E0208 23:42:31.313042    1457 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.77\" not found" node="172.24.4.77"
Feb  8 23:42:31.381261 kubelet[1457]: I0208 23:42:31.381188    1457 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.77"
Feb  8 23:42:31.516712 kubelet[1457]: I0208 23:42:31.516621    1457 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.77"
Feb  8 23:42:31.550714 kubelet[1457]: E0208 23:42:31.550667    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:31.645675 kubelet[1457]: E0208 23:42:31.645595    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:31.651277 kubelet[1457]: E0208 23:42:31.651231    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:31.752474 kubelet[1457]: E0208 23:42:31.752402    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:31.852939 kubelet[1457]: E0208 23:42:31.852880    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:31.952695 sudo[1254]: pam_unix(sudo:session): session closed for user root
Feb  8 23:42:31.954634 kubelet[1457]: E0208 23:42:31.953496    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.054672 kubelet[1457]: E0208 23:42:32.054497    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.139579 sshd[1248]: pam_unix(sshd:session): session closed for user core
Feb  8 23:42:32.147426 systemd-logind[1118]: Session 5 logged out. Waiting for processes to exit.
Feb  8 23:42:32.150426 systemd[1]: sshd@4-172.24.4.77:22-172.24.4.1:42632.service: Deactivated successfully.
Feb  8 23:42:32.156873 systemd[1]: session-5.scope: Deactivated successfully.
Feb  8 23:42:32.157609 kubelet[1457]: E0208 23:42:32.155572    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.162937 systemd-logind[1118]: Removed session 5.
Feb  8 23:42:32.256899 kubelet[1457]: E0208 23:42:32.256622    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.358233 kubelet[1457]: E0208 23:42:32.358166    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.459612 kubelet[1457]: E0208 23:42:32.459547    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.561152 kubelet[1457]: E0208 23:42:32.560944    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.646493 kubelet[1457]: E0208 23:42:32.646421    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:32.662266 kubelet[1457]: E0208 23:42:32.662133    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.762970 kubelet[1457]: E0208 23:42:32.762713    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.863954 kubelet[1457]: E0208 23:42:32.863860    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:32.964929 kubelet[1457]: E0208 23:42:32.964680    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.065062 kubelet[1457]: E0208 23:42:33.064931    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.165974 kubelet[1457]: E0208 23:42:33.165684    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.267047 kubelet[1457]: E0208 23:42:33.266889    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.367958 kubelet[1457]: E0208 23:42:33.367827    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.469255 kubelet[1457]: E0208 23:42:33.468950    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.570116 kubelet[1457]: E0208 23:42:33.569935    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.648181 kubelet[1457]: E0208 23:42:33.648093    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:33.670987 kubelet[1457]: E0208 23:42:33.670862    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.771697 kubelet[1457]: E0208 23:42:33.771466    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.872547 kubelet[1457]: E0208 23:42:33.872450    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:33.973217 kubelet[1457]: E0208 23:42:33.973160    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.074552 kubelet[1457]: E0208 23:42:34.074347    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.175233 kubelet[1457]: E0208 23:42:34.175158    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.276546 kubelet[1457]: E0208 23:42:34.276480    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.377277 kubelet[1457]: E0208 23:42:34.377177    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.478210 kubelet[1457]: E0208 23:42:34.478150    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.579387 kubelet[1457]: E0208 23:42:34.579271    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.650898 kubelet[1457]: E0208 23:42:34.649123    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:34.679572 kubelet[1457]: E0208 23:42:34.679430    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.780697 kubelet[1457]: E0208 23:42:34.780504    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.881738 kubelet[1457]: E0208 23:42:34.881569    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:34.982023 kubelet[1457]: E0208 23:42:34.981847    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.082858 kubelet[1457]: E0208 23:42:35.082743    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.183345 kubelet[1457]: E0208 23:42:35.183273    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.285217 kubelet[1457]: E0208 23:42:35.284401    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.386532 kubelet[1457]: E0208 23:42:35.386447    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.487072 kubelet[1457]: E0208 23:42:35.486979    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.587950 kubelet[1457]: E0208 23:42:35.587152    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.650418 kubelet[1457]: E0208 23:42:35.650296    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:35.689030 kubelet[1457]: E0208 23:42:35.688966    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.789511 kubelet[1457]: E0208 23:42:35.789443    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.890937 kubelet[1457]: E0208 23:42:35.890826    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:35.992054 kubelet[1457]: E0208 23:42:35.991993    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.093426 kubelet[1457]: E0208 23:42:36.093358    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.194310 kubelet[1457]: E0208 23:42:36.194116    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.295696 kubelet[1457]: E0208 23:42:36.295626    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.396510 kubelet[1457]: E0208 23:42:36.396407    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.497856 kubelet[1457]: E0208 23:42:36.497631    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.599212 kubelet[1457]: E0208 23:42:36.599126    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.650954 kubelet[1457]: E0208 23:42:36.650887    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:36.700218 kubelet[1457]: E0208 23:42:36.700099    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.800530 kubelet[1457]: E0208 23:42:36.800323    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:36.902148 kubelet[1457]: E0208 23:42:36.902054    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.003677 kubelet[1457]: E0208 23:42:37.003594    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.104657 kubelet[1457]: E0208 23:42:37.104595    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.205271 kubelet[1457]: E0208 23:42:37.205199    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.306563 kubelet[1457]: E0208 23:42:37.306463    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.407461 kubelet[1457]: E0208 23:42:37.407237    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.508601 kubelet[1457]: E0208 23:42:37.508543    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.609883 kubelet[1457]: E0208 23:42:37.609754    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.651849 kubelet[1457]: E0208 23:42:37.651769    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:37.711162 kubelet[1457]: E0208 23:42:37.710987    1457 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.77\" not found"
Feb  8 23:42:37.813081 kubelet[1457]: I0208 23:42:37.813003    1457 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb  8 23:42:37.814062 env[1133]: time="2024-02-08T23:42:37.813924713Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  8 23:42:37.814924 kubelet[1457]: I0208 23:42:37.814395    1457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
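With the node object now registered, the kubelet picks up the pod CIDR (192.168.1.0/24 above) from the API server and, as the log says, pushes it to containerd through the CRI. A small client-go sketch, under the same assumed admin kubeconfig as in the earlier sketch, that reads the field the kubelet is reacting to:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name as it appears throughout this log.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "172.24.4.77", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The value the kubelet logs as newPodCIDR and forwards to the runtime.
	fmt.Println("spec.podCIDR:", node.Spec.PodCIDR) // expected: 192.168.1.0/24
}
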
Feb  8 23:42:38.631063 kubelet[1457]: E0208 23:42:38.630915    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:38.646282 kubelet[1457]: I0208 23:42:38.646225    1457 apiserver.go:52] "Watching apiserver"
Feb  8 23:42:38.651748 kubelet[1457]: I0208 23:42:38.651678    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:42:38.652013 kubelet[1457]: I0208 23:42:38.651967    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:42:38.654611 kubelet[1457]: E0208 23:42:38.654574    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:38.760434 kubelet[1457]: I0208 23:42:38.760368    1457 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  8 23:42:38.780940 kubelet[1457]: I0208 23:42:38.780853    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-bpf-maps\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.781152 kubelet[1457]: I0208 23:42:38.781032    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rrm\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-kube-api-access-g8rrm\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.781152 kubelet[1457]: I0208 23:42:38.781143    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a9921ac-1465-475f-89b5-41bd677f564d-xtables-lock\") pod \"kube-proxy-rqj5z\" (UID: \"3a9921ac-1465-475f-89b5-41bd677f564d\") " pod="kube-system/kube-proxy-rqj5z"
Feb  8 23:42:38.781350 kubelet[1457]: I0208 23:42:38.781265    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-run\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.781427 kubelet[1457]: I0208 23:42:38.781374    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hostproc\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.781502 kubelet[1457]: I0208 23:42:38.781480    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cni-path\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.781663 kubelet[1457]: I0208 23:42:38.781585    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-xtables-lock\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.782042 kubelet[1457]: I0208 23:42:38.781990    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-net\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.782307 kubelet[1457]: I0208 23:42:38.782281    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krv44\" (UniqueName: \"kubernetes.io/projected/3a9921ac-1465-475f-89b5-41bd677f564d-kube-api-access-krv44\") pod \"kube-proxy-rqj5z\" (UID: \"3a9921ac-1465-475f-89b5-41bd677f564d\") " pod="kube-system/kube-proxy-rqj5z"
Feb  8 23:42:38.782537 kubelet[1457]: I0208 23:42:38.782513    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-etc-cni-netd\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.782819 kubelet[1457]: I0208 23:42:38.782755    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-config-path\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.783067 kubelet[1457]: I0208 23:42:38.783040    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-kernel\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.783385 kubelet[1457]: I0208 23:42:38.783359    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hubble-tls\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.783617 kubelet[1457]: I0208 23:42:38.783592    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a9921ac-1465-475f-89b5-41bd677f564d-kube-proxy\") pod \"kube-proxy-rqj5z\" (UID: \"3a9921ac-1465-475f-89b5-41bd677f564d\") " pod="kube-system/kube-proxy-rqj5z"
Feb  8 23:42:38.783868 kubelet[1457]: I0208 23:42:38.783843    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-cgroup\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.784116 kubelet[1457]: I0208 23:42:38.784088    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-lib-modules\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.784344 kubelet[1457]: I0208 23:42:38.784320    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-clustermesh-secrets\") pod \"cilium-2pwfh\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") " pod="kube-system/cilium-2pwfh"
Feb  8 23:42:38.784581 kubelet[1457]: I0208 23:42:38.784557    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a9921ac-1465-475f-89b5-41bd677f564d-lib-modules\") pod \"kube-proxy-rqj5z\" (UID: \"3a9921ac-1465-475f-89b5-41bd677f564d\") " pod="kube-system/kube-proxy-rqj5z"
Feb  8 23:42:38.784750 kubelet[1457]: I0208 23:42:38.784728    1457 reconciler.go:41] "Reconciler: start to sync state"
Feb  8 23:42:38.962565 env[1133]: time="2024-02-08T23:42:38.962359557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqj5z,Uid:3a9921ac-1465-475f-89b5-41bd677f564d,Namespace:kube-system,Attempt:0,}"
Feb  8 23:42:39.264043 env[1133]: time="2024-02-08T23:42:39.263852541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2pwfh,Uid:369b9b3c-dbdd-43f3-a6ab-3817da0a96cd,Namespace:kube-system,Attempt:0,}"
Feb  8 23:42:39.655836 kubelet[1457]: E0208 23:42:39.655664    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:39.771527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392391669.mount: Deactivated successfully.
Feb  8 23:42:39.785353 env[1133]: time="2024-02-08T23:42:39.785260431Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.787403 env[1133]: time="2024-02-08T23:42:39.787345505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.793009 env[1133]: time="2024-02-08T23:42:39.792939699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.797609 env[1133]: time="2024-02-08T23:42:39.797533304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.803633 env[1133]: time="2024-02-08T23:42:39.803577213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.809584 env[1133]: time="2024-02-08T23:42:39.809499141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.811482 env[1133]: time="2024-02-08T23:42:39.811408386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.813460 env[1133]: time="2024-02-08T23:42:39.813304687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:39.853841 env[1133]: time="2024-02-08T23:42:39.853625083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:42:39.853841 env[1133]: time="2024-02-08T23:42:39.853732745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:42:39.854415 env[1133]: time="2024-02-08T23:42:39.853765959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:42:39.854742 env[1133]: time="2024-02-08T23:42:39.854404758Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf pid=1554 runtime=io.containerd.runc.v2
Feb  8 23:42:39.861638 env[1133]: time="2024-02-08T23:42:39.861500280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:42:39.862136 env[1133]: time="2024-02-08T23:42:39.862069499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:42:39.862424 env[1133]: time="2024-02-08T23:42:39.862325780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:42:39.863131 env[1133]: time="2024-02-08T23:42:39.863017308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6be8293b5f8c9de210e6c7f4f360cd1736d3879967ab4ee42a1b612e36a4f9b9 pid=1562 runtime=io.containerd.runc.v2
Feb  8 23:42:39.933279 env[1133]: time="2024-02-08T23:42:39.933144912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2pwfh,Uid:369b9b3c-dbdd-43f3-a6ab-3817da0a96cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\""
Feb  8 23:42:39.935887 env[1133]: time="2024-02-08T23:42:39.935853427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rqj5z,Uid:3a9921ac-1465-475f-89b5-41bd677f564d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6be8293b5f8c9de210e6c7f4f360cd1736d3879967ab4ee42a1b612e36a4f9b9\""
Feb  8 23:42:39.936674 env[1133]: time="2024-02-08T23:42:39.936649432Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  8 23:42:40.656276 kubelet[1457]: E0208 23:42:40.656161    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:41.656951 kubelet[1457]: E0208 23:42:41.656832    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:42.657752 kubelet[1457]: E0208 23:42:42.657664    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:43.310851 update_engine[1119]: I0208 23:42:43.310509  1119 update_attempter.cc:509] Updating boot flags...
Feb  8 23:42:43.657907 kubelet[1457]: E0208 23:42:43.657845    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:44.658943 kubelet[1457]: E0208 23:42:44.658839    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:45.659554 kubelet[1457]: E0208 23:42:45.659437    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:46.661393 kubelet[1457]: E0208 23:42:46.661287    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:47.491287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295544936.mount: Deactivated successfully.
Feb  8 23:42:47.663536 kubelet[1457]: E0208 23:42:47.663410    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:48.664474 kubelet[1457]: E0208 23:42:48.664410    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:49.666133 kubelet[1457]: E0208 23:42:49.665946    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:50.668317 kubelet[1457]: E0208 23:42:50.668218    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:51.668362 kubelet[1457]: E0208 23:42:51.668322    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:51.895943 env[1133]: time="2024-02-08T23:42:51.895826797Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:51.901624 env[1133]: time="2024-02-08T23:42:51.901549625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:51.905006 env[1133]: time="2024-02-08T23:42:51.904958843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:51.906844 env[1133]: time="2024-02-08T23:42:51.906750344Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb  8 23:42:51.909955 env[1133]: time="2024-02-08T23:42:51.909877953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb  8 23:42:51.913820 env[1133]: time="2024-02-08T23:42:51.913716756Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:42:51.949595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764225962.mount: Deactivated successfully.
Feb  8 23:42:51.971082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459555801.mount: Deactivated successfully.
Feb  8 23:42:51.986053 env[1133]: time="2024-02-08T23:42:51.985935794Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\""
Feb  8 23:42:51.988522 env[1133]: time="2024-02-08T23:42:51.988465692Z" level=info msg="StartContainer for \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\""
Feb  8 23:42:52.077812 env[1133]: time="2024-02-08T23:42:52.076767454Z" level=info msg="StartContainer for \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\" returns successfully"
Feb  8 23:42:52.412066 env[1133]: time="2024-02-08T23:42:52.411971105Z" level=info msg="shim disconnected" id=e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08
Feb  8 23:42:52.412443 env[1133]: time="2024-02-08T23:42:52.412395591Z" level=warning msg="cleaning up after shim disconnected" id=e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08 namespace=k8s.io
Feb  8 23:42:52.412585 env[1133]: time="2024-02-08T23:42:52.412553306Z" level=info msg="cleaning up dead shim"
Feb  8 23:42:52.428460 env[1133]: time="2024-02-08T23:42:52.428377840Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1685 runtime=io.containerd.runc.v2\n"
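The shim-disconnected/cleanup lines above mark the mount-cgroup container in the cilium-2pwfh sandbox exiting after a successful run; the same pattern repeats below for apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, which in the upstream Cilium manifests are the pod's init containers (an assumption about this deployment, not stated in the log). A hedged client-go sketch, again under the assumed admin kubeconfig, for checking those results from the API side rather than from containerd's shim logs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name and namespace taken from the RunPodSandbox lines above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-2pwfh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range pod.Status.InitContainerStatuses {
		if t := s.State.Terminated; t != nil {
			fmt.Printf("%s exited %d (%s)\n", s.Name, t.ExitCode, t.Reason)
		} else {
			fmt.Printf("%s still running or waiting\n", s.Name)
		}
	}
}
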
Feb  8 23:42:52.669895 kubelet[1457]: E0208 23:42:52.669284    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:52.935485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08-rootfs.mount: Deactivated successfully.
Feb  8 23:42:53.007056 env[1133]: time="2024-02-08T23:42:53.006966636Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  8 23:42:53.036877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403981240.mount: Deactivated successfully.
Feb  8 23:42:53.053597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811262574.mount: Deactivated successfully.
Feb  8 23:42:53.062887 env[1133]: time="2024-02-08T23:42:53.062833683Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\""
Feb  8 23:42:53.063734 env[1133]: time="2024-02-08T23:42:53.063713564Z" level=info msg="StartContainer for \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\""
Feb  8 23:42:53.138813 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  8 23:42:53.139165 systemd[1]: Stopped systemd-sysctl.service.
Feb  8 23:42:53.139354 systemd[1]: Stopping systemd-sysctl.service...
Feb  8 23:42:53.143129 systemd[1]: Starting systemd-sysctl.service...
Feb  8 23:42:53.157700 systemd[1]: Finished systemd-sysctl.service.
Feb  8 23:42:53.166707 env[1133]: time="2024-02-08T23:42:53.166665709Z" level=info msg="StartContainer for \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\" returns successfully"
Feb  8 23:42:53.237375 env[1133]: time="2024-02-08T23:42:53.236375018Z" level=info msg="shim disconnected" id=c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39
Feb  8 23:42:53.237758 env[1133]: time="2024-02-08T23:42:53.237713731Z" level=warning msg="cleaning up after shim disconnected" id=c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39 namespace=k8s.io
Feb  8 23:42:53.237904 env[1133]: time="2024-02-08T23:42:53.237880213Z" level=info msg="cleaning up dead shim"
Feb  8 23:42:53.251042 env[1133]: time="2024-02-08T23:42:53.250990743Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1753 runtime=io.containerd.runc.v2\n"
Feb  8 23:42:53.670214 kubelet[1457]: E0208 23:42:53.670163    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:54.009251 env[1133]: time="2024-02-08T23:42:54.008946260Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  8 23:42:54.033814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431686499.mount: Deactivated successfully.
Feb  8 23:42:54.052571 env[1133]: time="2024-02-08T23:42:54.052482117Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\""
Feb  8 23:42:54.053638 env[1133]: time="2024-02-08T23:42:54.053601678Z" level=info msg="StartContainer for \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\""
Feb  8 23:42:54.162615 env[1133]: time="2024-02-08T23:42:54.162554647Z" level=info msg="StartContainer for \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\" returns successfully"
Feb  8 23:42:54.407444 env[1133]: time="2024-02-08T23:42:54.407343627Z" level=info msg="shim disconnected" id=d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334
Feb  8 23:42:54.407819 env[1133]: time="2024-02-08T23:42:54.407470034Z" level=warning msg="cleaning up after shim disconnected" id=d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334 namespace=k8s.io
Feb  8 23:42:54.407819 env[1133]: time="2024-02-08T23:42:54.407500772Z" level=info msg="cleaning up dead shim"
Feb  8 23:42:54.425987 env[1133]: time="2024-02-08T23:42:54.425896247Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1814 runtime=io.containerd.runc.v2\n"
Feb  8 23:42:54.671388 kubelet[1457]: E0208 23:42:54.670929    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:54.839344 env[1133]: time="2024-02-08T23:42:54.839266306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:54.841630 env[1133]: time="2024-02-08T23:42:54.841567724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:54.843636 env[1133]: time="2024-02-08T23:42:54.843599547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:54.846498 env[1133]: time="2024-02-08T23:42:54.846475693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:42:54.847881 env[1133]: time="2024-02-08T23:42:54.847753220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb  8 23:42:54.851526 env[1133]: time="2024-02-08T23:42:54.851459134Z" level=info msg="CreateContainer within sandbox \"6be8293b5f8c9de210e6c7f4f360cd1736d3879967ab4ee42a1b612e36a4f9b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  8 23:42:54.878823 env[1133]: time="2024-02-08T23:42:54.878738047Z" level=info msg="CreateContainer within sandbox \"6be8293b5f8c9de210e6c7f4f360cd1736d3879967ab4ee42a1b612e36a4f9b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0252535bfc13942d739f650a2ee1a7165f2e917f98c5699a295c77e9d2e22d6f\""
Feb  8 23:42:54.879967 env[1133]: time="2024-02-08T23:42:54.879926477Z" level=info msg="StartContainer for \"0252535bfc13942d739f650a2ee1a7165f2e917f98c5699a295c77e9d2e22d6f\""
Feb  8 23:42:54.937131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334-rootfs.mount: Deactivated successfully.
Feb  8 23:42:55.009211 env[1133]: time="2024-02-08T23:42:55.009168975Z" level=info msg="StartContainer for \"0252535bfc13942d739f650a2ee1a7165f2e917f98c5699a295c77e9d2e22d6f\" returns successfully"
Feb  8 23:42:55.016952 env[1133]: time="2024-02-08T23:42:55.016914829Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  8 23:42:55.037633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011551394.mount: Deactivated successfully.
Feb  8 23:42:55.044287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062658232.mount: Deactivated successfully.
Feb  8 23:42:55.057946 env[1133]: time="2024-02-08T23:42:55.057895409Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\""
Feb  8 23:42:55.058966 env[1133]: time="2024-02-08T23:42:55.058941932Z" level=info msg="StartContainer for \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\""
Feb  8 23:42:55.070811 kubelet[1457]: I0208 23:42:55.070220    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rqj5z" podStartSLOduration=-9.223372012784615e+09 pod.CreationTimestamp="2024-02-08 23:42:31 +0000 UTC" firstStartedPulling="2024-02-08 23:42:39.938029624 +0000 UTC m=+22.008803933" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:55.06974974 +0000 UTC m=+37.140524080" watchObservedRunningTime="2024-02-08 23:42:55.070160913 +0000 UTC m=+37.140935232"
Feb  8 23:42:55.159363 env[1133]: time="2024-02-08T23:42:55.159307093Z" level=info msg="StartContainer for \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\" returns successfully"
Feb  8 23:42:55.285574 env[1133]: time="2024-02-08T23:42:55.285354020Z" level=info msg="shim disconnected" id=e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b
Feb  8 23:42:55.287670 env[1133]: time="2024-02-08T23:42:55.287589073Z" level=warning msg="cleaning up after shim disconnected" id=e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b namespace=k8s.io
Feb  8 23:42:55.287949 env[1133]: time="2024-02-08T23:42:55.287909966Z" level=info msg="cleaning up dead shim"
Feb  8 23:42:55.330077 env[1133]: time="2024-02-08T23:42:55.329082125Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1970 runtime=io.containerd.runc.v2\n"
Feb  8 23:42:55.672101 kubelet[1457]: E0208 23:42:55.671980    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:56.033889 env[1133]: time="2024-02-08T23:42:56.033198756Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  8 23:42:56.064726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862130246.mount: Deactivated successfully.
Feb  8 23:42:56.077048 env[1133]: time="2024-02-08T23:42:56.076931418Z" level=info msg="CreateContainer within sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\""
Feb  8 23:42:56.078405 env[1133]: time="2024-02-08T23:42:56.078331054Z" level=info msg="StartContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\""
Feb  8 23:42:56.227499 env[1133]: time="2024-02-08T23:42:56.227446186Z" level=info msg="StartContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" returns successfully"
Feb  8 23:42:56.354278 kubelet[1457]: I0208 23:42:56.354232    1457 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  8 23:42:56.673320 kubelet[1457]: E0208 23:42:56.673047    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:56.780810 kernel: Initializing XFRM netlink socket
Feb  8 23:42:57.675255 kubelet[1457]: E0208 23:42:57.675089    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:58.555069 systemd-networkd[1022]: cilium_host: Link UP
Feb  8 23:42:58.565934 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb  8 23:42:58.566061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  8 23:42:58.566301 systemd-networkd[1022]: cilium_net: Link UP
Feb  8 23:42:58.566737 systemd-networkd[1022]: cilium_net: Gained carrier
Feb  8 23:42:58.567006 systemd-networkd[1022]: cilium_host: Gained carrier
Feb  8 23:42:58.630846 kubelet[1457]: E0208 23:42:58.630761    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:58.675726 kubelet[1457]: E0208 23:42:58.675652    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:42:58.691324 systemd-networkd[1022]: cilium_vxlan: Link UP
Feb  8 23:42:58.691332 systemd-networkd[1022]: cilium_vxlan: Gained carrier
Feb  8 23:42:58.902239 systemd-networkd[1022]: cilium_net: Gained IPv6LL
Feb  8 23:42:58.926097 systemd-networkd[1022]: cilium_host: Gained IPv6LL
Feb  8 23:42:59.076863 kernel: NET: Registered PF_ALG protocol family
Feb  8 23:42:59.676286 kubelet[1457]: E0208 23:42:59.676194    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:00.190947 systemd-networkd[1022]: lxc_health: Link UP
Feb  8 23:43:00.201531 systemd-networkd[1022]: lxc_health: Gained carrier
Feb  8 23:43:00.202704 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  8 23:43:00.434099 systemd-networkd[1022]: cilium_vxlan: Gained IPv6LL
Feb  8 23:43:00.676885 kubelet[1457]: E0208 23:43:00.676734    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:01.295548 kubelet[1457]: I0208 23:43:01.295480    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2pwfh" podStartSLOduration=-9.223372006559374e+09 pod.CreationTimestamp="2024-02-08 23:42:31 +0000 UTC" firstStartedPulling="2024-02-08 23:42:39.936185992 +0000 UTC m=+22.006960311" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:57.0748501 +0000 UTC m=+39.145624509" watchObservedRunningTime="2024-02-08 23:43:01.295401818 +0000 UTC m=+43.366176147"
Feb  8 23:43:01.677401 kubelet[1457]: E0208 23:43:01.677307    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:01.859360 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Feb  8 23:43:02.678256 kubelet[1457]: E0208 23:43:02.678092    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:03.679436 kubelet[1457]: E0208 23:43:03.679338    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:04.679672 kubelet[1457]: E0208 23:43:04.679544    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:05.680604 kubelet[1457]: E0208 23:43:05.680544    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:06.142505 kubelet[1457]: I0208 23:43:06.142423    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:43:06.198587 kubelet[1457]: I0208 23:43:06.198504    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfvw\" (UniqueName: \"kubernetes.io/projected/5f4088d7-cff3-4436-b0f5-8a1f98b1a961-kube-api-access-cmfvw\") pod \"nginx-deployment-8ffc5cf85-8xhgj\" (UID: \"5f4088d7-cff3-4436-b0f5-8a1f98b1a961\") " pod="default/nginx-deployment-8ffc5cf85-8xhgj"
Feb  8 23:43:06.462865 env[1133]: time="2024-02-08T23:43:06.460656610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-8xhgj,Uid:5f4088d7-cff3-4436-b0f5-8a1f98b1a961,Namespace:default,Attempt:0,}"
Feb  8 23:43:06.608511 systemd-networkd[1022]: lxccbf0c32e0c92: Link UP
Feb  8 23:43:06.616850 kernel: eth0: renamed from tmp8d0fe
Feb  8 23:43:06.631381 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  8 23:43:06.631503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccbf0c32e0c92: link becomes ready
Feb  8 23:43:06.631658 systemd-networkd[1022]: lxccbf0c32e0c92: Gained carrier
Feb  8 23:43:06.681656 kubelet[1457]: E0208 23:43:06.681586    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:06.950566 env[1133]: time="2024-02-08T23:43:06.950116161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:43:06.950566 env[1133]: time="2024-02-08T23:43:06.950216430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:43:06.950566 env[1133]: time="2024-02-08T23:43:06.950249752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:43:06.951578 env[1133]: time="2024-02-08T23:43:06.951429816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d0fe05bc369ad09cbc184c7fc45555fdf331a81621a7da8ae39269066f80649 pid=2540 runtime=io.containerd.runc.v2
Feb  8 23:43:07.079841 env[1133]: time="2024-02-08T23:43:07.079736942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-8xhgj,Uid:5f4088d7-cff3-4436-b0f5-8a1f98b1a961,Namespace:default,Attempt:0,} returns sandbox id \"8d0fe05bc369ad09cbc184c7fc45555fdf331a81621a7da8ae39269066f80649\""
Feb  8 23:43:07.081877 env[1133]: time="2024-02-08T23:43:07.081587403Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  8 23:43:07.328723 systemd[1]: run-containerd-runc-k8s.io-8d0fe05bc369ad09cbc184c7fc45555fdf331a81621a7da8ae39269066f80649-runc.JboCI2.mount: Deactivated successfully.
Feb  8 23:43:07.682199 kubelet[1457]: E0208 23:43:07.682101    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:08.422337 systemd-networkd[1022]: lxccbf0c32e0c92: Gained IPv6LL
Feb  8 23:43:08.683607 kubelet[1457]: E0208 23:43:08.683346    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:09.688190 kubelet[1457]: E0208 23:43:09.688098    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:10.688909 kubelet[1457]: E0208 23:43:10.688799    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:11.690070 kubelet[1457]: E0208 23:43:11.689924    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:11.742325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450303846.mount: Deactivated successfully.
Feb  8 23:43:12.690846 kubelet[1457]: E0208 23:43:12.690764    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:13.197554 env[1133]: time="2024-02-08T23:43:13.197462556Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:13.201044 env[1133]: time="2024-02-08T23:43:13.200993769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:13.211016 env[1133]: time="2024-02-08T23:43:13.210959963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:13.214540 env[1133]: time="2024-02-08T23:43:13.214493191Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:13.216617 env[1133]: time="2024-02-08T23:43:13.216556381Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb  8 23:43:13.220148 env[1133]: time="2024-02-08T23:43:13.220094367Z" level=info msg="CreateContainer within sandbox \"8d0fe05bc369ad09cbc184c7fc45555fdf331a81621a7da8ae39269066f80649\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb  8 23:43:13.274065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73027479.mount: Deactivated successfully.
Feb  8 23:43:13.302280 env[1133]: time="2024-02-08T23:43:13.302229260Z" level=info msg="CreateContainer within sandbox \"8d0fe05bc369ad09cbc184c7fc45555fdf331a81621a7da8ae39269066f80649\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2288b74967d171a1913013c072d9e71f86c0e1dc8353ef252ef9c2d791be0f41\""
Feb  8 23:43:13.303321 env[1133]: time="2024-02-08T23:43:13.303294568Z" level=info msg="StartContainer for \"2288b74967d171a1913013c072d9e71f86c0e1dc8353ef252ef9c2d791be0f41\""
Feb  8 23:43:13.388642 env[1133]: time="2024-02-08T23:43:13.388565464Z" level=info msg="StartContainer for \"2288b74967d171a1913013c072d9e71f86c0e1dc8353ef252ef9c2d791be0f41\" returns successfully"
Feb  8 23:43:13.691841 kubelet[1457]: E0208 23:43:13.691744    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:14.693313 kubelet[1457]: E0208 23:43:14.693200    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:15.693599 kubelet[1457]: E0208 23:43:15.693458    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:16.694760 kubelet[1457]: E0208 23:43:16.694637    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:17.696862 kubelet[1457]: E0208 23:43:17.696706    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:18.630893 kubelet[1457]: E0208 23:43:18.630821    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:18.698574 kubelet[1457]: E0208 23:43:18.698527    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:19.701024 kubelet[1457]: E0208 23:43:19.700913    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:20.701665 kubelet[1457]: E0208 23:43:20.701595    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:21.494868 kubelet[1457]: I0208 23:43:21.494712    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-8xhgj" podStartSLOduration=-9.223372021360172e+09 pod.CreationTimestamp="2024-02-08 23:43:06 +0000 UTC" firstStartedPulling="2024-02-08 23:43:07.081169148 +0000 UTC m=+49.151943467" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:14.105915923 +0000 UTC m=+56.176690312" watchObservedRunningTime="2024-02-08 23:43:21.494603185 +0000 UTC m=+63.565377564"
Feb  8 23:43:21.496995 kubelet[1457]: I0208 23:43:21.495473    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:43:21.529379 kubelet[1457]: I0208 23:43:21.529218    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljsb2\" (UniqueName: \"kubernetes.io/projected/c996f5f2-e94c-4ab9-813f-b46f5a729b13-kube-api-access-ljsb2\") pod \"nfs-server-provisioner-0\" (UID: \"c996f5f2-e94c-4ab9-813f-b46f5a729b13\") " pod="default/nfs-server-provisioner-0"
Feb  8 23:43:21.529379 kubelet[1457]: I0208 23:43:21.529306    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c996f5f2-e94c-4ab9-813f-b46f5a729b13-data\") pod \"nfs-server-provisioner-0\" (UID: \"c996f5f2-e94c-4ab9-813f-b46f5a729b13\") " pod="default/nfs-server-provisioner-0"
Feb  8 23:43:21.702761 kubelet[1457]: E0208 23:43:21.702629    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:21.812898 env[1133]: time="2024-02-08T23:43:21.812579905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c996f5f2-e94c-4ab9-813f-b46f5a729b13,Namespace:default,Attempt:0,}"
Feb  8 23:43:21.914093 systemd-networkd[1022]: lxc80a1d959d336: Link UP
Feb  8 23:43:21.922991 kernel: eth0: renamed from tmpa1616
Feb  8 23:43:21.934282 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  8 23:43:21.934406 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc80a1d959d336: link becomes ready
Feb  8 23:43:21.938117 systemd-networkd[1022]: lxc80a1d959d336: Gained carrier
Feb  8 23:43:22.329397 env[1133]: time="2024-02-08T23:43:22.328768946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:43:22.329397 env[1133]: time="2024-02-08T23:43:22.329002795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:43:22.329397 env[1133]: time="2024-02-08T23:43:22.329038812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:43:22.330070 env[1133]: time="2024-02-08T23:43:22.329639048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a16165e24e24af7c64a1a0f43c399fe2568fafef80bd95f2a01d52fb7683b4a4 pid=2711 runtime=io.containerd.runc.v2
Feb  8 23:43:22.414871 env[1133]: time="2024-02-08T23:43:22.414809286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c996f5f2-e94c-4ab9-813f-b46f5a729b13,Namespace:default,Attempt:0,} returns sandbox id \"a16165e24e24af7c64a1a0f43c399fe2568fafef80bd95f2a01d52fb7683b4a4\""
Feb  8 23:43:22.417038 env[1133]: time="2024-02-08T23:43:22.416997801Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb  8 23:43:22.703872 kubelet[1457]: E0208 23:43:22.703744    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:23.704749 kubelet[1457]: E0208 23:43:23.704672    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:23.782962 systemd-networkd[1022]: lxc80a1d959d336: Gained IPv6LL
Feb  8 23:43:24.705195 kubelet[1457]: E0208 23:43:24.705134    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:25.706222 kubelet[1457]: E0208 23:43:25.706134    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:26.374928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822195436.mount: Deactivated successfully.
Feb  8 23:43:26.707471 kubelet[1457]: E0208 23:43:26.707084    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:27.707904 kubelet[1457]: E0208 23:43:27.707727    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:28.709735 kubelet[1457]: E0208 23:43:28.709649    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:29.597307 env[1133]: time="2024-02-08T23:43:29.597070762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:29.602188 env[1133]: time="2024-02-08T23:43:29.602114401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:29.606534 env[1133]: time="2024-02-08T23:43:29.606463479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:29.611098 env[1133]: time="2024-02-08T23:43:29.611026898Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:29.613194 env[1133]: time="2024-02-08T23:43:29.613136795Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb  8 23:43:29.620621 env[1133]: time="2024-02-08T23:43:29.620510585Z" level=info msg="CreateContainer within sandbox \"a16165e24e24af7c64a1a0f43c399fe2568fafef80bd95f2a01d52fb7683b4a4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb  8 23:43:29.641674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241136354.mount: Deactivated successfully.
Feb  8 23:43:29.653269 env[1133]: time="2024-02-08T23:43:29.653220947Z" level=info msg="CreateContainer within sandbox \"a16165e24e24af7c64a1a0f43c399fe2568fafef80bd95f2a01d52fb7683b4a4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f387caf52c084b32dbf48c300b37348d613b4488a0741b0964c6a550799ac2b7\""
Feb  8 23:43:29.654853 env[1133]: time="2024-02-08T23:43:29.654810247Z" level=info msg="StartContainer for \"f387caf52c084b32dbf48c300b37348d613b4488a0741b0964c6a550799ac2b7\""
Feb  8 23:43:29.712947 kubelet[1457]: E0208 23:43:29.712899    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:29.742111 env[1133]: time="2024-02-08T23:43:29.742069579Z" level=info msg="StartContainer for \"f387caf52c084b32dbf48c300b37348d613b4488a0741b0964c6a550799ac2b7\" returns successfully"
Feb  8 23:43:30.249306 kubelet[1457]: I0208 23:43:30.249243    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027605646e+09 pod.CreationTimestamp="2024-02-08 23:43:21 +0000 UTC" firstStartedPulling="2024-02-08 23:43:22.416283762 +0000 UTC m=+64.487058121" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:30.247666427 +0000 UTC m=+72.318440846" watchObservedRunningTime="2024-02-08 23:43:30.249129361 +0000 UTC m=+72.319903740"
Feb  8 23:43:30.714304 kubelet[1457]: E0208 23:43:30.714152    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:31.715010 kubelet[1457]: E0208 23:43:31.714954    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:32.716320 kubelet[1457]: E0208 23:43:32.716262    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:33.717568 kubelet[1457]: E0208 23:43:33.717435    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:34.718464 kubelet[1457]: E0208 23:43:34.718408    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:35.720599 kubelet[1457]: E0208 23:43:35.720512    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:36.721126 kubelet[1457]: E0208 23:43:36.721072    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:37.722668 kubelet[1457]: E0208 23:43:37.722604    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:38.630767 kubelet[1457]: E0208 23:43:38.630719    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:38.724098 kubelet[1457]: E0208 23:43:38.723959    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:39.724749 kubelet[1457]: E0208 23:43:39.724660    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:40.168060 kubelet[1457]: I0208 23:43:40.167980    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:43:40.281450 kubelet[1457]: I0208 23:43:40.281393    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed6dde74-92d3-4e88-9f0d-1fdc315e19fc\" (UniqueName: \"kubernetes.io/nfs/86826811-0005-4e6e-985b-b7607e6a2483-pvc-ed6dde74-92d3-4e88-9f0d-1fdc315e19fc\") pod \"test-pod-1\" (UID: \"86826811-0005-4e6e-985b-b7607e6a2483\") " pod="default/test-pod-1"
Feb  8 23:43:40.282128 kubelet[1457]: I0208 23:43:40.282041    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77g7n\" (UniqueName: \"kubernetes.io/projected/86826811-0005-4e6e-985b-b7607e6a2483-kube-api-access-77g7n\") pod \"test-pod-1\" (UID: \"86826811-0005-4e6e-985b-b7607e6a2483\") " pod="default/test-pod-1"
Feb  8 23:43:40.464853 kernel: FS-Cache: Loaded
Feb  8 23:43:40.537896 kernel: RPC: Registered named UNIX socket transport module.
Feb  8 23:43:40.538116 kernel: RPC: Registered udp transport module.
Feb  8 23:43:40.538173 kernel: RPC: Registered tcp transport module.
Feb  8 23:43:40.538535 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  8 23:43:40.601892 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb  8 23:43:40.725058 kubelet[1457]: E0208 23:43:40.724983    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:40.832265 kernel: NFS: Registering the id_resolver key type
Feb  8 23:43:40.832509 kernel: Key type id_resolver registered
Feb  8 23:43:40.832572 kernel: Key type id_legacy registered
Feb  8 23:43:40.928992 nfsidmap[2855]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb  8 23:43:40.941437 nfsidmap[2856]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal'
Feb  8 23:43:41.084000 env[1133]: time="2024-02-08T23:43:41.083067884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:86826811-0005-4e6e-985b-b7607e6a2483,Namespace:default,Attempt:0,}"
Feb  8 23:43:41.183527 systemd-networkd[1022]: lxc5bc50c0906fb: Link UP
Feb  8 23:43:41.188862 kernel: eth0: renamed from tmpd891d
Feb  8 23:43:41.195807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  8 23:43:41.195960 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5bc50c0906fb: link becomes ready
Feb  8 23:43:41.196202 systemd-networkd[1022]: lxc5bc50c0906fb: Gained carrier
Feb  8 23:43:41.568806 env[1133]: time="2024-02-08T23:43:41.568256882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:43:41.569157 env[1133]: time="2024-02-08T23:43:41.569129195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:43:41.569278 env[1133]: time="2024-02-08T23:43:41.569254862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:43:41.569740 env[1133]: time="2024-02-08T23:43:41.569707894Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d891dd0b58e0f513daa0d03e787480d3dceb2c36bb3a04a35f36d15c59e339ea pid=2882 runtime=io.containerd.runc.v2
Feb  8 23:43:41.669251 env[1133]: time="2024-02-08T23:43:41.669174362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:86826811-0005-4e6e-985b-b7607e6a2483,Namespace:default,Attempt:0,} returns sandbox id \"d891dd0b58e0f513daa0d03e787480d3dceb2c36bb3a04a35f36d15c59e339ea\""
Feb  8 23:43:41.671569 env[1133]: time="2024-02-08T23:43:41.671542702Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  8 23:43:41.726207 kubelet[1457]: E0208 23:43:41.726103    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:42.180399 env[1133]: time="2024-02-08T23:43:42.180297232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:42.183972 env[1133]: time="2024-02-08T23:43:42.183895576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:42.188085 env[1133]: time="2024-02-08T23:43:42.188019211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:42.192052 env[1133]: time="2024-02-08T23:43:42.191977284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:43:42.194185 env[1133]: time="2024-02-08T23:43:42.194124697Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb  8 23:43:42.199218 env[1133]: time="2024-02-08T23:43:42.199156030Z" level=info msg="CreateContainer within sandbox \"d891dd0b58e0f513daa0d03e787480d3dceb2c36bb3a04a35f36d15c59e339ea\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb  8 23:43:42.238087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827669617.mount: Deactivated successfully.
Feb  8 23:43:42.238879 env[1133]: time="2024-02-08T23:43:42.238610409Z" level=info msg="CreateContainer within sandbox \"d891dd0b58e0f513daa0d03e787480d3dceb2c36bb3a04a35f36d15c59e339ea\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"01440fce424de44c34233b04034718599e414e72bf32dee7ac19c4ae937b3634\""
Feb  8 23:43:42.240324 env[1133]: time="2024-02-08T23:43:42.240260196Z" level=info msg="StartContainer for \"01440fce424de44c34233b04034718599e414e72bf32dee7ac19c4ae937b3634\""
Feb  8 23:43:42.340636 env[1133]: time="2024-02-08T23:43:42.340580651Z" level=info msg="StartContainer for \"01440fce424de44c34233b04034718599e414e72bf32dee7ac19c4ae937b3634\" returns successfully"
Feb  8 23:43:42.726831 kubelet[1457]: E0208 23:43:42.726679    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:42.982292 systemd-networkd[1022]: lxc5bc50c0906fb: Gained IPv6LL
Feb  8 23:43:43.300273 kubelet[1457]: I0208 23:43:43.300121    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372017554762e+09 pod.CreationTimestamp="2024-02-08 23:43:24 +0000 UTC" firstStartedPulling="2024-02-08 23:43:41.670897367 +0000 UTC m=+83.741671686" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:43.29960864 +0000 UTC m=+85.370383029" watchObservedRunningTime="2024-02-08 23:43:43.300014624 +0000 UTC m=+85.370788983"
Feb  8 23:43:43.727259 kubelet[1457]: E0208 23:43:43.727164    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:44.727846 kubelet[1457]: E0208 23:43:44.727739    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:45.728915 kubelet[1457]: E0208 23:43:45.728836    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:46.730078 kubelet[1457]: E0208 23:43:46.729951    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:47.731244 kubelet[1457]: E0208 23:43:47.731164    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:48.732358 kubelet[1457]: E0208 23:43:48.732299    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:49.733265 kubelet[1457]: E0208 23:43:49.733141    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:50.733805 kubelet[1457]: E0208 23:43:50.733720    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:51.734521 kubelet[1457]: E0208 23:43:51.734447    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:51.950418 systemd[1]: run-containerd-runc-k8s.io-50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841-runc.zhkvki.mount: Deactivated successfully.
Feb  8 23:43:52.001346 env[1133]: time="2024-02-08T23:43:52.000097648Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  8 23:43:52.011895 env[1133]: time="2024-02-08T23:43:52.011833135Z" level=info msg="StopContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" with timeout 1 (s)"
Feb  8 23:43:52.012608 env[1133]: time="2024-02-08T23:43:52.012563178Z" level=info msg="Stop container \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" with signal terminated"
Feb  8 23:43:52.025545 systemd-networkd[1022]: lxc_health: Link DOWN
Feb  8 23:43:52.025556 systemd-networkd[1022]: lxc_health: Lost carrier
Feb  8 23:43:52.095335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841-rootfs.mount: Deactivated successfully.
Feb  8 23:43:52.110899 env[1133]: time="2024-02-08T23:43:52.110811383Z" level=info msg="shim disconnected" id=50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841
Feb  8 23:43:52.111261 env[1133]: time="2024-02-08T23:43:52.111221093Z" level=warning msg="cleaning up after shim disconnected" id=50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841 namespace=k8s.io
Feb  8 23:43:52.111413 env[1133]: time="2024-02-08T23:43:52.111382397Z" level=info msg="cleaning up dead shim"
Feb  8 23:43:52.125636 env[1133]: time="2024-02-08T23:43:52.125573573Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3011 runtime=io.containerd.runc.v2\n"
Feb  8 23:43:52.130191 env[1133]: time="2024-02-08T23:43:52.130132289Z" level=info msg="StopContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" returns successfully"
Feb  8 23:43:52.131072 env[1133]: time="2024-02-08T23:43:52.131030368Z" level=info msg="StopPodSandbox for \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\""
Feb  8 23:43:52.131202 env[1133]: time="2024-02-08T23:43:52.131098688Z" level=info msg="Container to stop \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:43:52.131202 env[1133]: time="2024-02-08T23:43:52.131117353Z" level=info msg="Container to stop \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:43:52.131202 env[1133]: time="2024-02-08T23:43:52.131134835Z" level=info msg="Container to stop \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:43:52.131202 env[1133]: time="2024-02-08T23:43:52.131149843Z" level=info msg="Container to stop \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:43:52.131202 env[1133]: time="2024-02-08T23:43:52.131163719Z" level=info msg="Container to stop \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:43:52.133234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf-shm.mount: Deactivated successfully.
Feb  8 23:43:52.167568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf-rootfs.mount: Deactivated successfully.
Feb  8 23:43:52.175301 env[1133]: time="2024-02-08T23:43:52.175260555Z" level=info msg="shim disconnected" id=e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf
Feb  8 23:43:52.175483 env[1133]: time="2024-02-08T23:43:52.175465881Z" level=warning msg="cleaning up after shim disconnected" id=e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf namespace=k8s.io
Feb  8 23:43:52.175546 env[1133]: time="2024-02-08T23:43:52.175532636Z" level=info msg="cleaning up dead shim"
Feb  8 23:43:52.185179 env[1133]: time="2024-02-08T23:43:52.185124049Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\n"
Feb  8 23:43:52.185515 env[1133]: time="2024-02-08T23:43:52.185477194Z" level=info msg="TearDown network for sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" successfully"
Feb  8 23:43:52.185558 env[1133]: time="2024-02-08T23:43:52.185511548Z" level=info msg="StopPodSandbox for \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" returns successfully"
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284522    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hubble-tls\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284616    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-run\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284666    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hostproc\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284720    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-xtables-lock\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284847    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-etc-cni-netd\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.287996 kubelet[1457]: I0208 23:43:52.284906    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-lib-modules\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.288639 kubelet[1457]: I0208 23:43:52.285021    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8rrm\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-kube-api-access-g8rrm\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.288639 kubelet[1457]: I0208 23:43:52.285073    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-bpf-maps\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.288639 kubelet[1457]: I0208 23:43:52.285128    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-net\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.288639 kubelet[1457]: I0208 23:43:52.285061    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.288639 kubelet[1457]: I0208 23:43:52.285214    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.289076 kubelet[1457]: I0208 23:43:52.285258    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hostproc" (OuterVolumeSpecName: "hostproc") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.289076 kubelet[1457]: I0208 23:43:52.285300    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.289076 kubelet[1457]: I0208 23:43:52.285338    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.289076 kubelet[1457]: I0208 23:43:52.285374    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.289076 kubelet[1457]: I0208 23:43:52.285179    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-kernel\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286153    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-cgroup\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286266    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-clustermesh-secrets\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286445    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cni-path\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286554    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-config-path\") pod \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\" (UID: \"369b9b3c-dbdd-43f3-a6ab-3817da0a96cd\") "
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286663    1457 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-etc-cni-netd\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286729    1457 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-lib-modules\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.289421 kubelet[1457]: I0208 23:43:52.286760    1457 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hostproc\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.292018 kubelet[1457]: I0208 23:43:52.286828    1457 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-xtables-lock\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.292018 kubelet[1457]: I0208 23:43:52.286863    1457 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-kernel\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.292018 kubelet[1457]: I0208 23:43:52.286928    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-run\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.292018 kubelet[1457]: W0208 23:43:52.287379    1457 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  8 23:43:52.292018 kubelet[1457]: I0208 23:43:52.291740    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.292018 kubelet[1457]: I0208 23:43:52.291864    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.293146 kubelet[1457]: I0208 23:43:52.292611    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.296048 kubelet[1457]: I0208 23:43:52.295947    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  8 23:43:52.297150 kubelet[1457]: I0208 23:43:52.297071    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cni-path" (OuterVolumeSpecName: "cni-path") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:43:52.297407 kubelet[1457]: I0208 23:43:52.297348    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-kube-api-access-g8rrm" (OuterVolumeSpecName: "kube-api-access-g8rrm") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "kube-api-access-g8rrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:43:52.314402 kubelet[1457]: I0208 23:43:52.314337    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:43:52.315087 kubelet[1457]: I0208 23:43:52.315047    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" (UID: "369b9b3c-dbdd-43f3-a6ab-3817da0a96cd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:43:52.322377 kubelet[1457]: I0208 23:43:52.322312    1457 scope.go:115] "RemoveContainer" containerID="50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841"
Feb  8 23:43:52.328525 env[1133]: time="2024-02-08T23:43:52.327892250Z" level=info msg="RemoveContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\""
Feb  8 23:43:52.351417 env[1133]: time="2024-02-08T23:43:52.351338437Z" level=info msg="RemoveContainer for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" returns successfully"
Feb  8 23:43:52.352186 kubelet[1457]: I0208 23:43:52.352110    1457 scope.go:115] "RemoveContainer" containerID="e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b"
Feb  8 23:43:52.355224 env[1133]: time="2024-02-08T23:43:52.355162270Z" level=info msg="RemoveContainer for \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\""
Feb  8 23:43:52.373409 env[1133]: time="2024-02-08T23:43:52.373292647Z" level=info msg="RemoveContainer for \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\" returns successfully"
Feb  8 23:43:52.375844 kubelet[1457]: I0208 23:43:52.374257    1457 scope.go:115] "RemoveContainer" containerID="d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334"
Feb  8 23:43:52.378844 env[1133]: time="2024-02-08T23:43:52.378728033Z" level=info msg="RemoveContainer for \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\""
Feb  8 23:43:52.387170 kubelet[1457]: I0208 23:43:52.387103    1457 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-g8rrm\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-kube-api-access-g8rrm\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387170 kubelet[1457]: I0208 23:43:52.387139    1457 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-bpf-maps\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387170 kubelet[1457]: I0208 23:43:52.387156    1457 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-host-proc-sys-net\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387170 kubelet[1457]: I0208 23:43:52.387170    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-cgroup\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387170 kubelet[1457]: I0208 23:43:52.387185    1457 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-clustermesh-secrets\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387607 kubelet[1457]: I0208 23:43:52.387199    1457 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cni-path\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387607 kubelet[1457]: I0208 23:43:52.387214    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-cilium-config-path\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.387607 kubelet[1457]: I0208 23:43:52.387228    1457 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd-hubble-tls\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:43:52.389917 env[1133]: time="2024-02-08T23:43:52.389851077Z" level=info msg="RemoveContainer for \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\" returns successfully"
Feb  8 23:43:52.390368 kubelet[1457]: I0208 23:43:52.390322    1457 scope.go:115] "RemoveContainer" containerID="c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39"
Feb  8 23:43:52.391880 env[1133]: time="2024-02-08T23:43:52.391763074Z" level=info msg="RemoveContainer for \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\""
Feb  8 23:43:52.399158 env[1133]: time="2024-02-08T23:43:52.399104725Z" level=info msg="RemoveContainer for \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\" returns successfully"
Feb  8 23:43:52.399549 kubelet[1457]: I0208 23:43:52.399510    1457 scope.go:115] "RemoveContainer" containerID="e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08"
Feb  8 23:43:52.401078 env[1133]: time="2024-02-08T23:43:52.401028354Z" level=info msg="RemoveContainer for \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\""
Feb  8 23:43:52.408999 env[1133]: time="2024-02-08T23:43:52.408905352Z" level=info msg="RemoveContainer for \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\" returns successfully"
Feb  8 23:43:52.409535 kubelet[1457]: I0208 23:43:52.409484    1457 scope.go:115] "RemoveContainer" containerID="50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841"
Feb  8 23:43:52.409892 env[1133]: time="2024-02-08T23:43:52.409736766Z" level=error msg="ContainerStatus for \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\": not found"
Feb  8 23:43:52.410664 kubelet[1457]: E0208 23:43:52.410627    1457 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\": not found" containerID="50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841"
Feb  8 23:43:52.410846 kubelet[1457]: I0208 23:43:52.410689    1457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841} err="failed to get container status \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\": rpc error: code = NotFound desc = an error occurred when try to find container \"50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841\": not found"
Feb  8 23:43:52.410846 kubelet[1457]: I0208 23:43:52.410705    1457 scope.go:115] "RemoveContainer" containerID="e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b"
Feb  8 23:43:52.411507 env[1133]: time="2024-02-08T23:43:52.411370449Z" level=error msg="ContainerStatus for \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\": not found"
Feb  8 23:43:52.411878 kubelet[1457]: E0208 23:43:52.411843    1457 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\": not found" containerID="e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b"
Feb  8 23:43:52.412009 kubelet[1457]: I0208 23:43:52.411892    1457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b} err="failed to get container status \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6a31adec045b4a36aefc936187926ae6131b8bbef8cc28cfe7c502e76b9b25b\": not found"
Feb  8 23:43:52.412009 kubelet[1457]: I0208 23:43:52.411904    1457 scope.go:115] "RemoveContainer" containerID="d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334"
Feb  8 23:43:52.412454 env[1133]: time="2024-02-08T23:43:52.412356865Z" level=error msg="ContainerStatus for \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\": not found"
Feb  8 23:43:52.412815 kubelet[1457]: E0208 23:43:52.412728    1457 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\": not found" containerID="d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334"
Feb  8 23:43:52.412815 kubelet[1457]: I0208 23:43:52.412787    1457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334} err="failed to get container status \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\": rpc error: code = NotFound desc = an error occurred when try to find container \"d706c0ccdc42796330fc59f197e9b738f50a16df669cacb8a674ff42eef62334\": not found"
Feb  8 23:43:52.412815 kubelet[1457]: I0208 23:43:52.412801    1457 scope.go:115] "RemoveContainer" containerID="c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39"
Feb  8 23:43:52.413454 env[1133]: time="2024-02-08T23:43:52.413352959Z" level=error msg="ContainerStatus for \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\": not found"
Feb  8 23:43:52.413767 kubelet[1457]: E0208 23:43:52.413744    1457 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\": not found" containerID="c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39"
Feb  8 23:43:52.413767 kubelet[1457]: I0208 23:43:52.413807    1457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39} err="failed to get container status \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\": rpc error: code = NotFound desc = an error occurred when try to find container \"c943e3575d1ba8dc7dee5ce9991d977d4995071c05233f2f4bb04ecad7ecaa39\": not found"
Feb  8 23:43:52.413767 kubelet[1457]: I0208 23:43:52.413820    1457 scope.go:115] "RemoveContainer" containerID="e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08"
Feb  8 23:43:52.414399 env[1133]: time="2024-02-08T23:43:52.414303527Z" level=error msg="ContainerStatus for \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\": not found"
Feb  8 23:43:52.414707 kubelet[1457]: E0208 23:43:52.414678    1457 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\": not found" containerID="e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08"
Feb  8 23:43:52.414907 kubelet[1457]: I0208 23:43:52.414721    1457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08} err="failed to get container status \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\": rpc error: code = NotFound desc = an error occurred when try to find container \"e02e35f5b2b173925fcb5e58701d9abab39d85572857090a29d56958a8dedf08\": not found"
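The five error/NotFound pairs above are benign: the kubelet revisits the container IDs it has just removed, asks the runtime for their status, containerd answers NotFound because the containers are already gone, and the kubelet records the error and treats them as deleted. Below is a minimal Go sketch of that idempotent-delete pattern; the runtimeClient interface and the fake runtime are illustrative stand-ins for the CRI RuntimeService, not the kubelet's actual code.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient models only the two runtime calls visible in the log:
// a status query and a removal. The real kubelet talks to the CRI
// RuntimeService over gRPC; this interface is purely illustrative.
type runtimeClient interface {
	ContainerStatus(ctx context.Context, id string) error
	RemoveContainer(ctx context.Context, id string) error
}

// removeContainer treats a gRPC NotFound as success, which is why the log
// above shows "DeleteContainer returned error ... not found" and the
// cleanup still completes.
func removeContainer(ctx context.Context, rt runtimeClient, id string) error {
	if err := rt.ContainerStatus(ctx, id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already removed, nothing to do\n", id)
			return nil
		}
		return fmt.Errorf("status of %s: %w", id, err)
	}
	return rt.RemoveContainer(ctx, id)
}

// gone is a fake runtime whose containers have all been deleted, mimicking
// the state containerd was in at 23:43:52. The error text is copied
// verbatim from the log.
type gone struct{}

func (gone) ContainerStatus(context.Context, string) error {
	return status.Error(codes.NotFound, "an error occurred when try to find container")
}
func (gone) RemoveContainer(context.Context, string) error { return nil }

func main() {
	id := "50aa04708313c8e81a08883ea3b0467cb9c560606f820c764d6e0440dc94d841"
	if err := removeContainer(context.Background(), gone{}, id); err != nil {
		fmt.Println("unexpected:", err)
	}
}
```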
Feb  8 23:43:52.734923 kubelet[1457]: E0208 23:43:52.734864    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:52.896485 kubelet[1457]: I0208 23:43:52.896432    1457 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=369b9b3c-dbdd-43f3-a6ab-3817da0a96cd path="/var/lib/kubelet/pods/369b9b3c-dbdd-43f3-a6ab-3817da0a96cd/volumes"
Feb  8 23:43:52.934960 systemd[1]: var-lib-kubelet-pods-369b9b3c\x2ddbdd\x2d43f3\x2da6ab\x2d3817da0a96cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8rrm.mount: Deactivated successfully.
Feb  8 23:43:52.935308 systemd[1]: var-lib-kubelet-pods-369b9b3c\x2ddbdd\x2d43f3\x2da6ab\x2d3817da0a96cd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:43:52.935550 systemd[1]: var-lib-kubelet-pods-369b9b3c\x2ddbdd\x2d43f3\x2da6ab\x2d3817da0a96cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  8 23:43:53.736489 kubelet[1457]: E0208 23:43:53.736386    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:53.833403 kubelet[1457]: E0208 23:43:53.833351    1457 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  8 23:43:54.737631 kubelet[1457]: E0208 23:43:54.737523    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:55.738282 kubelet[1457]: E0208 23:43:55.738230    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:56.739227 kubelet[1457]: E0208 23:43:56.739162    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
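The recurring "Unable to read config path" error threaded through this log is the kubelet's static-pod file source polling a staticPodPath (/etc/kubernetes/manifests, per the message) that does not exist on this node. It is harmless but noisy; creating the directory, or clearing staticPodPath in the kubelet configuration, silences it. A minimal sketch, assuming the path from the log and root privileges on the node:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet warning; run as root on the node.
	const manifests = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(manifests, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("created", manifests, "- the warning should stop on the next poll")
}
```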
Feb  8 23:43:57.230407 kubelet[1457]: I0208 23:43:57.230273    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:43:57.230693 kubelet[1457]: E0208 23:43:57.230508    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="mount-cgroup"
Feb  8 23:43:57.230693 kubelet[1457]: E0208 23:43:57.230591    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="apply-sysctl-overwrites"
Feb  8 23:43:57.230693 kubelet[1457]: E0208 23:43:57.230614    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="mount-bpf-fs"
Feb  8 23:43:57.230980 kubelet[1457]: E0208 23:43:57.230695    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="clean-cilium-state"
Feb  8 23:43:57.230980 kubelet[1457]: E0208 23:43:57.230771    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="cilium-agent"
Feb  8 23:43:57.230980 kubelet[1457]: I0208 23:43:57.230929    1457 memory_manager.go:346] "RemoveStaleState removing state" podUID="369b9b3c-dbdd-43f3-a6ab-3817da0a96cd" containerName="cilium-agent"
Feb  8 23:43:57.240976 kubelet[1457]: I0208 23:43:57.240907    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:43:57.325893 kubelet[1457]: I0208 23:43:57.325844    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-etc-cni-netd\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.326249 kubelet[1457]: I0208 23:43:57.326224    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-clustermesh-secrets\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.326476 kubelet[1457]: I0208 23:43:57.326452    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-config-path\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.326702 kubelet[1457]: I0208 23:43:57.326678    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-ipsec-secrets\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.326967 kubelet[1457]: I0208 23:43:57.326944    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-run\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.327180 kubelet[1457]: I0208 23:43:57.327158    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-cgroup\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.327369 kubelet[1457]: I0208 23:43:57.327349    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-lib-modules\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.327574 kubelet[1457]: I0208 23:43:57.327554    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f976\" (UniqueName: \"kubernetes.io/projected/7b61b636-2f3b-49ec-81a1-68fd2652485a-kube-api-access-5f976\") pod \"cilium-operator-f59cbd8c6-64xnf\" (UID: \"7b61b636-2f3b-49ec-81a1-68fd2652485a\") " pod="kube-system/cilium-operator-f59cbd8c6-64xnf"
Feb  8 23:43:57.327810 kubelet[1457]: I0208 23:43:57.327753    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-bpf-maps\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.328031 kubelet[1457]: I0208 23:43:57.328008    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-xtables-lock\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.328231 kubelet[1457]: I0208 23:43:57.328210    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4tg\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-kube-api-access-wj4tg\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.328427 kubelet[1457]: I0208 23:43:57.328406    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-hostproc\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.328630 kubelet[1457]: I0208 23:43:57.328606    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cni-path\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.328872 kubelet[1457]: I0208 23:43:57.328848    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-kernel\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.329216 kubelet[1457]: I0208 23:43:57.329174    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-hubble-tls\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.329487 kubelet[1457]: I0208 23:43:57.329453    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7b61b636-2f3b-49ec-81a1-68fd2652485a-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-64xnf\" (UID: \"7b61b636-2f3b-49ec-81a1-68fd2652485a\") " pod="kube-system/cilium-operator-f59cbd8c6-64xnf"
Feb  8 23:43:57.329703 kubelet[1457]: I0208 23:43:57.329681    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-net\") pod \"cilium-wvknm\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") " pod="kube-system/cilium-wvknm"
Feb  8 23:43:57.550881 env[1133]: time="2024-02-08T23:43:57.550674008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvknm,Uid:bd144eda-dba8-499c-9395-962159c6f2fa,Namespace:kube-system,Attempt:0,}"
Feb  8 23:43:57.556950 env[1133]: time="2024-02-08T23:43:57.556408462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-64xnf,Uid:7b61b636-2f3b-49ec-81a1-68fd2652485a,Namespace:kube-system,Attempt:0,}"
Feb  8 23:43:57.614338 env[1133]: time="2024-02-08T23:43:57.614209066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:43:57.614762 env[1133]: time="2024-02-08T23:43:57.614658381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:43:57.615216 env[1133]: time="2024-02-08T23:43:57.615151267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:43:57.615865 env[1133]: time="2024-02-08T23:43:57.615754973Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be pid=3072 runtime=io.containerd.runc.v2
Feb  8 23:43:57.627540 env[1133]: time="2024-02-08T23:43:57.627389562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:43:57.627540 env[1133]: time="2024-02-08T23:43:57.627475033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:43:57.628177 env[1133]: time="2024-02-08T23:43:57.627501502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:43:57.628177 env[1133]: time="2024-02-08T23:43:57.627938375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f266dae12325786d4d366fb4afb1ca55b6704aa5b5391f9c943edb8f1ddb92c pid=3086 runtime=io.containerd.runc.v2
Feb  8 23:43:57.697486 env[1133]: time="2024-02-08T23:43:57.697401421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvknm,Uid:bd144eda-dba8-499c-9395-962159c6f2fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\""
Feb  8 23:43:57.707027 env[1133]: time="2024-02-08T23:43:57.706961719Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:43:57.722925 env[1133]: time="2024-02-08T23:43:57.722877569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-64xnf,Uid:7b61b636-2f3b-49ec-81a1-68fd2652485a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f266dae12325786d4d366fb4afb1ca55b6704aa5b5391f9c943edb8f1ddb92c\""
Feb  8 23:43:57.724948 env[1133]: time="2024-02-08T23:43:57.724893239Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  8 23:43:57.740229 kubelet[1457]: E0208 23:43:57.740194    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:57.759922 env[1133]: time="2024-02-08T23:43:57.759874392Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\""
Feb  8 23:43:57.761086 env[1133]: time="2024-02-08T23:43:57.761060753Z" level=info msg="StartContainer for \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\""
Feb  8 23:43:57.881772 env[1133]: time="2024-02-08T23:43:57.881675772Z" level=info msg="StartContainer for \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\" returns successfully"
Feb  8 23:43:57.998617 env[1133]: time="2024-02-08T23:43:57.998527374Z" level=info msg="shim disconnected" id=6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8
Feb  8 23:43:57.999145 env[1133]: time="2024-02-08T23:43:57.999055597Z" level=warning msg="cleaning up after shim disconnected" id=6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8 namespace=k8s.io
Feb  8 23:43:57.999370 env[1133]: time="2024-02-08T23:43:57.999333570Z" level=info msg="cleaning up dead shim"
Feb  8 23:43:58.020891 env[1133]: time="2024-02-08T23:43:58.020811055Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3201 runtime=io.containerd.runc.v2\n"
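Lines 23:43:57.550 through 23:43:58.020 above are one pass through the CRI container lifecycle for the new cilium-wvknm pod: RunPodSandbox returns sandbox efaaf0e0..., CreateContainer places the mount-cgroup init container inside it, StartContainer runs it, and once that short-lived container exits containerd reaps its runc shim (the "shim disconnected" cleanup). The Go sketch below shows the same call order against containerd's CRI socket using the published k8s.io/cri-api v1 types; the pod metadata is copied from the log, but the image name and the heavily trimmed configs are assumptions, so treat it as an illustration rather than a replay of what the kubelet actually sent.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd serves the CRI on its own socket; the kubelet in this log
	// talks to the same endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox — matches the PodSandboxMetadata printed at 23:43:57.550.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-wvknm",
			Uid:       "bd144eda-dba8-499c-9395-962159c6f2fa",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside that sandbox — the "mount-cgroup" init container.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			// Image name assumed; the log does not print it for this container.
			Image: &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer — after the init container exits, containerd reaps
	// the runc shim, which is the "shim disconnected" cleanup in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
}
```

A real request also carries log paths, mounts and Linux security options in PodSandboxConfig and ContainerConfig; they are omitted here to keep the call ordering visible.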
Feb  8 23:43:58.356961 env[1133]: time="2024-02-08T23:43:58.356810818Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  8 23:43:58.403081 env[1133]: time="2024-02-08T23:43:58.402951460Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\""
Feb  8 23:43:58.404636 env[1133]: time="2024-02-08T23:43:58.404562981Z" level=info msg="StartContainer for \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\""
Feb  8 23:43:58.513968 env[1133]: time="2024-02-08T23:43:58.513803897Z" level=info msg="StartContainer for \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\" returns successfully"
Feb  8 23:43:58.541955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7-rootfs.mount: Deactivated successfully.
Feb  8 23:43:58.556534 env[1133]: time="2024-02-08T23:43:58.556486186Z" level=info msg="shim disconnected" id=4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7
Feb  8 23:43:58.556921 env[1133]: time="2024-02-08T23:43:58.556536200Z" level=warning msg="cleaning up after shim disconnected" id=4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7 namespace=k8s.io
Feb  8 23:43:58.556921 env[1133]: time="2024-02-08T23:43:58.556548122Z" level=info msg="cleaning up dead shim"
Feb  8 23:43:58.567377 env[1133]: time="2024-02-08T23:43:58.567314868Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n"
Feb  8 23:43:58.631435 kubelet[1457]: E0208 23:43:58.631225    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:58.740475 kubelet[1457]: E0208 23:43:58.740353    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:43:58.835354 kubelet[1457]: E0208 23:43:58.835298    1457 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  8 23:43:59.362112 env[1133]: time="2024-02-08T23:43:59.362010701Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  8 23:43:59.394539 env[1133]: time="2024-02-08T23:43:59.394412124Z" level=info msg="CreateContainer within sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\""
Feb  8 23:43:59.399844 env[1133]: time="2024-02-08T23:43:59.396448954Z" level=info msg="StartContainer for \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\""
Feb  8 23:43:59.506348 env[1133]: time="2024-02-08T23:43:59.506308479Z" level=info msg="StartContainer for \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\" returns successfully"
Feb  8 23:43:59.531819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1-rootfs.mount: Deactivated successfully.
Feb  8 23:43:59.539557 env[1133]: time="2024-02-08T23:43:59.539514255Z" level=info msg="shim disconnected" id=a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1
Feb  8 23:43:59.539836 env[1133]: time="2024-02-08T23:43:59.539816773Z" level=warning msg="cleaning up after shim disconnected" id=a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1 namespace=k8s.io
Feb  8 23:43:59.539915 env[1133]: time="2024-02-08T23:43:59.539900501Z" level=info msg="cleaning up dead shim"
Feb  8 23:43:59.548567 env[1133]: time="2024-02-08T23:43:59.548532502Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:43:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3323 runtime=io.containerd.runc.v2\n"
Feb  8 23:43:59.741104 kubelet[1457]: E0208 23:43:59.740840    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:00.096287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077474999.mount: Deactivated successfully.
Feb  8 23:44:00.360761 env[1133]: time="2024-02-08T23:44:00.360657429Z" level=info msg="StopPodSandbox for \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\""
Feb  8 23:44:00.361288 env[1133]: time="2024-02-08T23:44:00.361263168Z" level=info msg="Container to stop \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:44:00.361368 env[1133]: time="2024-02-08T23:44:00.361350271Z" level=info msg="Container to stop \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:44:00.361445 env[1133]: time="2024-02-08T23:44:00.361427266Z" level=info msg="Container to stop \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  8 23:44:00.454850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be-rootfs.mount: Deactivated successfully.
Feb  8 23:44:00.455140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be-shm.mount: Deactivated successfully.
Feb  8 23:44:00.460061 env[1133]: time="2024-02-08T23:44:00.459987916Z" level=info msg="shim disconnected" id=efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be
Feb  8 23:44:00.460174 env[1133]: time="2024-02-08T23:44:00.460072745Z" level=warning msg="cleaning up after shim disconnected" id=efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be namespace=k8s.io
Feb  8 23:44:00.460174 env[1133]: time="2024-02-08T23:44:00.460094185Z" level=info msg="cleaning up dead shim"
Feb  8 23:44:00.492209 env[1133]: time="2024-02-08T23:44:00.492145497Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:44:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3355 runtime=io.containerd.runc.v2\n"
Feb  8 23:44:00.492698 env[1133]: time="2024-02-08T23:44:00.492649364Z" level=info msg="TearDown network for sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" successfully"
Feb  8 23:44:00.492741 env[1133]: time="2024-02-08T23:44:00.492702515Z" level=info msg="StopPodSandbox for \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" returns successfully"
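The stop sequence above mirrors that start-up: StopPodSandbox is issued for sandbox efaaf0e0... while all three of its containers are already CONTAINER_EXITED (hence the informational "must be in running or unknown state" messages), the sandbox shim is reaped, the pod network is torn down, and the call returns. Roughly, the corresponding CRI calls look like the sketch below; RemovePodSandbox does not appear in this log excerpt and is included only as the later garbage-collection step, so it is an assumption here.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox ID taken from the log. StopPodSandbox stops any remaining
	// containers (here they had already exited) and tears down the pod
	// network before returning, as logged at 23:44:00.492.
	const sandboxID = "efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be"
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		panic(err)
	}
	// RemovePodSandbox is not shown in this log; the kubelet issues it later
	// when the pod's resources are garbage-collected.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox", sandboxID, "stopped and removed")
}
```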
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668282    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-kernel\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668332    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-cgroup\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668359    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-bpf-maps\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668382    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-xtables-lock\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668404    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-hostproc\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671019 kubelet[1457]: I0208 23:44:00.668427    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-net\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671612 kubelet[1457]: I0208 23:44:00.668449    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-etc-cni-netd\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671612 kubelet[1457]: I0208 23:44:00.668478    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-config-path\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671612 kubelet[1457]: I0208 23:44:00.668468    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.671612 kubelet[1457]: I0208 23:44:00.668506    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-ipsec-secrets\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671612 kubelet[1457]: I0208 23:44:00.668535    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-hubble-tls\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671992 kubelet[1457]: I0208 23:44:00.668540    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.671992 kubelet[1457]: I0208 23:44:00.668565    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-clustermesh-secrets\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671992 kubelet[1457]: I0208 23:44:00.668589    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-run\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.671992 kubelet[1457]: I0208 23:44:00.668581    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.671992 kubelet[1457]: I0208 23:44:00.668613    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-lib-modules\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.672299 kubelet[1457]: I0208 23:44:00.668621    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.672299 kubelet[1457]: I0208 23:44:00.668641    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4tg\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-kube-api-access-wj4tg\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.672299 kubelet[1457]: I0208 23:44:00.668659    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.672299 kubelet[1457]: I0208 23:44:00.668674    1457 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cni-path\") pod \"bd144eda-dba8-499c-9395-962159c6f2fa\" (UID: \"bd144eda-dba8-499c-9395-962159c6f2fa\") "
Feb  8 23:44:00.672299 kubelet[1457]: I0208 23:44:00.668697    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668724    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668742    1457 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-kernel\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668765    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-cgroup\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668803    1457 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-bpf-maps\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668818    1457 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-xtables-lock\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668836    1457 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-host-proc-sys-net\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.672598 kubelet[1457]: I0208 23:44:00.668849    1457 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cni-path\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.675231 kubelet[1457]: I0208 23:44:00.669450    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.675231 kubelet[1457]: W0208 23:44:00.669855    1457 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/bd144eda-dba8-499c-9395-962159c6f2fa/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  8 23:44:00.675231 kubelet[1457]: I0208 23:44:00.673642    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.675231 kubelet[1457]: I0208 23:44:00.673742    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  8 23:44:00.681942 kubelet[1457]: I0208 23:44:00.681876    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  8 23:44:00.687434 systemd[1]: var-lib-kubelet-pods-bd144eda\x2ddba8\x2d499c\x2d9395\x2d962159c6f2fa-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:44:00.697283 systemd[1]: var-lib-kubelet-pods-bd144eda\x2ddba8\x2d499c\x2d9395\x2d962159c6f2fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  8 23:44:00.700623 kubelet[1457]: I0208 23:44:00.700572    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:44:00.708109 kubelet[1457]: I0208 23:44:00.705388    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  8 23:44:00.707058 systemd[1]: var-lib-kubelet-pods-bd144eda\x2ddba8\x2d499c\x2d9395\x2d962159c6f2fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  8 23:44:00.716500 kubelet[1457]: I0208 23:44:00.714071    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:44:00.715390 systemd[1]: var-lib-kubelet-pods-bd144eda\x2ddba8\x2d499c\x2d9395\x2d962159c6f2fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwj4tg.mount: Deactivated successfully.
Feb  8 23:44:00.718430 kubelet[1457]: I0208 23:44:00.718387    1457 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-kube-api-access-wj4tg" (OuterVolumeSpecName: "kube-api-access-wj4tg") pod "bd144eda-dba8-499c-9395-962159c6f2fa" (UID: "bd144eda-dba8-499c-9395-962159c6f2fa"). InnerVolumeSpecName "kube-api-access-wj4tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  8 23:44:00.741665 kubelet[1457]: E0208 23:44:00.741585    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:00.769831 kubelet[1457]: I0208 23:44:00.769756    1457 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-hostproc\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.770100 kubelet[1457]: I0208 23:44:00.770077    1457 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-etc-cni-netd\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.770406 kubelet[1457]: I0208 23:44:00.770363    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-config-path\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.770582 kubelet[1457]: I0208 23:44:00.770562    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-ipsec-secrets\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.770765 kubelet[1457]: I0208 23:44:00.770746    1457 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-hubble-tls\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.770983 kubelet[1457]: I0208 23:44:00.770963    1457 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bd144eda-dba8-499c-9395-962159c6f2fa-clustermesh-secrets\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.771183 kubelet[1457]: I0208 23:44:00.771162    1457 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-cilium-run\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.771395 kubelet[1457]: I0208 23:44:00.771373    1457 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd144eda-dba8-499c-9395-962159c6f2fa-lib-modules\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:00.771634 kubelet[1457]: I0208 23:44:00.771611    1457 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-wj4tg\" (UniqueName: \"kubernetes.io/projected/bd144eda-dba8-499c-9395-962159c6f2fa-kube-api-access-wj4tg\") on node \"172.24.4.77\" DevicePath \"\""
Feb  8 23:44:01.230835 env[1133]: time="2024-02-08T23:44:01.230658334Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:44:01.232764 env[1133]: time="2024-02-08T23:44:01.232712977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:44:01.235261 env[1133]: time="2024-02-08T23:44:01.235214741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  8 23:44:01.236881 env[1133]: time="2024-02-08T23:44:01.236819868Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb  8 23:44:01.240157 env[1133]: time="2024-02-08T23:44:01.240099013Z" level=info msg="CreateContainer within sandbox \"5f266dae12325786d4d366fb4afb1ca55b6704aa5b5391f9c943edb8f1ddb92c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  8 23:44:01.263846 env[1133]: time="2024-02-08T23:44:01.263708340Z" level=info msg="CreateContainer within sandbox \"5f266dae12325786d4d366fb4afb1ca55b6704aa5b5391f9c943edb8f1ddb92c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0ad79193b08fa740b7f5b0cc5884a75be1162ab99668e2d9fff7dfa1a3387cb2\""
Feb  8 23:44:01.265140 env[1133]: time="2024-02-08T23:44:01.265100257Z" level=info msg="StartContainer for \"0ad79193b08fa740b7f5b0cc5884a75be1162ab99668e2d9fff7dfa1a3387cb2\""
Feb  8 23:44:01.342060 env[1133]: time="2024-02-08T23:44:01.341981417Z" level=info msg="StartContainer for \"0ad79193b08fa740b7f5b0cc5884a75be1162ab99668e2d9fff7dfa1a3387cb2\" returns successfully"
Feb  8 23:44:01.366381 kubelet[1457]: I0208 23:44:01.366348    1457 scope.go:115] "RemoveContainer" containerID="a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1"
Feb  8 23:44:01.386236 env[1133]: time="2024-02-08T23:44:01.386159353Z" level=info msg="RemoveContainer for \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\""
Feb  8 23:44:01.394680 env[1133]: time="2024-02-08T23:44:01.394636191Z" level=info msg="RemoveContainer for \"a3edf2d5e538d67f3254f2cb749cdedf34014f56c19832676cd7b67b498494a1\" returns successfully"
Feb  8 23:44:01.395077 kubelet[1457]: I0208 23:44:01.395054    1457 scope.go:115] "RemoveContainer" containerID="4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7"
Feb  8 23:44:01.396261 env[1133]: time="2024-02-08T23:44:01.396234565Z" level=info msg="RemoveContainer for \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\""
Feb  8 23:44:01.399541 env[1133]: time="2024-02-08T23:44:01.399494745Z" level=info msg="RemoveContainer for \"4e4043b885b9f258fe1d096d4c8b34d7ce32b2e70077b2e3914c5e6da2ce08e7\" returns successfully"
Feb  8 23:44:01.399803 kubelet[1457]: I0208 23:44:01.399755    1457 scope.go:115] "RemoveContainer" containerID="6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8"
Feb  8 23:44:01.401036 env[1133]: time="2024-02-08T23:44:01.400983584Z" level=info msg="RemoveContainer for \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\""
Feb  8 23:44:01.404386 env[1133]: time="2024-02-08T23:44:01.404341817Z" level=info msg="RemoveContainer for \"6f40d28d8c95a264f2df5c0a193845bab75dd78715b19c4f73ad01c17e659dc8\" returns successfully"
Feb  8 23:44:01.460866 kubelet[1457]: I0208 23:44:01.453709    1457 topology_manager.go:210] "Topology Admit Handler"
Feb  8 23:44:01.460866 kubelet[1457]: E0208 23:44:01.453819    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd144eda-dba8-499c-9395-962159c6f2fa" containerName="apply-sysctl-overwrites"
Feb  8 23:44:01.460866 kubelet[1457]: E0208 23:44:01.453837    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd144eda-dba8-499c-9395-962159c6f2fa" containerName="mount-cgroup"
Feb  8 23:44:01.460866 kubelet[1457]: E0208 23:44:01.453849    1457 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd144eda-dba8-499c-9395-962159c6f2fa" containerName="mount-bpf-fs"
Feb  8 23:44:01.460866 kubelet[1457]: I0208 23:44:01.453914    1457 memory_manager.go:346] "RemoveStaleState removing state" podUID="bd144eda-dba8-499c-9395-962159c6f2fa" containerName="mount-bpf-fs"
Feb  8 23:44:01.486073 kubelet[1457]: I0208 23:44:01.485954    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-64xnf" podStartSLOduration=-9.223372032368864e+09 pod.CreationTimestamp="2024-02-08 23:43:57 +0000 UTC" firstStartedPulling="2024-02-08 23:43:57.724135124 +0000 UTC m=+99.794909433" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:44:01.459523502 +0000 UTC m=+103.530297851" watchObservedRunningTime="2024-02-08 23:44:01.485910891 +0000 UTC m=+103.556685210"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589211    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-host-proc-sys-kernel\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589353    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/302261b8-245b-4d7d-a460-9c73a8f073c4-hubble-tls\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589418    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-cilium-run\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589476    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-hostproc\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589542    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/302261b8-245b-4d7d-a460-9c73a8f073c4-clustermesh-secrets\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.590841 kubelet[1457]: I0208 23:44:01.589605    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/302261b8-245b-4d7d-a460-9c73a8f073c4-cilium-config-path\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.589668    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-etc-cni-netd\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.589727    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-lib-modules\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.589821    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-cilium-cgroup\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.589904    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-cni-path\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.589962    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-xtables-lock\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.591548 kubelet[1457]: I0208 23:44:01.590019    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/302261b8-245b-4d7d-a460-9c73a8f073c4-cilium-ipsec-secrets\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.592044 kubelet[1457]: I0208 23:44:01.590075    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-bpf-maps\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.592044 kubelet[1457]: I0208 23:44:01.590140    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/302261b8-245b-4d7d-a460-9c73a8f073c4-host-proc-sys-net\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.592044 kubelet[1457]: I0208 23:44:01.590201    1457 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sgxb\" (UniqueName: \"kubernetes.io/projected/302261b8-245b-4d7d-a460-9c73a8f073c4-kube-api-access-9sgxb\") pod \"cilium-jmhcv\" (UID: \"302261b8-245b-4d7d-a460-9c73a8f073c4\") " pod="kube-system/cilium-jmhcv"
Feb  8 23:44:01.743102 kubelet[1457]: E0208 23:44:01.742903    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:01.761847 env[1133]: time="2024-02-08T23:44:01.761049345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmhcv,Uid:302261b8-245b-4d7d-a460-9c73a8f073c4,Namespace:kube-system,Attempt:0,}"
Feb  8 23:44:01.838405 env[1133]: time="2024-02-08T23:44:01.837858630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  8 23:44:01.838405 env[1133]: time="2024-02-08T23:44:01.837978094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  8 23:44:01.838405 env[1133]: time="2024-02-08T23:44:01.838026236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  8 23:44:01.838901 env[1133]: time="2024-02-08T23:44:01.838567122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64 pid=3425 runtime=io.containerd.runc.v2
Feb  8 23:44:01.910489 env[1133]: time="2024-02-08T23:44:01.910414670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jmhcv,Uid:302261b8-245b-4d7d-a460-9c73a8f073c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\""
Feb  8 23:44:01.913369 env[1133]: time="2024-02-08T23:44:01.913335480Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  8 23:44:01.927267 env[1133]: time="2024-02-08T23:44:01.927158308Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be56853d02a6d67e4fb5c16784ef1380d1cad46c2e59fc1a73eaee20e270fb38\""
Feb  8 23:44:01.927824 env[1133]: time="2024-02-08T23:44:01.927703493Z" level=info msg="StartContainer for \"be56853d02a6d67e4fb5c16784ef1380d1cad46c2e59fc1a73eaee20e270fb38\""
Feb  8 23:44:01.985692 env[1133]: time="2024-02-08T23:44:01.985623445Z" level=info msg="StartContainer for \"be56853d02a6d67e4fb5c16784ef1380d1cad46c2e59fc1a73eaee20e270fb38\" returns successfully"
Feb  8 23:44:02.023908 env[1133]: time="2024-02-08T23:44:02.023489283Z" level=info msg="shim disconnected" id=be56853d02a6d67e4fb5c16784ef1380d1cad46c2e59fc1a73eaee20e270fb38
Feb  8 23:44:02.023908 env[1133]: time="2024-02-08T23:44:02.023539377Z" level=warning msg="cleaning up after shim disconnected" id=be56853d02a6d67e4fb5c16784ef1380d1cad46c2e59fc1a73eaee20e270fb38 namespace=k8s.io
Feb  8 23:44:02.023908 env[1133]: time="2024-02-08T23:44:02.023551770Z" level=info msg="cleaning up dead shim"
Feb  8 23:44:02.033735 env[1133]: time="2024-02-08T23:44:02.033677766Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3511 runtime=io.containerd.runc.v2\n"
Feb  8 23:44:02.396528 env[1133]: time="2024-02-08T23:44:02.396434998Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  8 23:44:02.427690 env[1133]: time="2024-02-08T23:44:02.427489028Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e\""
Feb  8 23:44:02.431733 env[1133]: time="2024-02-08T23:44:02.430010207Z" level=info msg="StartContainer for \"6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e\""
Feb  8 23:44:02.492362 systemd[1]: run-containerd-runc-k8s.io-6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e-runc.dpPkyr.mount: Deactivated successfully.
Feb  8 23:44:02.523582 env[1133]: time="2024-02-08T23:44:02.523545398Z" level=info msg="StartContainer for \"6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e\" returns successfully"
Feb  8 23:44:02.585609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e-rootfs.mount: Deactivated successfully.
Feb  8 23:44:02.593507 env[1133]: time="2024-02-08T23:44:02.593393488Z" level=info msg="shim disconnected" id=6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e
Feb  8 23:44:02.593507 env[1133]: time="2024-02-08T23:44:02.593491483Z" level=warning msg="cleaning up after shim disconnected" id=6869fe8c6a00774359f816c889023697bb13dabdfc4319e9871e9bac8d41528e namespace=k8s.io
Feb  8 23:44:02.593960 env[1133]: time="2024-02-08T23:44:02.593517262Z" level=info msg="cleaning up dead shim"
Feb  8 23:44:02.611389 env[1133]: time="2024-02-08T23:44:02.611283140Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:44:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3576 runtime=io.containerd.runc.v2\n"
Feb  8 23:44:02.744663 kubelet[1457]: E0208 23:44:02.743598    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:02.896133 kubelet[1457]: I0208 23:44:02.896076    1457 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bd144eda-dba8-499c-9395-962159c6f2fa path="/var/lib/kubelet/pods/bd144eda-dba8-499c-9395-962159c6f2fa/volumes"
Feb  8 23:44:03.288629 kubelet[1457]: I0208 23:44:03.288523    1457 setters.go:548] "Node became not ready" node="172.24.4.77" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:44:03.288292326 +0000 UTC m=+105.359066685 LastTransitionTime:2024-02-08 23:44:03.288292326 +0000 UTC m=+105.359066685 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb  8 23:44:03.400110 env[1133]: time="2024-02-08T23:44:03.400024436Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  8 23:44:03.453198 env[1133]: time="2024-02-08T23:44:03.450006864Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751\""
Feb  8 23:44:03.453858 env[1133]: time="2024-02-08T23:44:03.453732236Z" level=info msg="StartContainer for \"ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751\""
Feb  8 23:44:03.455663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227137752.mount: Deactivated successfully.
Feb  8 23:44:03.545244 env[1133]: time="2024-02-08T23:44:03.544952937Z" level=info msg="StartContainer for \"ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751\" returns successfully"
Feb  8 23:44:03.574443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751-rootfs.mount: Deactivated successfully.
Feb  8 23:44:03.582547 env[1133]: time="2024-02-08T23:44:03.582482146Z" level=info msg="shim disconnected" id=ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751
Feb  8 23:44:03.582836 env[1133]: time="2024-02-08T23:44:03.582810843Z" level=warning msg="cleaning up after shim disconnected" id=ad52f5d9273fd2465efb7ba8ddd9e1b4268ff8917f5dc6246e4c66dbc3fcc751 namespace=k8s.io
Feb  8 23:44:03.582952 env[1133]: time="2024-02-08T23:44:03.582933945Z" level=info msg="cleaning up dead shim"
Feb  8 23:44:03.592106 env[1133]: time="2024-02-08T23:44:03.592065029Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:44:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3635 runtime=io.containerd.runc.v2\n"
Feb  8 23:44:03.744252 kubelet[1457]: E0208 23:44:03.744152    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:03.838429 kubelet[1457]: E0208 23:44:03.837648    1457 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  8 23:44:04.408346 env[1133]: time="2024-02-08T23:44:04.408249588Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  8 23:44:04.439755 env[1133]: time="2024-02-08T23:44:04.439627592Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9\""
Feb  8 23:44:04.443764 env[1133]: time="2024-02-08T23:44:04.443675451Z" level=info msg="StartContainer for \"5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9\""
Feb  8 23:44:04.544169 env[1133]: time="2024-02-08T23:44:04.543889770Z" level=info msg="StartContainer for \"5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9\" returns successfully"
Feb  8 23:44:04.561907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9-rootfs.mount: Deactivated successfully.
Feb  8 23:44:04.567436 env[1133]: time="2024-02-08T23:44:04.567397057Z" level=info msg="shim disconnected" id=5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9
Feb  8 23:44:04.567573 env[1133]: time="2024-02-08T23:44:04.567553571Z" level=warning msg="cleaning up after shim disconnected" id=5be1ae4860e252ccfa7481ce0ca7f59b568f16401c64adf5de0866a17a3e18d9 namespace=k8s.io
Feb  8 23:44:04.567654 env[1133]: time="2024-02-08T23:44:04.567640544Z" level=info msg="cleaning up dead shim"
Feb  8 23:44:04.576742 env[1133]: time="2024-02-08T23:44:04.576696478Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:44:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n"
Feb  8 23:44:04.745630 kubelet[1457]: E0208 23:44:04.744846    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:05.417634 env[1133]: time="2024-02-08T23:44:05.417499784Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  8 23:44:05.455713 env[1133]: time="2024-02-08T23:44:05.455564683Z" level=info msg="CreateContainer within sandbox \"48e3d3381ff00978dafc18d62967e64b309b011b42361e9381eb04e687d97e64\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d\""
Feb  8 23:44:05.457325 env[1133]: time="2024-02-08T23:44:05.457236255Z" level=info msg="StartContainer for \"2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d\""
Feb  8 23:44:05.513485 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.PWpn3x.mount: Deactivated successfully.
Feb  8 23:44:05.552092 env[1133]: time="2024-02-08T23:44:05.552033699Z" level=info msg="StartContainer for \"2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d\" returns successfully"
Feb  8 23:44:05.746113 kubelet[1457]: E0208 23:44:05.745458    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:06.295802 kernel: cryptd: max_cpu_qlen set to 1000
Feb  8 23:44:06.354855 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb  8 23:44:06.453907 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.ZdJ73j.mount: Deactivated successfully.
Feb  8 23:44:06.745851 kubelet[1457]: E0208 23:44:06.745719    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:07.747325 kubelet[1457]: E0208 23:44:07.747247    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:07.990401 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.swWx8L.mount: Deactivated successfully.
Feb  8 23:44:08.747875 kubelet[1457]: E0208 23:44:08.747839    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:09.602278 systemd-networkd[1022]: lxc_health: Link UP
Feb  8 23:44:09.608355 systemd-networkd[1022]: lxc_health: Gained carrier
Feb  8 23:44:09.608990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  8 23:44:09.749321 kubelet[1457]: E0208 23:44:09.749258    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:09.790222 kubelet[1457]: I0208 23:44:09.790171    1457 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jmhcv" podStartSLOduration=8.790136101 pod.CreationTimestamp="2024-02-08 23:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:44:06.458401777 +0000 UTC m=+108.529176106" watchObservedRunningTime="2024-02-08 23:44:09.790136101 +0000 UTC m=+111.860910410"
Feb  8 23:44:10.236753 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.1abtdR.mount: Deactivated successfully.
Feb  8 23:44:10.750600 kubelet[1457]: E0208 23:44:10.750490    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:11.427016 systemd-networkd[1022]: lxc_health: Gained IPv6LL
Feb  8 23:44:11.751878 kubelet[1457]: E0208 23:44:11.751715    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:12.509297 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.5zFtw3.mount: Deactivated successfully.
Feb  8 23:44:12.752145 kubelet[1457]: E0208 23:44:12.752089    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:13.752592 kubelet[1457]: E0208 23:44:13.752386    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:14.753135 kubelet[1457]: E0208 23:44:14.753072    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:14.755212 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.MeqiFq.mount: Deactivated successfully.
Feb  8 23:44:15.753565 kubelet[1457]: E0208 23:44:15.753415    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:16.754311 kubelet[1457]: E0208 23:44:16.754179    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:17.045107 systemd[1]: run-containerd-runc-k8s.io-2cc28e7c52adcbc9b9bb2a7173dca3a91221564b3fc4c7e0bd2d092b1398443d-runc.t9KhXG.mount: Deactivated successfully.
Feb  8 23:44:17.755141 kubelet[1457]: E0208 23:44:17.755080    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:18.630578 kubelet[1457]: E0208 23:44:18.630512    1457 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:18.679491 env[1133]: time="2024-02-08T23:44:18.679393350Z" level=info msg="StopPodSandbox for \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\""
Feb  8 23:44:18.680381 env[1133]: time="2024-02-08T23:44:18.679576854Z" level=info msg="TearDown network for sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" successfully"
Feb  8 23:44:18.680381 env[1133]: time="2024-02-08T23:44:18.679657677Z" level=info msg="StopPodSandbox for \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" returns successfully"
Feb  8 23:44:18.681318 env[1133]: time="2024-02-08T23:44:18.681237084Z" level=info msg="RemovePodSandbox for \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\""
Feb  8 23:44:18.681478 env[1133]: time="2024-02-08T23:44:18.681311794Z" level=info msg="Forcibly stopping sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\""
Feb  8 23:44:18.681560 env[1133]: time="2024-02-08T23:44:18.681470852Z" level=info msg="TearDown network for sandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" successfully"
Feb  8 23:44:18.688348 env[1133]: time="2024-02-08T23:44:18.688278852Z" level=info msg="RemovePodSandbox \"efaaf0e0807a820171a79a96728ee115e87ca1fba47dbc3dce322560943ee7be\" returns successfully"
Feb  8 23:44:18.690161 env[1133]: time="2024-02-08T23:44:18.690076589Z" level=info msg="StopPodSandbox for \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\""
Feb  8 23:44:18.690359 env[1133]: time="2024-02-08T23:44:18.690255336Z" level=info msg="TearDown network for sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" successfully"
Feb  8 23:44:18.690463 env[1133]: time="2024-02-08T23:44:18.690348200Z" level=info msg="StopPodSandbox for \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" returns successfully"
Feb  8 23:44:18.691633 env[1133]: time="2024-02-08T23:44:18.691559125Z" level=info msg="RemovePodSandbox for \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\""
Feb  8 23:44:18.691760 env[1133]: time="2024-02-08T23:44:18.691666075Z" level=info msg="Forcibly stopping sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\""
Feb  8 23:44:18.692012 env[1133]: time="2024-02-08T23:44:18.691942144Z" level=info msg="TearDown network for sandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" successfully"
Feb  8 23:44:18.699834 env[1133]: time="2024-02-08T23:44:18.699620108Z" level=info msg="RemovePodSandbox \"e02e7225ac89d563c172f229b94c727a4bd17d5498977c011e9df1baee866adf\" returns successfully"
Feb  8 23:44:18.755523 kubelet[1457]: E0208 23:44:18.755427    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:19.757128 kubelet[1457]: E0208 23:44:19.757039    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:20.758613 kubelet[1457]: E0208 23:44:20.758423    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:21.759517 kubelet[1457]: E0208 23:44:21.759438    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  8 23:44:22.761154 kubelet[1457]: E0208 23:44:22.761084    1457 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"