Feb  9 18:55:33.797198 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb  9 18:55:33.797223 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 18:55:33.797233 kernel: BIOS-provided physical RAM map:
Feb  9 18:55:33.797241 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  9 18:55:33.797248 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  9 18:55:33.797255 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  9 18:55:33.797264 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb  9 18:55:33.797270 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb  9 18:55:33.797279 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  9 18:55:33.797287 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  9 18:55:33.797294 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb  9 18:55:33.797301 kernel: NX (Execute Disable) protection: active
Feb  9 18:55:33.797308 kernel: SMBIOS 2.8 present.
Feb  9 18:55:33.797316 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb  9 18:55:33.797327 kernel: Hypervisor detected: KVM
Feb  9 18:55:33.797336 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  9 18:55:33.797343 kernel: kvm-clock: cpu 0, msr 5efaa001, primary cpu clock
Feb  9 18:55:33.797351 kernel: kvm-clock: using sched offset of 2166023655 cycles
Feb  9 18:55:33.797360 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  9 18:55:33.797368 kernel: tsc: Detected 2794.750 MHz processor
Feb  9 18:55:33.797384 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb  9 18:55:33.797393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb  9 18:55:33.797402 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb  9 18:55:33.797412 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  9 18:55:33.797420 kernel: Using GB pages for direct mapping
Feb  9 18:55:33.797428 kernel: ACPI: Early table checksum verification disabled
Feb  9 18:55:33.797437 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb  9 18:55:33.797445 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797453 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797461 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797470 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb  9 18:55:33.797478 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797488 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797496 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 18:55:33.797504 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb  9 18:55:33.797513 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb  9 18:55:33.797521 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb  9 18:55:33.797531 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb  9 18:55:33.797540 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb  9 18:55:33.797550 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb  9 18:55:33.797563 kernel: No NUMA configuration found
Feb  9 18:55:33.797571 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb  9 18:55:33.797580 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb  9 18:55:33.797589 kernel: Zone ranges:
Feb  9 18:55:33.797598 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  9 18:55:33.797607 kernel:   DMA32    [mem 0x0000000001000000-0x000000009cfdcfff]
Feb  9 18:55:33.797618 kernel:   Normal   empty
Feb  9 18:55:33.797627 kernel: Movable zone start for each node
Feb  9 18:55:33.797635 kernel: Early memory node ranges
Feb  9 18:55:33.797644 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  9 18:55:33.797653 kernel:   node   0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb  9 18:55:33.797661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb  9 18:55:33.797670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  9 18:55:33.797679 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  9 18:55:33.797688 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb  9 18:55:33.797698 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  9 18:55:33.797707 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  9 18:55:33.797715 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  9 18:55:33.797724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  9 18:55:33.797733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  9 18:55:33.797742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  9 18:55:33.797750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  9 18:55:33.797759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  9 18:55:33.797768 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  9 18:55:33.797779 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb  9 18:55:33.797788 kernel: TSC deadline timer available
Feb  9 18:55:33.797816 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb  9 18:55:33.797825 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb  9 18:55:33.797834 kernel: kvm-guest: setup PV sched yield
Feb  9 18:55:33.797843 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb  9 18:55:33.797851 kernel: Booting paravirtualized kernel on KVM
Feb  9 18:55:33.797860 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  9 18:55:33.797869 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb  9 18:55:33.797880 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb  9 18:55:33.797889 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb  9 18:55:33.797897 kernel: pcpu-alloc: [0] 0 1 2 3 
Feb  9 18:55:33.797906 kernel: kvm-guest: setup async PF for cpu 0
Feb  9 18:55:33.797914 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb  9 18:55:33.797923 kernel: kvm-guest: PV spinlocks enabled
Feb  9 18:55:33.797932 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb  9 18:55:33.797941 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 632733
Feb  9 18:55:33.797949 kernel: Policy zone: DMA32
Feb  9 18:55:33.797959 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 18:55:33.797971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  9 18:55:33.797980 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb  9 18:55:33.797988 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb  9 18:55:33.797997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  9 18:55:33.798007 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb  9 18:55:33.798016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb  9 18:55:33.798024 kernel: ftrace: allocating 34475 entries in 135 pages
Feb  9 18:55:33.798033 kernel: ftrace: allocated 135 pages with 4 groups
Feb  9 18:55:33.798044 kernel: rcu: Hierarchical RCU implementation.
Feb  9 18:55:33.798053 kernel: rcu:         RCU event tracing is enabled.
Feb  9 18:55:33.798062 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb  9 18:55:33.798071 kernel:         Rude variant of Tasks RCU enabled.
Feb  9 18:55:33.798080 kernel:         Tracing variant of Tasks RCU enabled.
Feb  9 18:55:33.798089 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  9 18:55:33.798098 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb  9 18:55:33.798107 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb  9 18:55:33.798115 kernel: random: crng init done
Feb  9 18:55:33.798125 kernel: Console: colour VGA+ 80x25
Feb  9 18:55:33.798134 kernel: printk: console [ttyS0] enabled
Feb  9 18:55:33.798140 kernel: ACPI: Core revision 20210730
Feb  9 18:55:33.798147 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb  9 18:55:33.798153 kernel: APIC: Switch to symmetric I/O mode setup
Feb  9 18:55:33.798160 kernel: x2apic enabled
Feb  9 18:55:33.798166 kernel: Switched APIC routing to physical x2apic.
Feb  9 18:55:33.798173 kernel: kvm-guest: setup PV IPIs
Feb  9 18:55:33.798179 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb  9 18:55:33.798187 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb  9 18:55:33.798194 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb  9 18:55:33.798200 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb  9 18:55:33.798207 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb  9 18:55:33.798214 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb  9 18:55:33.798220 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  9 18:55:33.798227 kernel: Spectre V2 : Mitigation: Retpolines
Feb  9 18:55:33.798233 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb  9 18:55:33.798240 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb  9 18:55:33.798252 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb  9 18:55:33.798259 kernel: RETBleed: Mitigation: untrained return thunk
Feb  9 18:55:33.798266 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  9 18:55:33.798274 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb  9 18:55:33.798281 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  9 18:55:33.798287 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  9 18:55:33.798294 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  9 18:55:33.798301 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  9 18:55:33.798308 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb  9 18:55:33.798316 kernel: Freeing SMP alternatives memory: 32K
Feb  9 18:55:33.798323 kernel: pid_max: default: 32768 minimum: 301
Feb  9 18:55:33.798330 kernel: LSM: Security Framework initializing
Feb  9 18:55:33.798336 kernel: SELinux:  Initializing.
Feb  9 18:55:33.798343 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 18:55:33.798350 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb  9 18:55:33.798357 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb  9 18:55:33.798365 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb  9 18:55:33.798372 kernel: ... version:                0
Feb  9 18:55:33.798384 kernel: ... bit width:              48
Feb  9 18:55:33.798391 kernel: ... generic registers:      6
Feb  9 18:55:33.798398 kernel: ... value mask:             0000ffffffffffff
Feb  9 18:55:33.798405 kernel: ... max period:             00007fffffffffff
Feb  9 18:55:33.798411 kernel: ... fixed-purpose events:   0
Feb  9 18:55:33.798418 kernel: ... event mask:             000000000000003f
Feb  9 18:55:33.798425 kernel: signal: max sigframe size: 1776
Feb  9 18:55:33.798433 kernel: rcu: Hierarchical SRCU implementation.
Feb  9 18:55:33.798440 kernel: smp: Bringing up secondary CPUs ...
Feb  9 18:55:33.798447 kernel: x86: Booting SMP configuration:
Feb  9 18:55:33.798453 kernel: .... node  #0, CPUs:      #1
Feb  9 18:55:33.798460 kernel: kvm-clock: cpu 1, msr 5efaa041, secondary cpu clock
Feb  9 18:55:33.798467 kernel: kvm-guest: setup async PF for cpu 1
Feb  9 18:55:33.798473 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb  9 18:55:33.798480 kernel:  #2
Feb  9 18:55:33.798487 kernel: kvm-clock: cpu 2, msr 5efaa081, secondary cpu clock
Feb  9 18:55:33.798494 kernel: kvm-guest: setup async PF for cpu 2
Feb  9 18:55:33.798502 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb  9 18:55:33.798508 kernel:  #3
Feb  9 18:55:33.798515 kernel: kvm-clock: cpu 3, msr 5efaa0c1, secondary cpu clock
Feb  9 18:55:33.798522 kernel: kvm-guest: setup async PF for cpu 3
Feb  9 18:55:33.798528 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb  9 18:55:33.798535 kernel: smp: Brought up 1 node, 4 CPUs
Feb  9 18:55:33.798542 kernel: smpboot: Max logical packages: 1
Feb  9 18:55:33.798548 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb  9 18:55:33.798555 kernel: devtmpfs: initialized
Feb  9 18:55:33.798563 kernel: x86/mm: Memory block size: 128MB
Feb  9 18:55:33.798570 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  9 18:55:33.798577 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb  9 18:55:33.798584 kernel: pinctrl core: initialized pinctrl subsystem
Feb  9 18:55:33.798590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  9 18:55:33.798597 kernel: audit: initializing netlink subsys (disabled)
Feb  9 18:55:33.798604 kernel: audit: type=2000 audit(1707504933.984:1): state=initialized audit_enabled=0 res=1
Feb  9 18:55:33.798611 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  9 18:55:33.798617 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  9 18:55:33.798625 kernel: cpuidle: using governor menu
Feb  9 18:55:33.798632 kernel: ACPI: bus type PCI registered
Feb  9 18:55:33.798639 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  9 18:55:33.798645 kernel: dca service started, version 1.12.1
Feb  9 18:55:33.798652 kernel: PCI: Using configuration type 1 for base access
Feb  9 18:55:33.798659 kernel: PCI: Using configuration type 1 for extended access
Feb  9 18:55:33.798666 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  9 18:55:33.798672 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb  9 18:55:33.798679 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  9 18:55:33.798687 kernel: ACPI: Added _OSI(Module Device)
Feb  9 18:55:33.798694 kernel: ACPI: Added _OSI(Processor Device)
Feb  9 18:55:33.798701 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  9 18:55:33.798707 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  9 18:55:33.798714 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  9 18:55:33.798721 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  9 18:55:33.798728 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  9 18:55:33.798734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  9 18:55:33.798741 kernel: ACPI: Interpreter enabled
Feb  9 18:55:33.798749 kernel: ACPI: PM: (supports S0 S3 S5)
Feb  9 18:55:33.798756 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  9 18:55:33.798763 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  9 18:55:33.798769 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  9 18:55:33.798776 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  9 18:55:33.798944 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb  9 18:55:33.798956 kernel: acpiphp: Slot [3] registered
Feb  9 18:55:33.798963 kernel: acpiphp: Slot [4] registered
Feb  9 18:55:33.798972 kernel: acpiphp: Slot [5] registered
Feb  9 18:55:33.798979 kernel: acpiphp: Slot [6] registered
Feb  9 18:55:33.798985 kernel: acpiphp: Slot [7] registered
Feb  9 18:55:33.798992 kernel: acpiphp: Slot [8] registered
Feb  9 18:55:33.798999 kernel: acpiphp: Slot [9] registered
Feb  9 18:55:33.799005 kernel: acpiphp: Slot [10] registered
Feb  9 18:55:33.799012 kernel: acpiphp: Slot [11] registered
Feb  9 18:55:33.799019 kernel: acpiphp: Slot [12] registered
Feb  9 18:55:33.799028 kernel: acpiphp: Slot [13] registered
Feb  9 18:55:33.799037 kernel: acpiphp: Slot [14] registered
Feb  9 18:55:33.799048 kernel: acpiphp: Slot [15] registered
Feb  9 18:55:33.799056 kernel: acpiphp: Slot [16] registered
Feb  9 18:55:33.799063 kernel: acpiphp: Slot [17] registered
Feb  9 18:55:33.799070 kernel: acpiphp: Slot [18] registered
Feb  9 18:55:33.799078 kernel: acpiphp: Slot [19] registered
Feb  9 18:55:33.799088 kernel: acpiphp: Slot [20] registered
Feb  9 18:55:33.799096 kernel: acpiphp: Slot [21] registered
Feb  9 18:55:33.799106 kernel: acpiphp: Slot [22] registered
Feb  9 18:55:33.799114 kernel: acpiphp: Slot [23] registered
Feb  9 18:55:33.799125 kernel: acpiphp: Slot [24] registered
Feb  9 18:55:33.799135 kernel: acpiphp: Slot [25] registered
Feb  9 18:55:33.799144 kernel: acpiphp: Slot [26] registered
Feb  9 18:55:33.799152 kernel: acpiphp: Slot [27] registered
Feb  9 18:55:33.799162 kernel: acpiphp: Slot [28] registered
Feb  9 18:55:33.799171 kernel: acpiphp: Slot [29] registered
Feb  9 18:55:33.799180 kernel: acpiphp: Slot [30] registered
Feb  9 18:55:33.799189 kernel: acpiphp: Slot [31] registered
Feb  9 18:55:33.799198 kernel: PCI host bridge to bus 0000:00
Feb  9 18:55:33.799306 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  9 18:55:33.799405 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  9 18:55:33.799493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  9 18:55:33.799580 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb  9 18:55:33.799669 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb  9 18:55:33.799757 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  9 18:55:33.799917 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb  9 18:55:33.800034 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb  9 18:55:33.800145 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb  9 18:55:33.800245 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc0c0-0xc0cf]
Feb  9 18:55:33.800345 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb  9 18:55:33.800453 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb  9 18:55:33.800552 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb  9 18:55:33.800660 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb  9 18:55:33.800770 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb  9 18:55:33.800886 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  9 18:55:33.800962 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  9 18:55:33.801052 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb  9 18:55:33.801121 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb  9 18:55:33.801189 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb  9 18:55:33.801259 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb  9 18:55:33.801327 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  9 18:55:33.801411 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb  9 18:55:33.801481 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc09f]
Feb  9 18:55:33.801559 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb  9 18:55:33.801628 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb  9 18:55:33.801704 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb  9 18:55:33.801776 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Feb  9 18:55:33.801855 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb  9 18:55:33.801923 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb  9 18:55:33.802000 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb  9 18:55:33.802068 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0a0-0xc0bf]
Feb  9 18:55:33.802134 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb  9 18:55:33.802202 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb  9 18:55:33.802274 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb  9 18:55:33.802283 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  9 18:55:33.802290 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  9 18:55:33.802297 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  9 18:55:33.802304 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  9 18:55:33.802311 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  9 18:55:33.802318 kernel: iommu: Default domain type: Translated 
Feb  9 18:55:33.802325 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb  9 18:55:33.802405 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  9 18:55:33.802488 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  9 18:55:33.802588 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  9 18:55:33.802602 kernel: vgaarb: loaded
Feb  9 18:55:33.802611 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  9 18:55:33.802621 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  9 18:55:33.802630 kernel: PTP clock support registered
Feb  9 18:55:33.802639 kernel: PCI: Using ACPI for IRQ routing
Feb  9 18:55:33.802648 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb  9 18:55:33.802660 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb  9 18:55:33.802670 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb  9 18:55:33.802679 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb  9 18:55:33.802688 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb  9 18:55:33.802697 kernel: clocksource: Switched to clocksource kvm-clock
Feb  9 18:55:33.802706 kernel: VFS: Disk quotas dquot_6.6.0
Feb  9 18:55:33.802715 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  9 18:55:33.802725 kernel: pnp: PnP ACPI init
Feb  9 18:55:33.802847 kernel: pnp 00:02: [dma 2]
Feb  9 18:55:33.802866 kernel: pnp: PnP ACPI: found 6 devices
Feb  9 18:55:33.802875 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  9 18:55:33.802885 kernel: NET: Registered PF_INET protocol family
Feb  9 18:55:33.802894 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb  9 18:55:33.802904 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb  9 18:55:33.802913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  9 18:55:33.802922 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb  9 18:55:33.802932 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb  9 18:55:33.802943 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb  9 18:55:33.802953 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 18:55:33.802962 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb  9 18:55:33.802971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  9 18:55:33.802980 kernel: NET: Registered PF_XDP protocol family
Feb  9 18:55:33.803071 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  9 18:55:33.803158 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  9 18:55:33.803235 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  9 18:55:33.803322 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb  9 18:55:33.803417 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb  9 18:55:33.803490 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  9 18:55:33.803558 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  9 18:55:33.803625 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb  9 18:55:33.803634 kernel: PCI: CLS 0 bytes, default 64
Feb  9 18:55:33.803641 kernel: Initialise system trusted keyrings
Feb  9 18:55:33.803648 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb  9 18:55:33.803655 kernel: Key type asymmetric registered
Feb  9 18:55:33.803665 kernel: Asymmetric key parser 'x509' registered
Feb  9 18:55:33.803671 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  9 18:55:33.803678 kernel: io scheduler mq-deadline registered
Feb  9 18:55:33.803685 kernel: io scheduler kyber registered
Feb  9 18:55:33.803692 kernel: io scheduler bfq registered
Feb  9 18:55:33.803699 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb  9 18:55:33.803706 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  9 18:55:33.803713 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb  9 18:55:33.803720 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  9 18:55:33.803729 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  9 18:55:33.803736 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  9 18:55:33.803743 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  9 18:55:33.803750 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  9 18:55:33.803757 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  9 18:55:33.803764 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb  9 18:55:33.803848 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb  9 18:55:33.803930 kernel: rtc_cmos 00:05: registered as rtc0
Feb  9 18:55:33.804006 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T18:55:33 UTC (1707504933)
Feb  9 18:55:33.804095 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb  9 18:55:33.804109 kernel: NET: Registered PF_INET6 protocol family
Feb  9 18:55:33.804119 kernel: Segment Routing with IPv6
Feb  9 18:55:33.804128 kernel: In-situ OAM (IOAM) with IPv6
Feb  9 18:55:33.804138 kernel: NET: Registered PF_PACKET protocol family
Feb  9 18:55:33.804147 kernel: Key type dns_resolver registered
Feb  9 18:55:33.804156 kernel: IPI shorthand broadcast: enabled
Feb  9 18:55:33.804166 kernel: sched_clock: Marking stable (396099692, 70511248)->(472805059, -6194119)
Feb  9 18:55:33.804178 kernel: registered taskstats version 1
Feb  9 18:55:33.804187 kernel: Loading compiled-in X.509 certificates
Feb  9 18:55:33.804194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb  9 18:55:33.804201 kernel: Key type .fscrypt registered
Feb  9 18:55:33.804207 kernel: Key type fscrypt-provisioning registered
Feb  9 18:55:33.804215 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  9 18:55:33.804223 kernel: ima: Allocated hash algorithm: sha1
Feb  9 18:55:33.804232 kernel: ima: No architecture policies found
Feb  9 18:55:33.804243 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb  9 18:55:33.804253 kernel: Write protecting the kernel read-only data: 28672k
Feb  9 18:55:33.804263 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb  9 18:55:33.804272 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb  9 18:55:33.804282 kernel: Run /init as init process
Feb  9 18:55:33.804291 kernel:   with arguments:
Feb  9 18:55:33.804300 kernel:     /init
Feb  9 18:55:33.804310 kernel:   with environment:
Feb  9 18:55:33.804332 kernel:     HOME=/
Feb  9 18:55:33.804343 kernel:     TERM=linux
Feb  9 18:55:33.804354 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  9 18:55:33.804367 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 18:55:33.804387 systemd[1]: Detected virtualization kvm.
Feb  9 18:55:33.804412 systemd[1]: Detected architecture x86-64.
Feb  9 18:55:33.804422 systemd[1]: Running in initrd.
Feb  9 18:55:33.804432 systemd[1]: No hostname configured, using default hostname.
Feb  9 18:55:33.804442 systemd[1]: Hostname set to <localhost>.
Feb  9 18:55:33.804456 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 18:55:33.804466 systemd[1]: Queued start job for default target initrd.target.
Feb  9 18:55:33.804476 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 18:55:33.804486 systemd[1]: Reached target cryptsetup.target.
Feb  9 18:55:33.804496 systemd[1]: Reached target paths.target.
Feb  9 18:55:33.804506 systemd[1]: Reached target slices.target.
Feb  9 18:55:33.804516 systemd[1]: Reached target swap.target.
Feb  9 18:55:33.804523 systemd[1]: Reached target timers.target.
Feb  9 18:55:33.804533 systemd[1]: Listening on iscsid.socket.
Feb  9 18:55:33.804540 systemd[1]: Listening on iscsiuio.socket.
Feb  9 18:55:33.804560 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 18:55:33.804568 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 18:55:33.804575 systemd[1]: Listening on systemd-journald.socket.
Feb  9 18:55:33.804583 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 18:55:33.804590 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 18:55:33.804600 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 18:55:33.804607 systemd[1]: Reached target sockets.target.
Feb  9 18:55:33.804625 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 18:55:33.804633 systemd[1]: Finished network-cleanup.service.
Feb  9 18:55:33.804640 systemd[1]: Starting systemd-fsck-usr.service...
Feb  9 18:55:33.804648 systemd[1]: Starting systemd-journald.service...
Feb  9 18:55:33.804656 systemd[1]: Starting systemd-modules-load.service...
Feb  9 18:55:33.804665 systemd[1]: Starting systemd-resolved.service...
Feb  9 18:55:33.804673 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  9 18:55:33.804690 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 18:55:33.804698 systemd[1]: Finished systemd-fsck-usr.service.
Feb  9 18:55:33.804706 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 18:55:33.804717 systemd-journald[197]: Journal started
Feb  9 18:55:33.804768 systemd-journald[197]: Runtime Journal (/run/log/journal/28778960c23d427a8fc23795906bfe6c) is 6.0M, max 48.5M, 42.5M free.
Feb  9 18:55:33.793188 systemd-modules-load[198]: Inserted module 'overlay'
Feb  9 18:55:33.813972 systemd-resolved[199]: Positive Trust Anchors:
Feb  9 18:55:33.816193 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  9 18:55:33.813982 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 18:55:33.814008 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 18:55:33.821985 kernel: Bridge firewalling registered
Feb  9 18:55:33.816193 systemd-resolved[199]: Defaulting to hostname 'linux'.
Feb  9 18:55:33.821980 systemd-modules-load[198]: Inserted module 'br_netfilter'
Feb  9 18:55:33.824896 systemd[1]: Started systemd-journald.service.
Feb  9 18:55:33.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.825181 systemd[1]: Started systemd-resolved.service.
Feb  9 18:55:33.830967 kernel: audit: type=1130 audit(1707504933.824:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.830992 kernel: audit: type=1130 audit(1707504933.826:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.827927 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  9 18:55:33.837357 kernel: audit: type=1130 audit(1707504933.830:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.837392 kernel: audit: type=1130 audit(1707504933.830:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.831043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 18:55:33.831313 systemd[1]: Reached target nss-lookup.target.
Feb  9 18:55:33.831930 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  9 18:55:33.843829 kernel: SCSI subsystem initialized
Feb  9 18:55:33.853270 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  9 18:55:33.859775 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  9 18:55:33.859790 kernel: device-mapper: uevent: version 1.0.3
Feb  9 18:55:33.859810 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  9 18:55:33.859820 kernel: audit: type=1130 audit(1707504933.856:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.859818 systemd[1]: Starting dracut-cmdline.service...
Feb  9 18:55:33.862668 systemd-modules-load[198]: Inserted module 'dm_multipath'
Feb  9 18:55:33.863223 systemd[1]: Finished systemd-modules-load.service.
Feb  9 18:55:33.867130 kernel: audit: type=1130 audit(1707504933.862:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.864131 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:55:33.872399 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:55:33.876113 kernel: audit: type=1130 audit(1707504933.871:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.878957 dracut-cmdline[216]: dracut-dracut-053
Feb  9 18:55:33.880871 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb  9 18:55:33.934829 kernel: Loading iSCSI transport class v2.0-870.
Feb  9 18:55:33.945819 kernel: iscsi: registered transport (tcp)
Feb  9 18:55:33.964833 kernel: iscsi: registered transport (qla4xxx)
Feb  9 18:55:33.964865 kernel: QLogic iSCSI HBA Driver
Feb  9 18:55:33.985774 systemd[1]: Finished dracut-cmdline.service.
Feb  9 18:55:33.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:33.987230 systemd[1]: Starting dracut-pre-udev.service...
Feb  9 18:55:33.989838 kernel: audit: type=1130 audit(1707504933.985:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.038829 kernel: raid6: avx2x4   gen() 20061 MB/s
Feb  9 18:55:34.055822 kernel: raid6: avx2x4   xor()  5958 MB/s
Feb  9 18:55:34.072825 kernel: raid6: avx2x2   gen() 20683 MB/s
Feb  9 18:55:34.089823 kernel: raid6: avx2x2   xor() 13332 MB/s
Feb  9 18:55:34.106823 kernel: raid6: avx2x1   gen() 17261 MB/s
Feb  9 18:55:34.123829 kernel: raid6: avx2x1   xor() 11282 MB/s
Feb  9 18:55:34.140833 kernel: raid6: sse2x4   gen() 10089 MB/s
Feb  9 18:55:34.157833 kernel: raid6: sse2x4   xor()  4751 MB/s
Feb  9 18:55:34.174831 kernel: raid6: sse2x2   gen() 14556 MB/s
Feb  9 18:55:34.191826 kernel: raid6: sse2x2   xor()  9701 MB/s
Feb  9 18:55:34.208823 kernel: raid6: sse2x1   gen() 12265 MB/s
Feb  9 18:55:34.226320 kernel: raid6: sse2x1   xor()  7664 MB/s
Feb  9 18:55:34.226352 kernel: raid6: using algorithm avx2x2 gen() 20683 MB/s
Feb  9 18:55:34.226385 kernel: raid6: .... xor() 13332 MB/s, rmw enabled
Feb  9 18:55:34.226398 kernel: raid6: using avx2x2 recovery algorithm
Feb  9 18:55:34.237835 kernel: xor: automatically using best checksumming function   avx       
Feb  9 18:55:34.325835 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb  9 18:55:34.334651 systemd[1]: Finished dracut-pre-udev.service.
Feb  9 18:55:34.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.335000 audit: BPF prog-id=7 op=LOAD
Feb  9 18:55:34.337000 audit: BPF prog-id=8 op=LOAD
Feb  9 18:55:34.338448 systemd[1]: Starting systemd-udevd.service...
Feb  9 18:55:34.339446 kernel: audit: type=1130 audit(1707504934.334:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.349648 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb  9 18:55:34.353316 systemd[1]: Started systemd-udevd.service.
Feb  9 18:55:34.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.356043 systemd[1]: Starting dracut-pre-trigger.service...
Feb  9 18:55:34.364907 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Feb  9 18:55:34.389870 systemd[1]: Finished dracut-pre-trigger.service.
Feb  9 18:55:34.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.392262 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 18:55:34.426061 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 18:55:34.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:34.451814 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb  9 18:55:34.454245 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb  9 18:55:34.454266 kernel: GPT:9289727 != 19775487
Feb  9 18:55:34.454280 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb  9 18:55:34.454288 kernel: GPT:9289727 != 19775487
Feb  9 18:55:34.455167 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb  9 18:55:34.455193 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:55:34.459831 kernel: cryptd: max_cpu_qlen set to 1000
Feb  9 18:55:34.470894 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  9 18:55:34.470914 kernel: AES CTR mode by8 optimization enabled
Feb  9 18:55:34.478807 kernel: libata version 3.00 loaded.
Feb  9 18:55:34.481808 kernel: ata_piix 0000:00:01.1: version 2.13
Feb  9 18:55:34.482808 kernel: scsi host0: ata_piix
Feb  9 18:55:34.482918 kernel: scsi host1: ata_piix
Feb  9 18:55:34.483002 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb  9 18:55:34.483013 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb  9 18:55:34.491549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  9 18:55:34.510182 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459)
Feb  9 18:55:34.515713 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  9 18:55:34.524635 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  9 18:55:34.525579 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  9 18:55:34.530697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 18:55:34.532037 systemd[1]: Starting disk-uuid.service...
Feb  9 18:55:34.540904 disk-uuid[518]: Primary Header is updated.
Feb  9 18:55:34.540904 disk-uuid[518]: Secondary Entries are updated.
Feb  9 18:55:34.540904 disk-uuid[518]: Secondary Header is updated.
Feb  9 18:55:34.544814 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:55:34.546829 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:55:34.635820 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb  9 18:55:34.635888 kernel: scsi 1:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb  9 18:55:34.668950 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb  9 18:55:34.669222 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb  9 18:55:34.686841 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb  9 18:55:35.547836 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 18:55:35.548021 disk-uuid[519]: The operation has completed successfully.
Feb  9 18:55:35.567532 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  9 18:55:35.567638 systemd[1]: Finished disk-uuid.service.
Feb  9 18:55:35.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.578758 systemd[1]: Starting verity-setup.service...
Feb  9 18:55:35.589812 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb  9 18:55:35.606315 systemd[1]: Found device dev-mapper-usr.device.
Feb  9 18:55:35.608309 systemd[1]: Mounting sysusr-usr.mount...
Feb  9 18:55:35.611238 systemd[1]: Finished verity-setup.service.
Feb  9 18:55:35.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.662812 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  9 18:55:35.663166 systemd[1]: Mounted sysusr-usr.mount.
Feb  9 18:55:35.663933 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  9 18:55:35.665670 systemd[1]: Starting ignition-setup.service...
Feb  9 18:55:35.667469 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  9 18:55:35.676344 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 18:55:35.676384 kernel: BTRFS info (device vda6): using free space tree
Feb  9 18:55:35.676394 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 18:55:35.683314 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  9 18:55:35.691343 systemd[1]: Finished ignition-setup.service.
Feb  9 18:55:35.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.692424 systemd[1]: Starting ignition-fetch-offline.service...
Feb  9 18:55:35.727206 ignition[638]: Ignition 2.14.0
Feb  9 18:55:35.727819 ignition[638]: Stage: fetch-offline
Feb  9 18:55:35.727868 ignition[638]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:35.727879 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:35.728006 ignition[638]: parsed url from cmdline: ""
Feb  9 18:55:35.728010 ignition[638]: no config URL provided
Feb  9 18:55:35.728017 ignition[638]: reading system config file "/usr/lib/ignition/user.ign"
Feb  9 18:55:35.728025 ignition[638]: no config at "/usr/lib/ignition/user.ign"
Feb  9 18:55:35.728043 ignition[638]: op(1): [started]  loading QEMU firmware config module
Feb  9 18:55:35.728049 ignition[638]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb  9 18:55:35.731849 ignition[638]: op(1): [finished] loading QEMU firmware config module
Feb  9 18:55:35.735703 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  9 18:55:35.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.736000 audit: BPF prog-id=9 op=LOAD
Feb  9 18:55:35.737402 systemd[1]: Starting systemd-networkd.service...
Feb  9 18:55:35.745501 ignition[638]: parsing config with SHA512: b58c58f9df3181f0797f9ae40c719de1e5194be531c2280cb57f4d2be97d2abddf7c13704c790f74907eb87ad3f5a4f8cfc584f59f68341192b946a64eedd293
Feb  9 18:55:35.765695 systemd-networkd[713]: lo: Link UP
Feb  9 18:55:35.765708 systemd-networkd[713]: lo: Gained carrier
Feb  9 18:55:35.767037 systemd-networkd[713]: Enumeration completed
Feb  9 18:55:35.767517 systemd[1]: Started systemd-networkd.service.
Feb  9 18:55:35.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.768764 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 18:55:35.768867 systemd[1]: Reached target network.target.
Feb  9 18:55:35.769879 systemd[1]: Starting iscsiuio.service...
Feb  9 18:55:35.771679 unknown[638]: fetched base config from "system"
Feb  9 18:55:35.771686 unknown[638]: fetched user config from "qemu"
Feb  9 18:55:35.772250 ignition[638]: fetch-offline: fetch-offline passed
Feb  9 18:55:35.772303 ignition[638]: Ignition finished successfully
Feb  9 18:55:35.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.773284 systemd[1]: Finished ignition-fetch-offline.service.
Feb  9 18:55:35.773872 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb  9 18:55:35.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.774489 systemd[1]: Starting ignition-kargs.service...
Feb  9 18:55:35.775167 systemd[1]: Started iscsiuio.service.
Feb  9 18:55:35.776079 systemd[1]: Starting iscsid.service...
Feb  9 18:55:35.779388 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 18:55:35.779388 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb  9 18:55:35.779388 iscsid[719]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  9 18:55:35.779388 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  9 18:55:35.779388 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  9 18:55:35.779388 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 18:55:35.779388 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  9 18:55:35.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.787204 ignition[717]: Ignition 2.14.0
Feb  9 18:55:35.779594 systemd-networkd[713]: eth0: Link UP
Feb  9 18:55:35.787210 ignition[717]: Stage: kargs
Feb  9 18:55:35.780540 systemd-networkd[713]: eth0: Gained carrier
Feb  9 18:55:35.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.787300 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:35.780815 systemd[1]: Started iscsid.service.
Feb  9 18:55:35.787308 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:35.783397 systemd[1]: Starting dracut-initqueue.service...
Feb  9 18:55:35.788224 ignition[717]: kargs: kargs passed
Feb  9 18:55:35.789644 systemd[1]: Finished ignition-kargs.service.
Feb  9 18:55:35.788259 ignition[717]: Ignition finished successfully
Feb  9 18:55:35.790999 systemd[1]: Starting ignition-disks.service...
Feb  9 18:55:35.798759 ignition[728]: Ignition 2.14.0
Feb  9 18:55:35.798770 ignition[728]: Stage: disks
Feb  9 18:55:35.798889 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:35.799875 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 18:55:35.798902 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:35.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.801045 systemd[1]: Finished ignition-disks.service.
Feb  9 18:55:35.800042 ignition[728]: disks: disks passed
Feb  9 18:55:35.802077 systemd[1]: Reached target initrd-root-device.target.
Feb  9 18:55:35.800085 ignition[728]: Ignition finished successfully
Feb  9 18:55:35.803307 systemd[1]: Reached target local-fs-pre.target.
Feb  9 18:55:35.804495 systemd[1]: Reached target local-fs.target.
Feb  9 18:55:35.805493 systemd[1]: Reached target sysinit.target.
Feb  9 18:55:35.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.806167 systemd[1]: Reached target basic.target.
Feb  9 18:55:35.809055 systemd[1]: Finished dracut-initqueue.service.
Feb  9 18:55:35.809904 systemd[1]: Reached target remote-fs-pre.target.
Feb  9 18:55:35.810913 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 18:55:35.811521 systemd[1]: Reached target remote-fs.target.
Feb  9 18:55:35.815673 systemd[1]: Starting dracut-pre-mount.service...
Feb  9 18:55:35.823121 systemd[1]: Finished dracut-pre-mount.service.
Feb  9 18:55:35.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.825014 systemd[1]: Starting systemd-fsck-root.service...
Feb  9 18:55:35.834198 systemd-fsck[747]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb  9 18:55:35.839200 systemd[1]: Finished systemd-fsck-root.service.
Feb  9 18:55:35.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.841567 systemd[1]: Mounting sysroot.mount...
Feb  9 18:55:35.847378 systemd[1]: Mounted sysroot.mount.
Feb  9 18:55:35.848306 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  9 18:55:35.848313 systemd[1]: Reached target initrd-root-fs.target.
Feb  9 18:55:35.850064 systemd[1]: Mounting sysroot-usr.mount...
Feb  9 18:55:35.851278 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb  9 18:55:35.851312 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  9 18:55:35.851340 systemd[1]: Reached target ignition-diskful.target.
Feb  9 18:55:35.855251 systemd[1]: Mounted sysroot-usr.mount.
Feb  9 18:55:35.856955 systemd[1]: Starting initrd-setup-root.service...
Feb  9 18:55:35.861122 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory
Feb  9 18:55:35.864265 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory
Feb  9 18:55:35.867507 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory
Feb  9 18:55:35.870595 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  9 18:55:35.894435 systemd[1]: Finished initrd-setup-root.service.
Feb  9 18:55:35.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.896220 systemd[1]: Starting ignition-mount.service...
Feb  9 18:55:35.897642 systemd[1]: Starting sysroot-boot.service...
Feb  9 18:55:35.900264 bash[798]: umount: /sysroot/usr/share/oem: not mounted.
Feb  9 18:55:35.906496 ignition[799]: INFO     : Ignition 2.14.0
Feb  9 18:55:35.906496 ignition[799]: INFO     : Stage: mount
Feb  9 18:55:35.907540 ignition[799]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:35.907540 ignition[799]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:35.907540 ignition[799]: INFO     : mount: mount passed
Feb  9 18:55:35.907540 ignition[799]: INFO     : Ignition finished successfully
Feb  9 18:55:35.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:35.908005 systemd[1]: Finished ignition-mount.service.
Feb  9 18:55:35.917099 systemd[1]: Finished sysroot-boot.service.
Feb  9 18:55:35.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:36.135106 systemd-resolved[199]: Detected conflict on linux IN A 10.0.0.91
Feb  9 18:55:36.135122 systemd-resolved[199]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Feb  9 18:55:36.615149 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  9 18:55:36.620933 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Feb  9 18:55:36.620964 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 18:55:36.620978 kernel: BTRFS info (device vda6): using free space tree
Feb  9 18:55:36.622010 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 18:55:36.624633 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  9 18:55:36.625766 systemd[1]: Starting ignition-files.service...
Feb  9 18:55:36.637722 ignition[828]: INFO     : Ignition 2.14.0
Feb  9 18:55:36.637722 ignition[828]: INFO     : Stage: files
Feb  9 18:55:36.638895 ignition[828]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:36.638895 ignition[828]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:36.641054 ignition[828]: DEBUG    : files: compiled without relabeling support, skipping
Feb  9 18:55:36.642050 ignition[828]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  9 18:55:36.642050 ignition[828]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  9 18:55:36.644283 ignition[828]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  9 18:55:36.645221 ignition[828]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  9 18:55:36.646420 unknown[828]: wrote ssh authorized keys file for user: core
Feb  9 18:55:36.647136 ignition[828]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  9 18:55:36.648240 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 18:55:36.649581 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 18:55:36.650714 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  9 18:55:36.651936 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb  9 18:55:37.014603 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  9 18:55:37.135689 ignition[828]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb  9 18:55:37.137692 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  9 18:55:37.137692 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  9 18:55:37.137692 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb  9 18:55:37.419916 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  9 18:55:37.465918 systemd-networkd[713]: eth0: Gained IPv6LL
Feb  9 18:55:37.540043 ignition[828]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb  9 18:55:37.542099 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  9 18:55:37.542099 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  9 18:55:37.542099 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb  9 18:55:37.606984 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb  9 18:55:38.082928 ignition[828]: DEBUG    : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb  9 18:55:38.085116 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  9 18:55:38.085116 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  9 18:55:38.085116 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb  9 18:55:38.131140 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  9 18:55:39.075523 ignition[828]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/install.sh"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(b): [started]  processing unit "containerd.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(b): op(c): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(b): [finished] processing unit "containerd.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(d): [started]  processing unit "prepare-cni-plugins.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(d): op(e): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(d): [finished] processing unit "prepare-cni-plugins.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(f): [started]  processing unit "prepare-critools.service"
Feb  9 18:55:39.078308 ignition[828]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(f): [finished] processing unit "prepare-critools.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(11): [started]  processing unit "coreos-metadata.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(11): op(12): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(13): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(14): [started]  setting preset to enabled for "prepare-critools.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(14): [finished] setting preset to enabled for "prepare-critools.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(15): [started]  setting preset to disabled for "coreos-metadata.service"
Feb  9 18:55:39.106144 ignition[828]: INFO     : files: op(15): op(16): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 18:55:39.198420 ignition[828]: INFO     : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb  9 18:55:39.199632 ignition[828]: INFO     : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service"
Feb  9 18:55:39.200643 ignition[828]: INFO     : files: createResultFile: createFiles: op(17): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  9 18:55:39.201883 ignition[828]: INFO     : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  9 18:55:39.203031 ignition[828]: INFO     : files: files passed
Feb  9 18:55:39.203598 ignition[828]: INFO     : Ignition finished successfully
Feb  9 18:55:39.205015 systemd[1]: Finished ignition-files.service.
Feb  9 18:55:39.209676 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb  9 18:55:39.209700 kernel: audit: type=1130 audit(1707504939.204:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.206091 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  9 18:55:39.209710 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  9 18:55:39.213000 initrd-setup-root-after-ignition[851]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb  9 18:55:39.217065 kernel: audit: type=1130 audit(1707504939.212:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.210373 systemd[1]: Starting ignition-quench.service...
Feb  9 18:55:39.223145 kernel: audit: type=1130 audit(1707504939.217:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.223159 kernel: audit: type=1131 audit(1707504939.217:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.223235 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  9 18:55:39.212219 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  9 18:55:39.213143 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  9 18:55:39.213210 systemd[1]: Finished ignition-quench.service.
Feb  9 18:55:39.217158 systemd[1]: Reached target ignition-complete.target.
Feb  9 18:55:39.223663 systemd[1]: Starting initrd-parse-etc.service...
Feb  9 18:55:39.233857 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  9 18:55:39.233933 systemd[1]: Finished initrd-parse-etc.service.
Feb  9 18:55:39.240513 kernel: audit: type=1130 audit(1707504939.235:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.240531 kernel: audit: type=1131 audit(1707504939.235:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.235169 systemd[1]: Reached target initrd-fs.target.
Feb  9 18:55:39.240524 systemd[1]: Reached target initrd.target.
Feb  9 18:55:39.241093 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  9 18:55:39.241637 systemd[1]: Starting dracut-pre-pivot.service...
Feb  9 18:55:39.253812 systemd[1]: Finished dracut-pre-pivot.service.
Feb  9 18:55:39.257355 kernel: audit: type=1130 audit(1707504939.253:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.257380 systemd[1]: Starting initrd-cleanup.service...
Feb  9 18:55:39.266369 systemd[1]: Stopped target nss-lookup.target.
Feb  9 18:55:39.266622 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  9 18:55:39.267758 systemd[1]: Stopped target timers.target.
Feb  9 18:55:39.268079 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  9 18:55:39.272935 kernel: audit: type=1131 audit(1707504939.268:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.268155 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  9 18:55:39.269744 systemd[1]: Stopped target initrd.target.
Feb  9 18:55:39.273272 systemd[1]: Stopped target basic.target.
Feb  9 18:55:39.273484 systemd[1]: Stopped target ignition-complete.target.
Feb  9 18:55:39.273706 systemd[1]: Stopped target ignition-diskful.target.
Feb  9 18:55:39.274062 systemd[1]: Stopped target initrd-root-device.target.
Feb  9 18:55:39.274294 systemd[1]: Stopped target remote-fs.target.
Feb  9 18:55:39.274510 systemd[1]: Stopped target remote-fs-pre.target.
Feb  9 18:55:39.274746 systemd[1]: Stopped target sysinit.target.
Feb  9 18:55:39.275082 systemd[1]: Stopped target local-fs.target.
Feb  9 18:55:39.280808 systemd[1]: Stopped target local-fs-pre.target.
Feb  9 18:55:39.281125 systemd[1]: Stopped target swap.target.
Feb  9 18:55:39.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.281327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  9 18:55:39.287681 kernel: audit: type=1131 audit(1707504939.282:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.281403 systemd[1]: Stopped dracut-pre-mount.service.
Feb  9 18:55:39.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.283548 systemd[1]: Stopped target cryptsetup.target.
Feb  9 18:55:39.291010 kernel: audit: type=1131 audit(1707504939.287:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.286777 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  9 18:55:39.286863 systemd[1]: Stopped dracut-initqueue.service.
Feb  9 18:55:39.288051 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  9 18:55:39.288125 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  9 18:55:39.290899 systemd[1]: Stopped target paths.target.
Feb  9 18:55:39.291138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  9 18:55:39.294836 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  9 18:55:39.295086 systemd[1]: Stopped target slices.target.
Feb  9 18:55:39.296273 systemd[1]: Stopped target sockets.target.
Feb  9 18:55:39.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.297391 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  9 18:55:39.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.297448 systemd[1]: Closed iscsid.socket.
Feb  9 18:55:39.298421 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  9 18:55:39.298498 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  9 18:55:39.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.299301 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  9 18:55:39.299374 systemd[1]: Stopped ignition-files.service.
Feb  9 18:55:39.300941 systemd[1]: Stopping ignition-mount.service...
Feb  9 18:55:39.301321 systemd[1]: Stopping iscsiuio.service...
Feb  9 18:55:39.302105 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  9 18:55:39.302237 systemd[1]: Stopped kmod-static-nodes.service.
Feb  9 18:55:39.304180 systemd[1]: Stopping sysroot-boot.service...
Feb  9 18:55:39.308333 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  9 18:55:39.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.308501 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  9 18:55:39.309314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  9 18:55:39.309432 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  9 18:55:39.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.312596 ignition[868]: INFO     : Ignition 2.14.0
Feb  9 18:55:39.312596 ignition[868]: INFO     : Stage: umount
Feb  9 18:55:39.312596 ignition[868]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb  9 18:55:39.312596 ignition[868]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb  9 18:55:39.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.313205 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  9 18:55:39.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.318862 ignition[868]: INFO     : umount: umount passed
Feb  9 18:55:39.318862 ignition[868]: INFO     : Ignition finished successfully
Feb  9 18:55:39.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.313291 systemd[1]: Finished initrd-cleanup.service.
Feb  9 18:55:39.314394 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  9 18:55:39.314459 systemd[1]: Stopped ignition-mount.service.
Feb  9 18:55:39.315731 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  9 18:55:39.315767 systemd[1]: Stopped ignition-disks.service.
Feb  9 18:55:39.317001 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  9 18:55:39.317041 systemd[1]: Stopped ignition-kargs.service.
Feb  9 18:55:39.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.317668 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  9 18:55:39.317730 systemd[1]: Stopped ignition-setup.service.
Feb  9 18:55:39.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.318939 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb  9 18:55:39.319023 systemd[1]: Stopped iscsiuio.service.
Feb  9 18:55:39.320006 systemd[1]: Stopped target network.target.
Feb  9 18:55:39.321074 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  9 18:55:39.321100 systemd[1]: Closed iscsiuio.socket.
Feb  9 18:55:39.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.322096 systemd[1]: Stopping systemd-networkd.service...
Feb  9 18:55:39.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.323458 systemd[1]: Stopping systemd-resolved.service...
Feb  9 18:55:39.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.325295 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  9 18:55:39.325827 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  9 18:55:39.325896 systemd[1]: Stopped sysroot-boot.service.
Feb  9 18:55:39.326680 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  9 18:55:39.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.326709 systemd[1]: Stopped initrd-setup-root.service.
Feb  9 18:55:39.326838 systemd-networkd[713]: eth0: DHCPv6 lease lost
Feb  9 18:55:39.341000 audit: BPF prog-id=9 op=UNLOAD
Feb  9 18:55:39.327999 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  9 18:55:39.328068 systemd[1]: Stopped systemd-networkd.service.
Feb  9 18:55:39.330426 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  9 18:55:39.330466 systemd[1]: Closed systemd-networkd.socket.
Feb  9 18:55:39.332211 systemd[1]: Stopping network-cleanup.service...
Feb  9 18:55:39.345000 audit: BPF prog-id=6 op=UNLOAD
Feb  9 18:55:39.333282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  9 18:55:39.333321 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  9 18:55:39.334477 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 18:55:39.334512 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 18:55:39.335683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  9 18:55:39.335712 systemd[1]: Stopped systemd-modules-load.service.
Feb  9 18:55:39.336999 systemd[1]: Stopping systemd-udevd.service...
Feb  9 18:55:39.338769 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  9 18:55:39.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.339153 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  9 18:55:39.339228 systemd[1]: Stopped systemd-resolved.service.
Feb  9 18:55:39.348150 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  9 18:55:39.349464 systemd[1]: Stopped network-cleanup.service.
Feb  9 18:55:39.354844 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  9 18:55:39.355769 systemd[1]: Stopped systemd-udevd.service.
Feb  9 18:55:39.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.357586 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  9 18:55:39.357621 systemd[1]: Closed systemd-udevd-control.socket.
Feb  9 18:55:39.359482 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  9 18:55:39.359514 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  9 18:55:39.361434 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  9 18:55:39.361476 systemd[1]: Stopped dracut-pre-udev.service.
Feb  9 18:55:39.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.363228 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  9 18:55:39.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.363268 systemd[1]: Stopped dracut-cmdline.service.
Feb  9 18:55:39.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.364583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  9 18:55:39.364614 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  9 18:55:39.367759 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  9 18:55:39.369034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  9 18:55:39.369081 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  9 18:55:39.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.373016 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  9 18:55:39.373880 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  9 18:55:39.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:39.375206 systemd[1]: Reached target initrd-switch-root.target.
Feb  9 18:55:39.376918 systemd[1]: Starting initrd-switch-root.service...
Feb  9 18:55:39.381580 systemd[1]: Switching root.
Feb  9 18:55:39.384000 audit: BPF prog-id=5 op=UNLOAD
Feb  9 18:55:39.384000 audit: BPF prog-id=4 op=UNLOAD
Feb  9 18:55:39.384000 audit: BPF prog-id=3 op=UNLOAD
Feb  9 18:55:39.384000 audit: BPF prog-id=8 op=UNLOAD
Feb  9 18:55:39.384000 audit: BPF prog-id=7 op=UNLOAD
Feb  9 18:55:39.399409 iscsid[719]: iscsid shutting down.
Feb  9 18:55:39.399923 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Feb  9 18:55:39.399967 systemd-journald[197]: Journal stopped
Feb  9 18:55:42.854033 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  9 18:55:42.854092 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  9 18:55:42.854104 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  9 18:55:42.854114 kernel: SELinux:  policy capability network_peer_controls=1
Feb  9 18:55:42.854123 kernel: SELinux:  policy capability open_perms=1
Feb  9 18:55:42.854137 kernel: SELinux:  policy capability extended_socket_class=1
Feb  9 18:55:42.854146 kernel: SELinux:  policy capability always_check_network=0
Feb  9 18:55:42.854168 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  9 18:55:42.854182 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  9 18:55:42.854192 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  9 18:55:42.854208 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  9 18:55:42.854219 systemd[1]: Successfully loaded SELinux policy in 39.015ms.
Feb  9 18:55:42.854240 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.749ms.
Feb  9 18:55:42.854251 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 18:55:42.854262 systemd[1]: Detected virtualization kvm.
Feb  9 18:55:42.854280 systemd[1]: Detected architecture x86-64.
Feb  9 18:55:42.854297 systemd[1]: Detected first boot.
Feb  9 18:55:42.854308 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 18:55:42.854321 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  9 18:55:42.854331 systemd[1]: Populated /etc with preset unit settings.
Feb  9 18:55:42.854341 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:55:42.854357 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:55:42.854368 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:55:42.854380 systemd[1]: Queued start job for default target multi-user.target.
Feb  9 18:55:42.854390 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb  9 18:55:42.854400 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  9 18:55:42.854411 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  9 18:55:42.854421 systemd[1]: Created slice system-getty.slice.
Feb  9 18:55:42.854431 systemd[1]: Created slice system-modprobe.slice.
Feb  9 18:55:42.854441 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  9 18:55:42.854452 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  9 18:55:42.854462 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  9 18:55:42.854472 systemd[1]: Created slice user.slice.
Feb  9 18:55:42.854483 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 18:55:42.854493 systemd[1]: Started systemd-ask-password-wall.path.
Feb  9 18:55:42.854504 systemd[1]: Set up automount boot.automount.
Feb  9 18:55:42.854514 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  9 18:55:42.854524 systemd[1]: Reached target integritysetup.target.
Feb  9 18:55:42.854534 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 18:55:42.854544 systemd[1]: Reached target remote-fs.target.
Feb  9 18:55:42.854554 systemd[1]: Reached target slices.target.
Feb  9 18:55:42.854564 systemd[1]: Reached target swap.target.
Feb  9 18:55:42.854576 systemd[1]: Reached target torcx.target.
Feb  9 18:55:42.854589 systemd[1]: Reached target veritysetup.target.
Feb  9 18:55:42.854601 systemd[1]: Listening on systemd-coredump.socket.
Feb  9 18:55:42.854611 systemd[1]: Listening on systemd-initctl.socket.
Feb  9 18:55:42.854620 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 18:55:42.854630 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 18:55:42.854640 systemd[1]: Listening on systemd-journald.socket.
Feb  9 18:55:42.854650 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 18:55:42.854660 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 18:55:42.854670 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 18:55:42.854680 systemd[1]: Listening on systemd-userdbd.socket.
Feb  9 18:55:42.854691 systemd[1]: Mounting dev-hugepages.mount...
Feb  9 18:55:42.854705 systemd[1]: Mounting dev-mqueue.mount...
Feb  9 18:55:42.854715 systemd[1]: Mounting media.mount...
Feb  9 18:55:42.854725 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 18:55:42.854735 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  9 18:55:42.854745 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  9 18:55:42.854755 systemd[1]: Mounting tmp.mount...
Feb  9 18:55:42.854764 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  9 18:55:42.854776 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  9 18:55:42.854786 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 18:55:42.854836 systemd[1]: Starting modprobe@configfs.service...
Feb  9 18:55:42.854846 systemd[1]: Starting modprobe@dm_mod.service...
Feb  9 18:55:42.854865 systemd[1]: Starting modprobe@drm.service...
Feb  9 18:55:42.854877 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  9 18:55:42.854886 systemd[1]: Starting modprobe@fuse.service...
Feb  9 18:55:42.854897 systemd[1]: Starting modprobe@loop.service...
Feb  9 18:55:42.854908 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  9 18:55:42.854920 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb  9 18:55:42.854931 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb  9 18:55:42.854945 systemd[1]: Starting systemd-journald.service...
Feb  9 18:55:42.854955 kernel: fuse: init (API version 7.34)
Feb  9 18:55:42.854965 systemd[1]: Starting systemd-modules-load.service...
Feb  9 18:55:42.854975 systemd[1]: Starting systemd-network-generator.service...
Feb  9 18:55:42.854985 systemd[1]: Starting systemd-remount-fs.service...
Feb  9 18:55:42.854996 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 18:55:42.855007 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 18:55:42.855018 systemd[1]: Mounted dev-hugepages.mount.
Feb  9 18:55:42.855028 systemd[1]: Mounted dev-mqueue.mount.
Feb  9 18:55:42.855038 systemd[1]: Mounted media.mount.
Feb  9 18:55:42.855049 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  9 18:55:42.855058 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  9 18:55:42.855069 systemd[1]: Mounted tmp.mount.
Feb  9 18:55:42.855083 systemd-journald[1007]: Journal started
Feb  9 18:55:42.855125 systemd-journald[1007]: Runtime Journal (/run/log/journal/28778960c23d427a8fc23795906bfe6c) is 6.0M, max 48.5M, 42.5M free.
Feb  9 18:55:42.783000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  9 18:55:42.783000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb  9 18:55:42.847000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  9 18:55:42.847000 audit[1007]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff36ebd4d0 a2=4000 a3=7fff36ebd56c items=0 ppid=1 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:55:42.847000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  9 18:55:42.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.860091 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 18:55:42.861872 systemd[1]: Started systemd-journald.service.
Feb  9 18:55:42.861893 kernel: loop: module loaded
Feb  9 18:55:42.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.862498 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  9 18:55:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.862653 systemd[1]: Finished modprobe@configfs.service.
Feb  9 18:55:42.863468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  9 18:55:42.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.863605 systemd[1]: Finished modprobe@dm_mod.service.
Feb  9 18:55:42.864388 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  9 18:55:42.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.864514 systemd[1]: Finished modprobe@drm.service.
Feb  9 18:55:42.865290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  9 18:55:42.865408 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  9 18:55:42.866427 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  9 18:55:42.866716 systemd[1]: Finished modprobe@fuse.service.
Feb  9 18:55:42.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.867535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  9 18:55:42.867786 systemd[1]: Finished modprobe@loop.service.
Feb  9 18:55:42.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.868881 systemd[1]: Finished systemd-modules-load.service.
Feb  9 18:55:42.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.869940 systemd[1]: Finished systemd-network-generator.service.
Feb  9 18:55:42.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.871105 systemd[1]: Finished systemd-remount-fs.service.
Feb  9 18:55:42.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.872375 systemd[1]: Reached target network-pre.target.
Feb  9 18:55:42.874166 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  9 18:55:42.876105 systemd[1]: Mounting sys-kernel-config.mount...
Feb  9 18:55:42.879160 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  9 18:55:42.880831 systemd[1]: Starting systemd-hwdb-update.service...
Feb  9 18:55:42.884570 systemd[1]: Starting systemd-journal-flush.service...
Feb  9 18:55:42.885245 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  9 18:55:42.886336 systemd[1]: Starting systemd-random-seed.service...
Feb  9 18:55:42.886917 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  9 18:55:42.888284 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:55:42.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.890681 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  9 18:55:42.891486 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  9 18:55:42.893080 systemd[1]: Mounted sys-kernel-config.mount.
Feb  9 18:55:42.893943 systemd-journald[1007]: Time spent on flushing to /var/log/journal/28778960c23d427a8fc23795906bfe6c is 23.094ms for 1044 entries.
Feb  9 18:55:42.893943 systemd-journald[1007]: System Journal (/var/log/journal/28778960c23d427a8fc23795906bfe6c) is 8.0M, max 195.6M, 187.6M free.
Feb  9 18:55:42.930189 systemd-journald[1007]: Received client request to flush runtime journal.
Feb  9 18:55:42.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.894927 systemd[1]: Starting systemd-sysusers.service...
Feb  9 18:55:42.902138 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:55:42.904720 systemd[1]: Finished systemd-random-seed.service.
Feb  9 18:55:42.931493 udevadm[1056]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb  9 18:55:42.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:42.905556 systemd[1]: Reached target first-boot-complete.target.
Feb  9 18:55:42.906680 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 18:55:42.908869 systemd[1]: Starting systemd-udev-settle.service...
Feb  9 18:55:42.916713 systemd[1]: Finished systemd-sysusers.service.
Feb  9 18:55:42.918568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 18:55:42.931145 systemd[1]: Finished systemd-journal-flush.service.
Feb  9 18:55:42.937988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 18:55:42.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.431258 systemd[1]: Finished systemd-hwdb-update.service.
Feb  9 18:55:43.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.433294 systemd[1]: Starting systemd-udevd.service...
Feb  9 18:55:43.448564 systemd-udevd[1064]: Using default interface naming scheme 'v252'.
Feb  9 18:55:43.461199 systemd[1]: Started systemd-udevd.service.
Feb  9 18:55:43.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.463721 systemd[1]: Starting systemd-networkd.service...
Feb  9 18:55:43.469255 systemd[1]: Starting systemd-userdbd.service...
Feb  9 18:55:43.487542 systemd[1]: Found device dev-ttyS0.device.
Feb  9 18:55:43.504693 systemd[1]: Started systemd-userdbd.service.
Feb  9 18:55:43.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.518175 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 18:55:43.534842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb  9 18:55:43.543833 kernel: ACPI: button: Power Button [PWRF]
Feb  9 18:55:43.557153 systemd-networkd[1075]: lo: Link UP
Feb  9 18:55:43.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.557162 systemd-networkd[1075]: lo: Gained carrier
Feb  9 18:55:43.557536 systemd-networkd[1075]: Enumeration completed
Feb  9 18:55:43.557642 systemd-networkd[1075]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 18:55:43.557656 systemd[1]: Started systemd-networkd.service.
Feb  9 18:55:43.559702 systemd-networkd[1075]: eth0: Link UP
Feb  9 18:55:43.559708 systemd-networkd[1075]: eth0: Gained carrier
Feb  9 18:55:43.555000 audit[1084]: AVC avc:  denied  { confidentiality } for  pid=1084 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  9 18:55:43.555000 audit[1084]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d41d25f500 a1=32194 a2=7ff1f66f0bc5 a3=5 items=108 ppid=1064 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:55:43.555000 audit: CWD cwd="/"
Feb  9 18:55:43.555000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=1 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=2 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=3 name=(null) inode=14996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=4 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=5 name=(null) inode=14997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=6 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=7 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=8 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=9 name=(null) inode=14999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=10 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=11 name=(null) inode=15000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=12 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=13 name=(null) inode=15001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=14 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=15 name=(null) inode=15002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=16 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=17 name=(null) inode=15003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=18 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=19 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=20 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=21 name=(null) inode=15005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=22 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=23 name=(null) inode=15006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=24 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=25 name=(null) inode=15007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=26 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=27 name=(null) inode=15008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=28 name=(null) inode=15004 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=29 name=(null) inode=15009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=30 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=31 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=32 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=33 name=(null) inode=15011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=34 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=35 name=(null) inode=15012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=36 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=37 name=(null) inode=15013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=38 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=39 name=(null) inode=15014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=40 name=(null) inode=15010 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=41 name=(null) inode=15015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=42 name=(null) inode=14995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=43 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=44 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=45 name=(null) inode=15017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=46 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=47 name=(null) inode=15018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=48 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=49 name=(null) inode=15019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=50 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=51 name=(null) inode=15020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=52 name=(null) inode=15016 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=53 name=(null) inode=15021 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=55 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=56 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=57 name=(null) inode=15023 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=58 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=59 name=(null) inode=15024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=60 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=61 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=62 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=63 name=(null) inode=15026 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=64 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=65 name=(null) inode=15027 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=66 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=67 name=(null) inode=15028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=68 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=69 name=(null) inode=15029 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=70 name=(null) inode=15025 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=71 name=(null) inode=15030 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=72 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=73 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=74 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=75 name=(null) inode=15032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=76 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=77 name=(null) inode=15033 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=78 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=79 name=(null) inode=15034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=80 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=81 name=(null) inode=15035 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=82 name=(null) inode=15031 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=83 name=(null) inode=15036 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=84 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=85 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=86 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=87 name=(null) inode=15038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=88 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=89 name=(null) inode=15039 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=90 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=91 name=(null) inode=15040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=92 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=93 name=(null) inode=15041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=94 name=(null) inode=15037 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=95 name=(null) inode=15042 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=96 name=(null) inode=15022 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=97 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=98 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=99 name=(null) inode=15044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=100 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=101 name=(null) inode=15045 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=102 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=103 name=(null) inode=15046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=104 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=105 name=(null) inode=15047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=106 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PATH item=107 name=(null) inode=15048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 18:55:43.555000 audit: PROCTITLE proctitle="(udev-worker)"
Feb  9 18:55:43.573354 systemd-networkd[1075]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb  9 18:55:43.580833 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  9 18:55:43.581824 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb  9 18:55:43.587854 kernel: mousedev: PS/2 mouse device common for all mice
Feb  9 18:55:43.686103 kernel: kvm: Nested Virtualization enabled
Feb  9 18:55:43.686198 kernel: SVM: kvm: Nested Paging enabled
Feb  9 18:55:43.686213 kernel: SVM: Virtual VMLOAD VMSAVE supported
Feb  9 18:55:43.686924 kernel: SVM: Virtual GIF supported
Feb  9 18:55:43.703818 kernel: EDAC MC: Ver: 3.0.0
Feb  9 18:55:43.722262 systemd[1]: Finished systemd-udev-settle.service.
Feb  9 18:55:43.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.724449 systemd[1]: Starting lvm2-activation-early.service...
Feb  9 18:55:43.733404 lvm[1101]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 18:55:43.754528 systemd[1]: Finished lvm2-activation-early.service.
Feb  9 18:55:43.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.755381 systemd[1]: Reached target cryptsetup.target.
Feb  9 18:55:43.757247 systemd[1]: Starting lvm2-activation.service...
Feb  9 18:55:43.761670 lvm[1103]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 18:55:43.783709 systemd[1]: Finished lvm2-activation.service.
Feb  9 18:55:43.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.784425 systemd[1]: Reached target local-fs-pre.target.
Feb  9 18:55:43.785044 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  9 18:55:43.785072 systemd[1]: Reached target local-fs.target.
Feb  9 18:55:43.785660 systemd[1]: Reached target machines.target.
Feb  9 18:55:43.787200 systemd[1]: Starting ldconfig.service...
Feb  9 18:55:43.787998 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  9 18:55:43.788057 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:55:43.789159 systemd[1]: Starting systemd-boot-update.service...
Feb  9 18:55:43.790873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  9 18:55:43.792715 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  9 18:55:43.793662 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  9 18:55:43.793707 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  9 18:55:43.794972 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  9 18:55:43.795907 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1106 (bootctl)
Feb  9 18:55:43.796845 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  9 18:55:43.801656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  9 18:55:43.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.808747 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  9 18:55:43.809404 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  9 18:55:43.810501 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb  9 18:55:43.831574 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31)
Feb  9 18:55:43.831574 systemd-fsck[1115]: /dev/vda1: 789 files, 115339/258078 clusters
Feb  9 18:55:43.833375 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  9 18:55:43.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:43.835395 systemd[1]: Mounting boot.mount...
Feb  9 18:55:43.842913 systemd[1]: Mounted boot.mount.
Feb  9 18:55:43.854873 systemd[1]: Finished systemd-boot-update.service.
Feb  9 18:55:43.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.274819 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  9 18:55:44.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.278827 kernel: kauditd_printk_skb: 197 callbacks suppressed
Feb  9 18:55:44.278963 kernel: audit: type=1130 audit(1707504944.275:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.277533 systemd[1]: Starting audit-rules.service...
Feb  9 18:55:44.281056 systemd[1]: Starting clean-ca-certificates.service...
Feb  9 18:55:44.282831 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  9 18:55:44.284771 systemd[1]: Starting systemd-resolved.service...
Feb  9 18:55:44.286521 systemd[1]: Starting systemd-timesyncd.service...
Feb  9 18:55:44.288009 systemd[1]: Starting systemd-update-utmp.service...
Feb  9 18:55:44.289101 systemd[1]: Finished clean-ca-certificates.service.
Feb  9 18:55:44.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.292806 kernel: audit: type=1130 audit(1707504944.289:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.293051 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  9 18:55:44.303653 kernel: audit: type=1127 audit(1707504944.293:123): pid=1128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.303785 kernel: audit: type=1130 audit(1707504944.299:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.293000 audit[1128]: SYSTEM_BOOT pid=1128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.299826 systemd[1]: Finished systemd-update-utmp.service.
Feb  9 18:55:44.314030 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  9 18:55:44.319823 kernel: audit: type=1130 audit(1707504944.314:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 18:55:44.320000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  9 18:55:44.320000 audit[1145]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe704d83b0 a2=420 a3=0 items=0 ppid=1122 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:55:44.325319 ldconfig[1105]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  9 18:55:44.325682 augenrules[1145]: No rules
Feb  9 18:55:44.326802 kernel: audit: type=1305 audit(1707504944.320:126): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  9 18:55:44.326843 kernel: audit: type=1300 audit(1707504944.320:126): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe704d83b0 a2=420 a3=0 items=0 ppid=1122 pid=1145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 18:55:44.326859 kernel: audit: type=1327 audit(1707504944.320:126): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  9 18:55:44.320000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  9 18:55:44.328821 systemd[1]: Finished audit-rules.service.
Feb  9 18:55:44.338733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  9 18:55:44.339462 systemd[1]: Finished ldconfig.service.
Feb  9 18:55:44.340667 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  9 18:55:44.342938 systemd[1]: Starting systemd-update-done.service...
Feb  9 18:55:44.351033 systemd[1]: Finished systemd-update-done.service.
Feb  9 18:55:44.358661 systemd-resolved[1125]: Positive Trust Anchors:
Feb  9 18:55:44.358673 systemd-resolved[1125]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 18:55:44.358698 systemd-resolved[1125]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 18:55:44.361956 systemd[1]: Started systemd-timesyncd.service.
Feb  9 18:55:44.363207 systemd[1]: Reached target time-set.target.
Feb  9 18:55:45.237206 systemd-timesyncd[1127]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb  9 18:55:45.237660 systemd-timesyncd[1127]: Initial clock synchronization to Fri 2024-02-09 18:55:45.237114 UTC.
Feb  9 18:55:45.241641 systemd-resolved[1125]: Defaulting to hostname 'linux'.
Feb  9 18:55:45.243268 systemd[1]: Started systemd-resolved.service.
Feb  9 18:55:45.244015 systemd[1]: Reached target network.target.
Feb  9 18:55:45.244677 systemd[1]: Reached target nss-lookup.target.
Feb  9 18:55:45.245345 systemd[1]: Reached target sysinit.target.
Feb  9 18:55:45.246071 systemd[1]: Started motdgen.path.
Feb  9 18:55:45.246678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  9 18:55:45.247686 systemd[1]: Started logrotate.timer.
Feb  9 18:55:45.248372 systemd[1]: Started mdadm.timer.
Feb  9 18:55:45.248955 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  9 18:55:45.249631 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  9 18:55:45.249673 systemd[1]: Reached target paths.target.
Feb  9 18:55:45.250283 systemd[1]: Reached target timers.target.
Feb  9 18:55:45.251266 systemd[1]: Listening on dbus.socket.
Feb  9 18:55:45.253364 systemd[1]: Starting docker.socket...
Feb  9 18:55:45.255034 systemd[1]: Listening on sshd.socket.
Feb  9 18:55:45.255737 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:55:45.256128 systemd[1]: Listening on docker.socket.
Feb  9 18:55:45.256763 systemd[1]: Reached target sockets.target.
Feb  9 18:55:45.257455 systemd[1]: Reached target basic.target.
Feb  9 18:55:45.258238 systemd[1]: System is tainted: cgroupsv1
Feb  9 18:55:45.258293 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 18:55:45.258321 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 18:55:45.259570 systemd[1]: Starting containerd.service...
Feb  9 18:55:45.261444 systemd[1]: Starting dbus.service...
Feb  9 18:55:45.263435 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  9 18:55:45.265416 systemd[1]: Starting extend-filesystems.service...
Feb  9 18:55:45.266234 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  9 18:55:45.267641 systemd[1]: Starting motdgen.service...
Feb  9 18:55:45.269214 systemd[1]: Starting prepare-cni-plugins.service...
Feb  9 18:55:45.271933 jq[1161]: false
Feb  9 18:55:45.271981 systemd[1]: Starting prepare-critools.service...
Feb  9 18:55:45.274213 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  9 18:55:45.276120 systemd[1]: Starting sshd-keygen.service...
Feb  9 18:55:45.281910 systemd[1]: Starting systemd-logind.service...
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found sr0
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda1
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda2
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda3
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found usr
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda4
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda6
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda7
Feb  9 18:55:45.284462 extend-filesystems[1162]: Found vda9
Feb  9 18:55:45.284462 extend-filesystems[1162]: Checking size of /dev/vda9
Feb  9 18:55:45.282634 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 18:55:45.285561 dbus-daemon[1160]: [system] SELinux support is enabled
Feb  9 18:55:45.282696 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  9 18:55:45.284017 systemd[1]: Starting update-engine.service...
Feb  9 18:55:45.285837 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  9 18:55:45.308565 jq[1182]: true
Feb  9 18:55:45.287351 systemd[1]: Started dbus.service.
Feb  9 18:55:45.291932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  9 18:55:45.292189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  9 18:55:45.293626 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  9 18:55:45.293906 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  9 18:55:45.309286 tar[1189]: ./
Feb  9 18:55:45.309286 tar[1189]: ./macvlan
Feb  9 18:55:45.296515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  9 18:55:45.296552 systemd[1]: Reached target system-config.target.
Feb  9 18:55:45.298545 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  9 18:55:45.298560 systemd[1]: Reached target user-config.target.
Feb  9 18:55:45.310487 tar[1190]: crictl
Feb  9 18:55:45.310630 systemd[1]: motdgen.service: Deactivated successfully.
Feb  9 18:55:45.310875 systemd[1]: Finished motdgen.service.
Feb  9 18:55:45.328927 extend-filesystems[1162]: Resized partition /dev/vda9
Feb  9 18:55:45.402082 extend-filesystems[1201]: resize2fs 1.46.5 (30-Dec-2021)
Feb  9 18:55:45.404310 jq[1191]: true
Feb  9 18:55:45.431887 env[1192]: time="2024-02-09T18:55:45.431835932Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  9 18:55:45.441294 systemd-logind[1178]: Watching system buttons on /dev/input/event1 (Power Button)
Feb  9 18:55:45.441313 systemd-logind[1178]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb  9 18:55:45.442518 systemd-logind[1178]: New seat seat0.
Feb  9 18:55:45.445558 systemd[1]: Started systemd-logind.service.
Feb  9 18:55:45.455762 tar[1189]: ./static
Feb  9 18:55:45.458858 update_engine[1180]: I0209 18:55:45.458495  1180 main.cc:92] Flatcar Update Engine starting
Feb  9 18:55:45.460671 systemd[1]: Started update-engine.service.
Feb  9 18:55:45.461484 update_engine[1180]: I0209 18:55:45.461315  1180 update_check_scheduler.cc:74] Next update check in 4m39s
Feb  9 18:55:45.463051 systemd[1]: Started locksmithd.service.
Feb  9 18:55:45.484894 env[1192]: time="2024-02-09T18:55:45.484299361Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  9 18:55:45.484894 env[1192]: time="2024-02-09T18:55:45.484486172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488255 env[1192]: time="2024-02-09T18:55:45.488046147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488255 env[1192]: time="2024-02-09T18:55:45.488084539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488559 env[1192]: time="2024-02-09T18:55:45.488530655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488638 env[1192]: time="2024-02-09T18:55:45.488618349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488718 env[1192]: time="2024-02-09T18:55:45.488698510Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  9 18:55:45.488790 env[1192]: time="2024-02-09T18:55:45.488771376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.488975 env[1192]: time="2024-02-09T18:55:45.488958627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.489290 env[1192]: time="2024-02-09T18:55:45.489273678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  9 18:55:45.489507 env[1192]: time="2024-02-09T18:55:45.489487319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 18:55:45.489581 env[1192]: time="2024-02-09T18:55:45.489562109Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  9 18:55:45.489705 env[1192]: time="2024-02-09T18:55:45.489687073Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  9 18:55:45.489788 env[1192]: time="2024-02-09T18:55:45.489769468Z" level=info msg="metadata content store policy set" policy=shared
Feb  9 18:55:45.491866 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb  9 18:55:45.507258 tar[1189]: ./vlan
Feb  9 18:55:45.541122 tar[1189]: ./portmap
Feb  9 18:55:45.581878 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb  9 18:55:45.613963 extend-filesystems[1201]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb  9 18:55:45.613963 extend-filesystems[1201]: old_desc_blocks = 1, new_desc_blocks = 1
Feb  9 18:55:45.613963 extend-filesystems[1201]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb  9 18:55:45.629490 extend-filesystems[1162]: Resized filesystem in /dev/vda9
Feb  9 18:55:45.630163 bash[1224]: Updated "/home/core/.ssh/authorized_keys"
Feb  9 18:55:45.624898 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  9 18:55:45.625144 systemd[1]: Finished extend-filesystems.service.
Feb  9 18:55:45.627438 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  9 18:55:45.631074 env[1192]: time="2024-02-09T18:55:45.631022918Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  9 18:55:45.631138 env[1192]: time="2024-02-09T18:55:45.631086657Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  9 18:55:45.631138 env[1192]: time="2024-02-09T18:55:45.631100213Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  9 18:55:45.631183 env[1192]: time="2024-02-09T18:55:45.631150627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631183 env[1192]: time="2024-02-09T18:55:45.631165636Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631183 env[1192]: time="2024-02-09T18:55:45.631179311Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631239 env[1192]: time="2024-02-09T18:55:45.631191835Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631239 env[1192]: time="2024-02-09T18:55:45.631207785Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631239 env[1192]: time="2024-02-09T18:55:45.631221841Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631239 env[1192]: time="2024-02-09T18:55:45.631234735Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631322 env[1192]: time="2024-02-09T18:55:45.631249703Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631322 env[1192]: time="2024-02-09T18:55:45.631263599Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  9 18:55:45.631432 env[1192]: time="2024-02-09T18:55:45.631392801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  9 18:55:45.631563 env[1192]: time="2024-02-09T18:55:45.631487429Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  9 18:55:45.631896 env[1192]: time="2024-02-09T18:55:45.631870557Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  9 18:55:45.631946 env[1192]: time="2024-02-09T18:55:45.631907637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.631946 env[1192]: time="2024-02-09T18:55:45.631920771Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  9 18:55:45.631987 env[1192]: time="2024-02-09T18:55:45.631973911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632008 env[1192]: time="2024-02-09T18:55:45.631987306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632008 env[1192]: time="2024-02-09T18:55:45.631998738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632052 env[1192]: time="2024-02-09T18:55:45.632008887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632052 env[1192]: time="2024-02-09T18:55:45.632020949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632052 env[1192]: time="2024-02-09T18:55:45.632031739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632052 env[1192]: time="2024-02-09T18:55:45.632041648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632052 env[1192]: time="2024-02-09T18:55:45.632051066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632157 env[1192]: time="2024-02-09T18:55:45.632062928Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632176601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632194224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632205365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632215534Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632231324Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632242895Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  9 18:55:45.632275 env[1192]: time="2024-02-09T18:55:45.632261490Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  9 18:55:45.632432 env[1192]: time="2024-02-09T18:55:45.632295294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  9 18:55:45.632533 env[1192]: time="2024-02-09T18:55:45.632483787Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  9 18:55:45.632533 env[1192]: time="2024-02-09T18:55:45.632534983Z" level=info msg="Connect containerd service"
Feb  9 18:55:45.633531 env[1192]: time="2024-02-09T18:55:45.632574257Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  9 18:55:45.634006 tar[1189]: ./host-local
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634223830Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634514575Z" level=info msg="Start subscribing containerd event"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634593543Z" level=info msg="Start recovering state"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634639389Z" level=info msg="Start event monitor"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634672220Z" level=info msg="Start snapshots syncer"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634684043Z" level=info msg="Start cni network conf syncer for default"
Feb  9 18:55:45.634739 env[1192]: time="2024-02-09T18:55:45.634690374Z" level=info msg="Start streaming server"
Feb  9 18:55:45.635177 env[1192]: time="2024-02-09T18:55:45.635148103Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  9 18:55:45.635296 env[1192]: time="2024-02-09T18:55:45.635281883Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  9 18:55:45.635636 env[1192]: time="2024-02-09T18:55:45.635622502Z" level=info msg="containerd successfully booted in 0.204628s"
Feb  9 18:55:45.635700 systemd[1]: Started containerd.service.
Feb  9 18:55:45.662235 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  9 18:55:45.668642 tar[1189]: ./vrf
Feb  9 18:55:45.710256 tar[1189]: ./bridge
Feb  9 18:55:45.760251 tar[1189]: ./tuning
Feb  9 18:55:45.804478 tar[1189]: ./firewall
Feb  9 18:55:45.854344 tar[1189]: ./host-device
Feb  9 18:55:45.889684 tar[1189]: ./sbr
Feb  9 18:55:45.956114 systemd-networkd[1075]: eth0: Gained IPv6LL
Feb  9 18:55:45.960152 tar[1189]: ./loopback
Feb  9 18:55:46.001446 tar[1189]: ./dhcp
Feb  9 18:55:46.017410 systemd[1]: Finished prepare-critools.service.
Feb  9 18:55:46.082859 tar[1189]: ./ptp
Feb  9 18:55:46.115317 tar[1189]: ./ipvlan
Feb  9 18:55:46.146098 tar[1189]: ./bandwidth
Feb  9 18:55:46.184991 systemd[1]: Finished prepare-cni-plugins.service.
Feb  9 18:55:46.722513 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  9 18:55:46.747698 systemd[1]: Finished sshd-keygen.service.
Feb  9 18:55:46.749722 systemd[1]: Starting issuegen.service...
Feb  9 18:55:46.754254 systemd[1]: issuegen.service: Deactivated successfully.
Feb  9 18:55:46.754431 systemd[1]: Finished issuegen.service.
Feb  9 18:55:46.756055 systemd[1]: Starting systemd-user-sessions.service...
Feb  9 18:55:46.760568 systemd[1]: Finished systemd-user-sessions.service.
Feb  9 18:55:46.762304 systemd[1]: Started getty@tty1.service.
Feb  9 18:55:46.763915 systemd[1]: Started serial-getty@ttyS0.service.
Feb  9 18:55:46.764743 systemd[1]: Reached target getty.target.
Feb  9 18:55:46.765422 systemd[1]: Reached target multi-user.target.
Feb  9 18:55:46.767157 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  9 18:55:46.774237 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  9 18:55:46.774421 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  9 18:55:46.775322 systemd[1]: Startup finished in 6.325s (kernel) + 6.467s (userspace) = 12.793s.
Feb  9 18:55:54.792470 systemd[1]: Created slice system-sshd.slice.
Feb  9 18:55:54.793539 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:33400.service.
Feb  9 18:55:54.828024 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 33400 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc
Feb  9 18:55:54.829655 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:54.838095 systemd-logind[1178]: New session 1 of user core.
Feb  9 18:55:54.838840 systemd[1]: Created slice user-500.slice.
Feb  9 18:55:54.839644 systemd[1]: Starting user-runtime-dir@500.service...
Feb  9 18:55:54.847591 systemd[1]: Finished user-runtime-dir@500.service.
Feb  9 18:55:54.849174 systemd[1]: Starting user@500.service...
Feb  9 18:55:54.851583 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:54.915887 systemd[1272]: Queued start job for default target default.target.
Feb  9 18:55:54.916071 systemd[1272]: Reached target paths.target.
Feb  9 18:55:54.916086 systemd[1272]: Reached target sockets.target.
Feb  9 18:55:54.916097 systemd[1272]: Reached target timers.target.
Feb  9 18:55:54.916108 systemd[1272]: Reached target basic.target.
Feb  9 18:55:54.916143 systemd[1272]: Reached target default.target.
Feb  9 18:55:54.916163 systemd[1272]: Startup finished in 59ms.
Feb  9 18:55:54.916371 systemd[1]: Started user@500.service.
Feb  9 18:55:54.917622 systemd[1]: Started session-1.scope.
Feb  9 18:55:54.966450 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:33410.service.
Feb  9 18:55:54.995841 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 33410 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc
Feb  9 18:55:54.997017 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:55.001785 systemd-logind[1178]: New session 2 of user core.
Feb  9 18:55:55.002554 systemd[1]: Started session-2.scope.
Feb  9 18:55:55.056270 sshd[1282]: pam_unix(sshd:session): session closed for user core
Feb  9 18:55:55.058347 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:33414.service.
Feb  9 18:55:55.058738 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:33410.service: Deactivated successfully.
Feb  9 18:55:55.059467 systemd-logind[1178]: Session 2 logged out. Waiting for processes to exit.
Feb  9 18:55:55.059505 systemd[1]: session-2.scope: Deactivated successfully.
Feb  9 18:55:55.060198 systemd-logind[1178]: Removed session 2.
Feb  9 18:55:55.088409 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc
Feb  9 18:55:55.089549 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:55.092794 systemd-logind[1178]: New session 3 of user core.
Feb  9 18:55:55.093517 systemd[1]: Started session-3.scope.
Feb  9 18:55:55.143134 sshd[1287]: pam_unix(sshd:session): session closed for user core
Feb  9 18:55:55.145157 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:33418.service.
Feb  9 18:55:55.145941 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:33414.service: Deactivated successfully.
Feb  9 18:55:55.146639 systemd[1]: session-3.scope: Deactivated successfully.
Feb  9 18:55:55.146733 systemd-logind[1178]: Session 3 logged out. Waiting for processes to exit.
Feb  9 18:55:55.147585 systemd-logind[1178]: Removed session 3.
Feb  9 18:55:55.173653 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 33418 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc
Feb  9 18:55:55.174646 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:55.177765 systemd-logind[1178]: New session 4 of user core.
Feb  9 18:55:55.178476 systemd[1]: Started session-4.scope.
Feb  9 18:55:55.230551 sshd[1294]: pam_unix(sshd:session): session closed for user core
Feb  9 18:55:55.232772 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:33428.service.
Feb  9 18:55:55.233531 systemd-logind[1178]: Session 4 logged out. Waiting for processes to exit.
Feb  9 18:55:55.233709 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:33418.service: Deactivated successfully.
Feb  9 18:55:55.234300 systemd[1]: session-4.scope: Deactivated successfully.
Feb  9 18:55:55.234681 systemd-logind[1178]: Removed session 4.
Feb  9 18:55:55.262596 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 33428 ssh2: RSA SHA256:ykpv2PfBe3Q14nkyYOIn6pLGnIi82XRDx9K/jsWifZc
Feb  9 18:55:55.263662 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 18:55:55.266740 systemd-logind[1178]: New session 5 of user core.
Feb  9 18:55:55.267318 systemd[1]: Started session-5.scope.
Feb  9 18:55:55.321672 sudo[1307]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  9 18:55:55.321842 sudo[1307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  9 18:55:55.824528 systemd[1]: Reloading.
Feb  9 18:55:55.882080 /usr/lib/systemd/system-generators/torcx-generator[1336]: time="2024-02-09T18:55:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 18:55:55.882104 /usr/lib/systemd/system-generators/torcx-generator[1336]: time="2024-02-09T18:55:55Z" level=info msg="torcx already run"
Feb  9 18:55:55.945913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:55:55.945931 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:55:55.964498 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:55:56.029337 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  9 18:55:56.035513 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  9 18:55:56.035930 systemd[1]: Reached target network-online.target.
Feb  9 18:55:56.037225 systemd[1]: Started kubelet.service.
Feb  9 18:55:56.046635 systemd[1]: Starting coreos-metadata.service...
Feb  9 18:55:56.053381 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb  9 18:55:56.053583 systemd[1]: Finished coreos-metadata.service.
Feb  9 18:55:56.089611 kubelet[1384]: E0209 18:55:56.089490    1384 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  9 18:55:56.091421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 18:55:56.091551 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 18:55:56.310521 systemd[1]: Stopped kubelet.service.
Feb  9 18:55:56.325033 systemd[1]: Reloading.
Feb  9 18:55:56.387442 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-02-09T18:55:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 18:55:56.387479 /usr/lib/systemd/system-generators/torcx-generator[1456]: time="2024-02-09T18:55:56Z" level=info msg="torcx already run"
Feb  9 18:55:56.445777 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 18:55:56.445791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 18:55:56.462349 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 18:55:56.534295 systemd[1]: Started kubelet.service.
Feb  9 18:55:56.579576 kubelet[1504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 18:55:56.579576 kubelet[1504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 18:55:56.580035 kubelet[1504]: I0209 18:55:56.579619    1504 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 18:55:56.581043 kubelet[1504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 18:55:56.581043 kubelet[1504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 18:55:57.087346 kubelet[1504]: I0209 18:55:57.087306    1504 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb  9 18:55:57.087346 kubelet[1504]: I0209 18:55:57.087336    1504 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 18:55:57.087612 kubelet[1504]: I0209 18:55:57.087598    1504 server.go:836] "Client rotation is on, will bootstrap in background"
Feb  9 18:55:57.089293 kubelet[1504]: I0209 18:55:57.089273    1504 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 18:55:57.092949 kubelet[1504]: I0209 18:55:57.092930    1504 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 18:55:57.093315 kubelet[1504]: I0209 18:55:57.093300    1504 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 18:55:57.093413 kubelet[1504]: I0209 18:55:57.093398    1504 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb  9 18:55:57.093527 kubelet[1504]: I0209 18:55:57.093428    1504 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb  9 18:55:57.093527 kubelet[1504]: I0209 18:55:57.093441    1504 container_manager_linux.go:308] "Creating device plugin manager"
Feb  9 18:55:57.093577 kubelet[1504]: I0209 18:55:57.093541    1504 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 18:55:57.096823 kubelet[1504]: I0209 18:55:57.096789    1504 kubelet.go:398] "Attempting to sync node with API server"
Feb  9 18:55:57.096823 kubelet[1504]: I0209 18:55:57.096827    1504 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 18:55:57.096955 kubelet[1504]: I0209 18:55:57.096863    1504 kubelet.go:297] "Adding apiserver pod source"
Feb  9 18:55:57.096955 kubelet[1504]: I0209 18:55:57.096886    1504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 18:55:57.097104 kubelet[1504]: E0209 18:55:57.097087    1504 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:55:57.097176 kubelet[1504]: E0209 18:55:57.097120    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:55:57.097346 kubelet[1504]: I0209 18:55:57.097332    1504 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 18:55:57.097639 kubelet[1504]: W0209 18:55:57.097627    1504 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  9 18:55:57.098032 kubelet[1504]: I0209 18:55:57.098018    1504 server.go:1186] "Started kubelet"
Feb  9 18:55:57.098163 kubelet[1504]: I0209 18:55:57.098145    1504 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 18:55:57.099264 kubelet[1504]: E0209 18:55:57.098914    1504 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 18:55:57.099264 kubelet[1504]: E0209 18:55:57.098940    1504 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 18:55:57.099264 kubelet[1504]: I0209 18:55:57.098961    1504 server.go:451] "Adding debug handlers to kubelet server"
Feb  9 18:55:57.100554 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  9 18:55:57.100655 kubelet[1504]: I0209 18:55:57.100644    1504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 18:55:57.100746 kubelet[1504]: I0209 18:55:57.100730    1504 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb  9 18:55:57.100987 kubelet[1504]: I0209 18:55:57.100946    1504 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 18:55:57.101286 kubelet[1504]: E0209 18:55:57.101265    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:55:57.123873 kubelet[1504]: W0209 18:55:57.123768    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:55:57.123873 kubelet[1504]: E0209 18:55:57.123809    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:55:57.123873 kubelet[1504]: E0209 18:55:57.123868    1504 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:55:57.124097 kubelet[1504]: E0209 18:55:57.123911    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b6340b6083", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 97992323, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 97992323, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.124198 kubelet[1504]: W0209 18:55:57.124156    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:55:57.124198 kubelet[1504]: E0209 18:55:57.124166    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:55:57.124198 kubelet[1504]: W0209 18:55:57.124185    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:55:57.124298 kubelet[1504]: E0209 18:55:57.124222    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:55:57.130410 kubelet[1504]: E0209 18:55:57.130273    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63419a705", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 98927877, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 98927877, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.140098 kubelet[1504]: E0209 18:55:57.140017    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.140279 kubelet[1504]: I0209 18:55:57.140231    1504 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 18:55:57.140279 kubelet[1504]: I0209 18:55:57.140242    1504 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 18:55:57.140279 kubelet[1504]: I0209 18:55:57.140256    1504 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 18:55:57.140654 kubelet[1504]: E0209 18:55:57.140585    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.141285 kubelet[1504]: E0209 18:55:57.141242    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.142439 kubelet[1504]: I0209 18:55:57.142420    1504 policy_none.go:49] "None policy: Start"
Feb  9 18:55:57.142884 kubelet[1504]: I0209 18:55:57.142862    1504 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 18:55:57.142927 kubelet[1504]: I0209 18:55:57.142903    1504 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 18:55:57.148608 kubelet[1504]: I0209 18:55:57.148586    1504 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 18:55:57.148980 kubelet[1504]: I0209 18:55:57.148963    1504 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 18:55:57.150163 kubelet[1504]: E0209 18:55:57.150082    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b6371b1074", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 149352052, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 149352052, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.150337 kubelet[1504]: E0209 18:55:57.150313    1504 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.91\" not found"
Feb  9 18:55:57.202007 kubelet[1504]: I0209 18:55:57.201976    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:55:57.203151 kubelet[1504]: E0209 18:55:57.203124    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:55:57.203224 kubelet[1504]: E0209 18:55:57.203158    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 201913114, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.203931 kubelet[1504]: E0209 18:55:57.203882    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 201922932, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.204544 kubelet[1504]: E0209 18:55:57.204494    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 201926679, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.217301 kubelet[1504]: I0209 18:55:57.217267    1504 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb  9 18:55:57.232428 kubelet[1504]: I0209 18:55:57.232402    1504 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb  9 18:55:57.232428 kubelet[1504]: I0209 18:55:57.232438    1504 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb  9 18:55:57.232580 kubelet[1504]: I0209 18:55:57.232474    1504 kubelet.go:2113] "Starting kubelet main sync loop"
Feb  9 18:55:57.232580 kubelet[1504]: E0209 18:55:57.232532    1504 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb  9 18:55:57.233269 kubelet[1504]: W0209 18:55:57.233257    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:55:57.233394 kubelet[1504]: E0209 18:55:57.233376    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:55:57.324916 kubelet[1504]: E0209 18:55:57.324873    1504 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:55:57.404303 kubelet[1504]: I0209 18:55:57.404184    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:55:57.405459 kubelet[1504]: E0209 18:55:57.405426    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:55:57.405631 kubelet[1504]: E0209 18:55:57.405417    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 404130932, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.406277 kubelet[1504]: E0209 18:55:57.406184    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 404142234, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.499776 kubelet[1504]: E0209 18:55:57.499671    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 404146151, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.726795 kubelet[1504]: E0209 18:55:57.726751    1504 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:55:57.806844 kubelet[1504]: I0209 18:55:57.806802    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:55:57.807942 kubelet[1504]: E0209 18:55:57.807924    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:55:57.807990 kubelet[1504]: E0209 18:55:57.807892    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 806756155, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.899212 kubelet[1504]: E0209 18:55:57.899112    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 806766113, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:57.942536 kubelet[1504]: W0209 18:55:57.942494    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:55:57.942536 kubelet[1504]: E0209 18:55:57.942526    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:55:58.033057 kubelet[1504]: W0209 18:55:58.032961    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:55:58.033057 kubelet[1504]: E0209 18:55:58.032992    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:55:58.097388 kubelet[1504]: E0209 18:55:58.097330    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:55:58.099363 kubelet[1504]: E0209 18:55:58.099265    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 806768538, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:58.528624 kubelet[1504]: E0209 18:55:58.528577    1504 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:55:58.608894 kubelet[1504]: I0209 18:55:58.608864    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:55:58.609959 kubelet[1504]: E0209 18:55:58.609931    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:55:58.610253 kubelet[1504]: E0209 18:55:58.610151    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 58, 608787633, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:58.611108 kubelet[1504]: E0209 18:55:58.611024    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 58, 608798704, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:58.615840 kubelet[1504]: W0209 18:55:58.615812    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:55:58.615904 kubelet[1504]: E0209 18:55:58.615847    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:55:58.682593 kubelet[1504]: W0209 18:55:58.682567    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:55:58.682593 kubelet[1504]: E0209 18:55:58.682584    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:55:58.699525 kubelet[1504]: E0209 18:55:58.699441    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 55, 58, 608825033, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:55:59.098483 kubelet[1504]: E0209 18:55:59.098429    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:00.053319 kubelet[1504]: W0209 18:56:00.053269    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:56:00.053319 kubelet[1504]: E0209 18:56:00.053318    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:56:00.098540 kubelet[1504]: E0209 18:56:00.098514    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:00.129649 kubelet[1504]: E0209 18:56:00.129613    1504 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:56:00.132363 kubelet[1504]: W0209 18:56:00.132346    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:56:00.132425 kubelet[1504]: E0209 18:56:00.132372    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:56:00.211515 kubelet[1504]: I0209 18:56:00.211477    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:56:00.212700 kubelet[1504]: E0209 18:56:00.212615    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 0, 211429887, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:00.212840 kubelet[1504]: E0209 18:56:00.212714    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:56:00.213670 kubelet[1504]: E0209 18:56:00.213613    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 0, 211440838, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:00.214343 kubelet[1504]: E0209 18:56:00.214296    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 0, 211444935, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:01.099038 kubelet[1504]: E0209 18:56:01.098976    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:01.175135 kubelet[1504]: W0209 18:56:01.175092    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:56:01.175135 kubelet[1504]: E0209 18:56:01.175130    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb  9 18:56:01.871373 kubelet[1504]: W0209 18:56:01.871327    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:56:01.871373 kubelet[1504]: E0209 18:56:01.871369    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:56:02.099902 kubelet[1504]: E0209 18:56:02.099808    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:03.100379 kubelet[1504]: E0209 18:56:03.100327    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:03.333323 kubelet[1504]: E0209 18:56:03.333276    1504 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.91" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb  9 18:56:03.414433 kubelet[1504]: I0209 18:56:03.414321    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:56:03.415734 kubelet[1504]: E0209 18:56:03.415713    1504 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.91"
Feb  9 18:56:03.415799 kubelet[1504]: E0209 18:56:03.415725    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636814ed6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.91 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139275478, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 3, 414273117, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636814ed6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:03.416781 kubelet[1504]: E0209 18:56:03.416727    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b636816865", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.91 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139282021, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 3, 414281874, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b636816865" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:03.417499 kubelet[1504]: E0209 18:56:03.417429    1504 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.91.17b246b63681783e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.91", UID:"10.0.0.91", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.91 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.91"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 55, 57, 139286078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 56, 3, 414284348, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.91.17b246b63681783e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb  9 18:56:03.659904 kubelet[1504]: W0209 18:56:03.659866    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:56:03.659904 kubelet[1504]: E0209 18:56:03.659904    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.91" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb  9 18:56:04.100835 kubelet[1504]: E0209 18:56:04.100778    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:05.038329 kubelet[1504]: W0209 18:56:05.038274    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:56:05.038329 kubelet[1504]: E0209 18:56:05.038322    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb  9 18:56:05.101904 kubelet[1504]: E0209 18:56:05.101854    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:05.629017 kubelet[1504]: W0209 18:56:05.628973    1504 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:56:05.629017 kubelet[1504]: E0209 18:56:05.629009    1504 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb  9 18:56:06.102120 kubelet[1504]: E0209 18:56:06.102064    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:07.090124 kubelet[1504]: I0209 18:56:07.090063    1504 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb  9 18:56:07.102356 kubelet[1504]: E0209 18:56:07.102337    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:07.150425 kubelet[1504]: E0209 18:56:07.150389    1504 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.91\" not found"
Feb  9 18:56:07.494958 kubelet[1504]: E0209 18:56:07.494919    1504 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.91" not found
Feb  9 18:56:08.103064 kubelet[1504]: E0209 18:56:08.103022    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:08.727357 kubelet[1504]: E0209 18:56:08.727325    1504 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.91" not found
Feb  9 18:56:09.104095 kubelet[1504]: E0209 18:56:09.103972    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:09.736836 kubelet[1504]: E0209 18:56:09.736788    1504 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.91\" not found" node="10.0.0.91"
Feb  9 18:56:09.816878 kubelet[1504]: I0209 18:56:09.816853    1504 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.91"
Feb  9 18:56:10.104636 kubelet[1504]: E0209 18:56:10.104512    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:10.128625 kubelet[1504]: I0209 18:56:10.128584    1504 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.91"
Feb  9 18:56:10.134993 kubelet[1504]: E0209 18:56:10.134967    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.236129 kubelet[1504]: E0209 18:56:10.236080    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.336621 kubelet[1504]: E0209 18:56:10.336577    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.437399 kubelet[1504]: E0209 18:56:10.437361    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.537938 kubelet[1504]: E0209 18:56:10.537893    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.547760 sudo[1307]: pam_unix(sudo:session): session closed for user root
Feb  9 18:56:10.549283 sshd[1301]: pam_unix(sshd:session): session closed for user core
Feb  9 18:56:10.551738 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:33428.service: Deactivated successfully.
Feb  9 18:56:10.552870 systemd[1]: session-5.scope: Deactivated successfully.
Feb  9 18:56:10.553357 systemd-logind[1178]: Session 5 logged out. Waiting for processes to exit.
Feb  9 18:56:10.554033 systemd-logind[1178]: Removed session 5.
Feb  9 18:56:10.638649 kubelet[1504]: E0209 18:56:10.638598    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.739655 kubelet[1504]: E0209 18:56:10.739538    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.840228 kubelet[1504]: E0209 18:56:10.840181    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:10.940943 kubelet[1504]: E0209 18:56:10.940873    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.041579 kubelet[1504]: E0209 18:56:11.041417    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.105213 kubelet[1504]: E0209 18:56:11.105120    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:11.142394 kubelet[1504]: E0209 18:56:11.142331    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.243254 kubelet[1504]: E0209 18:56:11.243192    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.343640 kubelet[1504]: E0209 18:56:11.343516    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.444003 kubelet[1504]: E0209 18:56:11.443971    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.544579 kubelet[1504]: E0209 18:56:11.544526    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.645556 kubelet[1504]: E0209 18:56:11.645433    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.746337 kubelet[1504]: E0209 18:56:11.746275    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.846948 kubelet[1504]: E0209 18:56:11.846900    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:11.947446 kubelet[1504]: E0209 18:56:11.947409    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.048018 kubelet[1504]: E0209 18:56:12.047974    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.105560 kubelet[1504]: E0209 18:56:12.105539    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:12.148758 kubelet[1504]: E0209 18:56:12.148738    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.249353 kubelet[1504]: E0209 18:56:12.249246    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.349492 kubelet[1504]: E0209 18:56:12.349431    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.450032 kubelet[1504]: E0209 18:56:12.449979    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.550612 kubelet[1504]: E0209 18:56:12.550484    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.651253 kubelet[1504]: E0209 18:56:12.651201    1504 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.91\" not found"
Feb  9 18:56:12.752091 kubelet[1504]: I0209 18:56:12.752056    1504 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb  9 18:56:12.752523 env[1192]: time="2024-02-09T18:56:12.752480013Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  9 18:56:12.752882 kubelet[1504]: I0209 18:56:12.752667    1504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb  9 18:56:13.105960 kubelet[1504]: E0209 18:56:13.105910    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:13.105960 kubelet[1504]: I0209 18:56:13.105926    1504 apiserver.go:52] "Watching apiserver"
Feb  9 18:56:13.108710 kubelet[1504]: I0209 18:56:13.108678    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:56:13.108832 kubelet[1504]: I0209 18:56:13.108800    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:56:13.202456 kubelet[1504]: I0209 18:56:13.202407    1504 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 18:56:13.287427 kubelet[1504]: I0209 18:56:13.287389    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-bpf-maps\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287427 kubelet[1504]: I0209 18:56:13.287428    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-kernel\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287600 kubelet[1504]: I0209 18:56:13.287456    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5-kube-proxy\") pod \"kube-proxy-59qhl\" (UID: \"f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5\") " pod="kube-system/kube-proxy-59qhl"
Feb  9 18:56:13.287600 kubelet[1504]: I0209 18:56:13.287563    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5-lib-modules\") pod \"kube-proxy-59qhl\" (UID: \"f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5\") " pod="kube-system/kube-proxy-59qhl"
Feb  9 18:56:13.287682 kubelet[1504]: I0209 18:56:13.287629    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-hostproc\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287682 kubelet[1504]: I0209 18:56:13.287656    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-cgroup\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287682 kubelet[1504]: I0209 18:56:13.287672    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cni-path\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287759 kubelet[1504]: I0209 18:56:13.287690    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-etc-cni-netd\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287759 kubelet[1504]: I0209 18:56:13.287710    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-xtables-lock\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287759 kubelet[1504]: I0209 18:56:13.287737    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-config-path\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287899 kubelet[1504]: I0209 18:56:13.287871    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-net\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287934 kubelet[1504]: I0209 18:56:13.287906    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfck\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-kube-api-access-dcfck\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.287959 kubelet[1504]: I0209 18:56:13.287941    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5-xtables-lock\") pod \"kube-proxy-59qhl\" (UID: \"f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5\") " pod="kube-system/kube-proxy-59qhl"
Feb  9 18:56:13.288016 kubelet[1504]: I0209 18:56:13.287979    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgxh\" (UniqueName: \"kubernetes.io/projected/f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5-kube-api-access-crgxh\") pod \"kube-proxy-59qhl\" (UID: \"f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5\") " pod="kube-system/kube-proxy-59qhl"
Feb  9 18:56:13.288224 kubelet[1504]: I0209 18:56:13.288039    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-run\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.288224 kubelet[1504]: I0209 18:56:13.288074    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-lib-modules\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.288224 kubelet[1504]: I0209 18:56:13.288105    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc790184-c850-4d0d-bdea-7693a34410bf-clustermesh-secrets\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.288224 kubelet[1504]: I0209 18:56:13.288161    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-hubble-tls\") pod \"cilium-n8tw9\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") " pod="kube-system/cilium-n8tw9"
Feb  9 18:56:13.288224 kubelet[1504]: I0209 18:56:13.288192    1504 reconciler.go:41] "Reconciler: start to sync state"
Feb  9 18:56:13.413851 kubelet[1504]: E0209 18:56:13.413741    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:13.414642 env[1192]: time="2024-02-09T18:56:13.414584098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8tw9,Uid:dc790184-c850-4d0d-bdea-7693a34410bf,Namespace:kube-system,Attempt:0,}"
Feb  9 18:56:13.712498 kubelet[1504]: E0209 18:56:13.712467    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:13.713074 env[1192]: time="2024-02-09T18:56:13.713032318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59qhl,Uid:f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5,Namespace:kube-system,Attempt:0,}"
Feb  9 18:56:14.107177 kubelet[1504]: E0209 18:56:14.107065    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:14.572273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465931050.mount: Deactivated successfully.
Feb  9 18:56:14.982164 env[1192]: time="2024-02-09T18:56:14.982111832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.107865 kubelet[1504]: E0209 18:56:15.107827    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:15.122523 env[1192]: time="2024-02-09T18:56:15.122484100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.198684 env[1192]: time="2024-02-09T18:56:15.198653698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.318645 env[1192]: time="2024-02-09T18:56:15.318566685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.399767 env[1192]: time="2024-02-09T18:56:15.399745445Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.478760 env[1192]: time="2024-02-09T18:56:15.478719030Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.506460 env[1192]: time="2024-02-09T18:56:15.506433976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.528183 env[1192]: time="2024-02-09T18:56:15.528149973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:15.560149 env[1192]: time="2024-02-09T18:56:15.560065415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:56:15.560149 env[1192]: time="2024-02-09T18:56:15.560123744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:56:15.560149 env[1192]: time="2024-02-09T18:56:15.560136899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:56:15.560371 env[1192]: time="2024-02-09T18:56:15.560330482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55 pid=1597 runtime=io.containerd.runc.v2
Feb  9 18:56:15.565086 env[1192]: time="2024-02-09T18:56:15.564715474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:56:15.565086 env[1192]: time="2024-02-09T18:56:15.564859844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:56:15.565086 env[1192]: time="2024-02-09T18:56:15.564872718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:56:15.565244 env[1192]: time="2024-02-09T18:56:15.565121615Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ec80805ca4aacfa8e1cf3a70cd51d493a609f59a51c11e2fadb528c9b8a6c5e pid=1610 runtime=io.containerd.runc.v2
Feb  9 18:56:15.596943 systemd[1]: run-containerd-runc-k8s.io-7ec80805ca4aacfa8e1cf3a70cd51d493a609f59a51c11e2fadb528c9b8a6c5e-runc.6NFbHf.mount: Deactivated successfully.
Feb  9 18:56:15.626291 env[1192]: time="2024-02-09T18:56:15.626243438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59qhl,Uid:f24ad12a-8d00-4bfc-b17e-eb9eca6d6eb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ec80805ca4aacfa8e1cf3a70cd51d493a609f59a51c11e2fadb528c9b8a6c5e\""
Feb  9 18:56:15.627138 kubelet[1504]: E0209 18:56:15.627113    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:15.627903 env[1192]: time="2024-02-09T18:56:15.627871631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n8tw9,Uid:dc790184-c850-4d0d-bdea-7693a34410bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\""
Feb  9 18:56:15.628004 env[1192]: time="2024-02-09T18:56:15.627980365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb  9 18:56:15.628400 kubelet[1504]: E0209 18:56:15.628373    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:16.108592 kubelet[1504]: E0209 18:56:16.108551    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:16.978520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891639425.mount: Deactivated successfully.
Feb  9 18:56:17.097165 kubelet[1504]: E0209 18:56:17.097088    1504 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:17.109464 kubelet[1504]: E0209 18:56:17.109401    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:17.631620 env[1192]: time="2024-02-09T18:56:17.631565697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:17.635889 env[1192]: time="2024-02-09T18:56:17.635838008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:17.637627 env[1192]: time="2024-02-09T18:56:17.637594782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:17.639249 env[1192]: time="2024-02-09T18:56:17.639228475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:17.639554 env[1192]: time="2024-02-09T18:56:17.639532184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb  9 18:56:17.640365 env[1192]: time="2024-02-09T18:56:17.640342033Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  9 18:56:17.641280 env[1192]: time="2024-02-09T18:56:17.641254524Z" level=info msg="CreateContainer within sandbox \"7ec80805ca4aacfa8e1cf3a70cd51d493a609f59a51c11e2fadb528c9b8a6c5e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  9 18:56:17.654454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554476566.mount: Deactivated successfully.
Feb  9 18:56:17.657916 env[1192]: time="2024-02-09T18:56:17.657871090Z" level=info msg="CreateContainer within sandbox \"7ec80805ca4aacfa8e1cf3a70cd51d493a609f59a51c11e2fadb528c9b8a6c5e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03fc66da8cbfb8b41becd3ccfec511b5bd833786394810089fb79bbb61128837\""
Feb  9 18:56:17.658575 env[1192]: time="2024-02-09T18:56:17.658551245Z" level=info msg="StartContainer for \"03fc66da8cbfb8b41becd3ccfec511b5bd833786394810089fb79bbb61128837\""
Feb  9 18:56:17.854096 env[1192]: time="2024-02-09T18:56:17.854033344Z" level=info msg="StartContainer for \"03fc66da8cbfb8b41becd3ccfec511b5bd833786394810089fb79bbb61128837\" returns successfully"
Feb  9 18:56:18.110466 kubelet[1504]: E0209 18:56:18.110422    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:18.265950 kubelet[1504]: E0209 18:56:18.265912    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:18.305406 kubelet[1504]: I0209 18:56:18.305351    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-59qhl" podStartSLOduration=-9.223372028549494e+09 pod.CreationTimestamp="2024-02-09 18:56:10 +0000 UTC" firstStartedPulling="2024-02-09 18:56:15.627571939 +0000 UTC m=+19.090322410" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:56:18.305029267 +0000 UTC m=+21.767779768" watchObservedRunningTime="2024-02-09 18:56:18.305281904 +0000 UTC m=+21.768032375"
Feb  9 18:56:19.110678 kubelet[1504]: E0209 18:56:19.110586    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:19.266836 kubelet[1504]: E0209 18:56:19.266785    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:20.111094 kubelet[1504]: E0209 18:56:20.111049    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:21.112073 kubelet[1504]: E0209 18:56:21.112041    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:22.113193 kubelet[1504]: E0209 18:56:22.113129    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:23.113724 kubelet[1504]: E0209 18:56:23.113683    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:24.114182 kubelet[1504]: E0209 18:56:24.114091    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:24.190436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075329308.mount: Deactivated successfully.
Feb  9 18:56:25.114674 kubelet[1504]: E0209 18:56:25.114631    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:26.114912 kubelet[1504]: E0209 18:56:26.114879    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:27.115445 kubelet[1504]: E0209 18:56:27.115402    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:28.116189 kubelet[1504]: E0209 18:56:28.116139    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:29.116967 kubelet[1504]: E0209 18:56:29.116909    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:29.821161 env[1192]: time="2024-02-09T18:56:29.821093388Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:29.824839 env[1192]: time="2024-02-09T18:56:29.824768754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:29.826347 env[1192]: time="2024-02-09T18:56:29.826309733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:29.826862 env[1192]: time="2024-02-09T18:56:29.826828961Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb  9 18:56:29.828468 env[1192]: time="2024-02-09T18:56:29.828433461Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:56:29.841783 env[1192]: time="2024-02-09T18:56:29.841741329Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\""
Feb  9 18:56:29.842200 env[1192]: time="2024-02-09T18:56:29.842173921Z" level=info msg="StartContainer for \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\""
Feb  9 18:56:29.915848 env[1192]: time="2024-02-09T18:56:29.914636460Z" level=info msg="StartContainer for \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\" returns successfully"
Feb  9 18:56:30.118109 kubelet[1504]: E0209 18:56:30.117972    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:30.307570 kubelet[1504]: E0209 18:56:30.283938    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:30.545217 env[1192]: time="2024-02-09T18:56:30.545163582Z" level=info msg="shim disconnected" id=f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87
Feb  9 18:56:30.545217 env[1192]: time="2024-02-09T18:56:30.545213035Z" level=warning msg="cleaning up after shim disconnected" id=f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87 namespace=k8s.io
Feb  9 18:56:30.545217 env[1192]: time="2024-02-09T18:56:30.545221822Z" level=info msg="cleaning up dead shim"
Feb  9 18:56:30.558275 env[1192]: time="2024-02-09T18:56:30.558213337Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1892 runtime=io.containerd.runc.v2\n"
Feb  9 18:56:30.835788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87-rootfs.mount: Deactivated successfully.
Feb  9 18:56:30.983540 update_engine[1180]: I0209 18:56:30.983479  1180 update_attempter.cc:509] Updating boot flags...
Feb  9 18:56:31.118741 kubelet[1504]: E0209 18:56:31.118620    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:31.286306 kubelet[1504]: E0209 18:56:31.286278    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:31.288398 env[1192]: time="2024-02-09T18:56:31.288358354Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 18:56:31.745912 env[1192]: time="2024-02-09T18:56:31.745851074Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\""
Feb  9 18:56:31.746231 env[1192]: time="2024-02-09T18:56:31.746203933Z" level=info msg="StartContainer for \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\""
Feb  9 18:56:31.798947 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 18:56:31.799173 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 18:56:31.799312 systemd[1]: Stopping systemd-sysctl.service...
Feb  9 18:56:31.800703 systemd[1]: Starting systemd-sysctl.service...
Feb  9 18:56:31.806181 systemd[1]: Finished systemd-sysctl.service.
Feb  9 18:56:31.945259 env[1192]: time="2024-02-09T18:56:31.945207239Z" level=info msg="StartContainer for \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\" returns successfully"
Feb  9 18:56:31.958948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143-rootfs.mount: Deactivated successfully.
Feb  9 18:56:32.119134 kubelet[1504]: E0209 18:56:32.119026    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:32.195957 env[1192]: time="2024-02-09T18:56:32.195911062Z" level=info msg="shim disconnected" id=dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143
Feb  9 18:56:32.196069 env[1192]: time="2024-02-09T18:56:32.195957380Z" level=warning msg="cleaning up after shim disconnected" id=dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143 namespace=k8s.io
Feb  9 18:56:32.196069 env[1192]: time="2024-02-09T18:56:32.195972338Z" level=info msg="cleaning up dead shim"
Feb  9 18:56:32.201833 env[1192]: time="2024-02-09T18:56:32.201781976Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1972 runtime=io.containerd.runc.v2\n"
Feb  9 18:56:32.322020 kubelet[1504]: E0209 18:56:32.321999    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:32.323330 env[1192]: time="2024-02-09T18:56:32.323298339Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 18:56:32.844914 env[1192]: time="2024-02-09T18:56:32.844867290Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\""
Feb  9 18:56:32.845356 env[1192]: time="2024-02-09T18:56:32.845305932Z" level=info msg="StartContainer for \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\""
Feb  9 18:56:32.895679 env[1192]: time="2024-02-09T18:56:32.895615228Z" level=info msg="StartContainer for \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\" returns successfully"
Feb  9 18:56:32.910036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d-rootfs.mount: Deactivated successfully.
Feb  9 18:56:32.916732 env[1192]: time="2024-02-09T18:56:32.916688261Z" level=info msg="shim disconnected" id=6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d
Feb  9 18:56:32.916732 env[1192]: time="2024-02-09T18:56:32.916726354Z" level=warning msg="cleaning up after shim disconnected" id=6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d namespace=k8s.io
Feb  9 18:56:32.916732 env[1192]: time="2024-02-09T18:56:32.916737865Z" level=info msg="cleaning up dead shim"
Feb  9 18:56:32.922559 env[1192]: time="2024-02-09T18:56:32.922515071Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2028 runtime=io.containerd.runc.v2\n"
Feb  9 18:56:33.119562 kubelet[1504]: E0209 18:56:33.119427    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:33.325313 kubelet[1504]: E0209 18:56:33.325280    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:33.326998 env[1192]: time="2024-02-09T18:56:33.326962321Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 18:56:33.594348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711550230.mount: Deactivated successfully.
Feb  9 18:56:33.735191 env[1192]: time="2024-02-09T18:56:33.735130364Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\""
Feb  9 18:56:33.735645 env[1192]: time="2024-02-09T18:56:33.735599823Z" level=info msg="StartContainer for \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\""
Feb  9 18:56:33.830833 env[1192]: time="2024-02-09T18:56:33.830754898Z" level=info msg="StartContainer for \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\" returns successfully"
Feb  9 18:56:33.895599 env[1192]: time="2024-02-09T18:56:33.895492009Z" level=info msg="shim disconnected" id=b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24
Feb  9 18:56:33.895599 env[1192]: time="2024-02-09T18:56:33.895537646Z" level=warning msg="cleaning up after shim disconnected" id=b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24 namespace=k8s.io
Feb  9 18:56:33.895599 env[1192]: time="2024-02-09T18:56:33.895545581Z" level=info msg="cleaning up dead shim"
Feb  9 18:56:33.901114 env[1192]: time="2024-02-09T18:56:33.901064038Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2083 runtime=io.containerd.runc.v2\n"
Feb  9 18:56:34.120159 kubelet[1504]: E0209 18:56:34.120111    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:34.327800 kubelet[1504]: E0209 18:56:34.327770    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:34.329643 env[1192]: time="2024-02-09T18:56:34.329601042Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 18:56:34.401850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671317188.mount: Deactivated successfully.
Feb  9 18:56:34.459116 env[1192]: time="2024-02-09T18:56:34.459051776Z" level=info msg="CreateContainer within sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\""
Feb  9 18:56:34.459535 env[1192]: time="2024-02-09T18:56:34.459511688Z" level=info msg="StartContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\""
Feb  9 18:56:34.533691 env[1192]: time="2024-02-09T18:56:34.533631697Z" level=info msg="StartContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" returns successfully"
Feb  9 18:56:34.742008 kubelet[1504]: I0209 18:56:34.741973    1504 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  9 18:56:34.965843 kernel: Initializing XFRM netlink socket
Feb  9 18:56:35.121020 kubelet[1504]: E0209 18:56:35.120887    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:35.332447 kubelet[1504]: E0209 18:56:35.332419    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:35.344238 kubelet[1504]: I0209 18:56:35.344201    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-n8tw9" podStartSLOduration=-9.223372011510614e+09 pod.CreationTimestamp="2024-02-09 18:56:10 +0000 UTC" firstStartedPulling="2024-02-09 18:56:15.628647737 +0000 UTC m=+19.091398208" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:56:35.34401411 +0000 UTC m=+38.806764601" watchObservedRunningTime="2024-02-09 18:56:35.344160476 +0000 UTC m=+38.806910947"
Feb  9 18:56:36.121511 kubelet[1504]: E0209 18:56:36.121438    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:36.333613 kubelet[1504]: E0209 18:56:36.333573    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:36.579470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  9 18:56:36.579007 systemd-networkd[1075]: cilium_host: Link UP
Feb  9 18:56:36.579108 systemd-networkd[1075]: cilium_net: Link UP
Feb  9 18:56:36.579111 systemd-networkd[1075]: cilium_net: Gained carrier
Feb  9 18:56:36.579224 systemd-networkd[1075]: cilium_host: Gained carrier
Feb  9 18:56:36.579346 systemd-networkd[1075]: cilium_host: Gained IPv6LL
Feb  9 18:56:36.648651 systemd-networkd[1075]: cilium_vxlan: Link UP
Feb  9 18:56:36.648659 systemd-networkd[1075]: cilium_vxlan: Gained carrier
Feb  9 18:56:36.770947 systemd-networkd[1075]: cilium_net: Gained IPv6LL
Feb  9 18:56:36.870854 kernel: NET: Registered PF_ALG protocol family
Feb  9 18:56:37.097228 kubelet[1504]: E0209 18:56:37.097176    1504 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:37.122452 kubelet[1504]: E0209 18:56:37.122405    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:37.335391 kubelet[1504]: E0209 18:56:37.335354    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:37.393860 systemd-networkd[1075]: lxc_health: Link UP
Feb  9 18:56:37.402845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 18:56:37.402838 systemd-networkd[1075]: lxc_health: Gained carrier
Feb  9 18:56:37.493349 kubelet[1504]: I0209 18:56:37.493301    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:56:37.632586 kubelet[1504]: I0209 18:56:37.632529    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t762l\" (UniqueName: \"kubernetes.io/projected/ae71d365-7cee-4ace-b80b-de194438b0fb-kube-api-access-t762l\") pod \"nginx-deployment-8ffc5cf85-6z6wp\" (UID: \"ae71d365-7cee-4ace-b80b-de194438b0fb\") " pod="default/nginx-deployment-8ffc5cf85-6z6wp"
Feb  9 18:56:37.797541 env[1192]: time="2024-02-09T18:56:37.797492206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6z6wp,Uid:ae71d365-7cee-4ace-b80b-de194438b0fb,Namespace:default,Attempt:0,}"
Feb  9 18:56:37.858963 systemd-networkd[1075]: cilium_vxlan: Gained IPv6LL
Feb  9 18:56:38.121781 systemd-networkd[1075]: lxcddb18ab9a049: Link UP
Feb  9 18:56:38.123286 kubelet[1504]: E0209 18:56:38.123236    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:38.131849 kernel: eth0: renamed from tmp64321
Feb  9 18:56:38.157668 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:56:38.157738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcddb18ab9a049: link becomes ready
Feb  9 18:56:38.157066 systemd-networkd[1075]: lxcddb18ab9a049: Gained carrier
Feb  9 18:56:38.336656 kubelet[1504]: E0209 18:56:38.336627    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:39.123669 kubelet[1504]: E0209 18:56:39.123598    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:39.203002 systemd-networkd[1075]: lxc_health: Gained IPv6LL
Feb  9 18:56:39.338538 kubelet[1504]: E0209 18:56:39.338500    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:39.714967 systemd-networkd[1075]: lxcddb18ab9a049: Gained IPv6LL
Feb  9 18:56:40.124683 kubelet[1504]: E0209 18:56:40.124558    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:40.340616 kubelet[1504]: E0209 18:56:40.340584    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:56:41.125654 kubelet[1504]: E0209 18:56:41.125602    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:41.686908 env[1192]: time="2024-02-09T18:56:41.686832363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:56:41.686908 env[1192]: time="2024-02-09T18:56:41.686872118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:56:41.686908 env[1192]: time="2024-02-09T18:56:41.686882288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:56:41.687321 env[1192]: time="2024-02-09T18:56:41.687010499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64321eaee34a0e214c45695e362a7349d5de50471f3ac30d6bb6aa90aa6436f0 pid=2628 runtime=io.containerd.runc.v2
Feb  9 18:56:41.755380 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:56:41.779455 env[1192]: time="2024-02-09T18:56:41.779414523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6z6wp,Uid:ae71d365-7cee-4ace-b80b-de194438b0fb,Namespace:default,Attempt:0,} returns sandbox id \"64321eaee34a0e214c45695e362a7349d5de50471f3ac30d6bb6aa90aa6436f0\""
Feb  9 18:56:41.781197 env[1192]: time="2024-02-09T18:56:41.781168302Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  9 18:56:42.126233 kubelet[1504]: E0209 18:56:42.126112    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:43.126417 kubelet[1504]: E0209 18:56:43.126362    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:44.127439 kubelet[1504]: E0209 18:56:44.127376    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:45.128574 kubelet[1504]: E0209 18:56:45.128505    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:45.574348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount612884449.mount: Deactivated successfully.
Feb  9 18:56:46.129596 kubelet[1504]: E0209 18:56:46.129535    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:46.843197 env[1192]: time="2024-02-09T18:56:46.843142495Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:46.844984 env[1192]: time="2024-02-09T18:56:46.844955110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:46.847435 env[1192]: time="2024-02-09T18:56:46.847413791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:46.848955 env[1192]: time="2024-02-09T18:56:46.848923344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:46.849411 env[1192]: time="2024-02-09T18:56:46.849388480Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb  9 18:56:46.850772 env[1192]: time="2024-02-09T18:56:46.850740878Z" level=info msg="CreateContainer within sandbox \"64321eaee34a0e214c45695e362a7349d5de50471f3ac30d6bb6aa90aa6436f0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb  9 18:56:46.860667 env[1192]: time="2024-02-09T18:56:46.860623564Z" level=info msg="CreateContainer within sandbox \"64321eaee34a0e214c45695e362a7349d5de50471f3ac30d6bb6aa90aa6436f0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5d43e90388808693f89bec29e599db02c831fe8e40799344657be3439121d7cf\""
Feb  9 18:56:46.861046 env[1192]: time="2024-02-09T18:56:46.861021613Z" level=info msg="StartContainer for \"5d43e90388808693f89bec29e599db02c831fe8e40799344657be3439121d7cf\""
Feb  9 18:56:46.938632 env[1192]: time="2024-02-09T18:56:46.938574958Z" level=info msg="StartContainer for \"5d43e90388808693f89bec29e599db02c831fe8e40799344657be3439121d7cf\" returns successfully"
Feb  9 18:56:47.130085 kubelet[1504]: E0209 18:56:47.129957    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:47.359599 kubelet[1504]: I0209 18:56:47.359573    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-6z6wp" podStartSLOduration=-9.223372026495234e+09 pod.CreationTimestamp="2024-02-09 18:56:37 +0000 UTC" firstStartedPulling="2024-02-09 18:56:41.780780311 +0000 UTC m=+45.243530782" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:56:47.359364634 +0000 UTC m=+50.822115105" watchObservedRunningTime="2024-02-09 18:56:47.359542719 +0000 UTC m=+50.822293210"
Feb  9 18:56:48.130185 kubelet[1504]: E0209 18:56:48.130128    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:49.130290 kubelet[1504]: E0209 18:56:49.130235    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:49.225524 kubelet[1504]: I0209 18:56:49.225495    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:56:49.395179 kubelet[1504]: I0209 18:56:49.395066    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9b7cb58a-3d73-4970-82bb-6563c256b2c9-data\") pod \"nfs-server-provisioner-0\" (UID: \"9b7cb58a-3d73-4970-82bb-6563c256b2c9\") " pod="default/nfs-server-provisioner-0"
Feb  9 18:56:49.395179 kubelet[1504]: I0209 18:56:49.395116    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85zq\" (UniqueName: \"kubernetes.io/projected/9b7cb58a-3d73-4970-82bb-6563c256b2c9-kube-api-access-h85zq\") pod \"nfs-server-provisioner-0\" (UID: \"9b7cb58a-3d73-4970-82bb-6563c256b2c9\") " pod="default/nfs-server-provisioner-0"
Feb  9 18:56:49.528158 env[1192]: time="2024-02-09T18:56:49.528111553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9b7cb58a-3d73-4970-82bb-6563c256b2c9,Namespace:default,Attempt:0,}"
Feb  9 18:56:49.998735 systemd-networkd[1075]: lxca2e4ec6f84b2: Link UP
Feb  9 18:56:50.007851 kernel: eth0: renamed from tmpce070
Feb  9 18:56:50.013488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:56:50.013532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca2e4ec6f84b2: link becomes ready
Feb  9 18:56:50.013599 systemd-networkd[1075]: lxca2e4ec6f84b2: Gained carrier
Feb  9 18:56:50.131348 kubelet[1504]: E0209 18:56:50.131273    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:50.255334 env[1192]: time="2024-02-09T18:56:50.255142786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:56:50.255334 env[1192]: time="2024-02-09T18:56:50.255188512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:56:50.255334 env[1192]: time="2024-02-09T18:56:50.255198221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:56:50.255565 env[1192]: time="2024-02-09T18:56:50.255435577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce070594c2d437dbfaee8621a93820bb61d45fee5d7cb2594fbc1614db78da8d pid=2808 runtime=io.containerd.runc.v2
Feb  9 18:56:50.298734 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:56:50.329809 env[1192]: time="2024-02-09T18:56:50.329748962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9b7cb58a-3d73-4970-82bb-6563c256b2c9,Namespace:default,Attempt:0,} returns sandbox id \"ce070594c2d437dbfaee8621a93820bb61d45fee5d7cb2594fbc1614db78da8d\""
Feb  9 18:56:50.331268 env[1192]: time="2024-02-09T18:56:50.331207587Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb  9 18:56:51.131720 kubelet[1504]: E0209 18:56:51.131669    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:51.939014 systemd-networkd[1075]: lxca2e4ec6f84b2: Gained IPv6LL
Feb  9 18:56:52.132009 kubelet[1504]: E0209 18:56:52.131949    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:53.133020 kubelet[1504]: E0209 18:56:53.132960    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:53.379285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847998024.mount: Deactivated successfully.
Feb  9 18:56:54.133381 kubelet[1504]: E0209 18:56:54.133332    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:55.133504 kubelet[1504]: E0209 18:56:55.133449    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:56.133766 kubelet[1504]: E0209 18:56:56.133718    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:56.799020 env[1192]: time="2024-02-09T18:56:56.798948660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:56.800555 env[1192]: time="2024-02-09T18:56:56.800528709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:56.802349 env[1192]: time="2024-02-09T18:56:56.802302303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:56.803769 env[1192]: time="2024-02-09T18:56:56.803738954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:56:56.804568 env[1192]: time="2024-02-09T18:56:56.804531514Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb  9 18:56:56.806631 env[1192]: time="2024-02-09T18:56:56.806594741Z" level=info msg="CreateContainer within sandbox \"ce070594c2d437dbfaee8621a93820bb61d45fee5d7cb2594fbc1614db78da8d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb  9 18:56:56.817376 env[1192]: time="2024-02-09T18:56:56.817335339Z" level=info msg="CreateContainer within sandbox \"ce070594c2d437dbfaee8621a93820bb61d45fee5d7cb2594fbc1614db78da8d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b303955d4b91592e3e7cfa648f62dcdc0055e1b3810f6b974c4674420142a6fd\""
Feb  9 18:56:56.817803 env[1192]: time="2024-02-09T18:56:56.817777880Z" level=info msg="StartContainer for \"b303955d4b91592e3e7cfa648f62dcdc0055e1b3810f6b974c4674420142a6fd\""
Feb  9 18:56:56.858550 env[1192]: time="2024-02-09T18:56:56.858508044Z" level=info msg="StartContainer for \"b303955d4b91592e3e7cfa648f62dcdc0055e1b3810f6b974c4674420142a6fd\" returns successfully"
Feb  9 18:56:57.098168 kubelet[1504]: E0209 18:56:57.098020    1504 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:57.134391 kubelet[1504]: E0209 18:56:57.134335    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:57.387063 kubelet[1504]: I0209 18:56:57.386932    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372028467882e+09 pod.CreationTimestamp="2024-02-09 18:56:49 +0000 UTC" firstStartedPulling="2024-02-09 18:56:50.33094364 +0000 UTC m=+53.793694101" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:56:57.386597197 +0000 UTC m=+60.849347698" watchObservedRunningTime="2024-02-09 18:56:57.386893154 +0000 UTC m=+60.849643625"
Feb  9 18:56:58.134996 kubelet[1504]: E0209 18:56:58.134925    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:56:59.135505 kubelet[1504]: E0209 18:56:59.135447    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:00.135637 kubelet[1504]: E0209 18:57:00.135584    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:01.136161 kubelet[1504]: E0209 18:57:01.136108    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:02.136770 kubelet[1504]: E0209 18:57:02.136728    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:03.137760 kubelet[1504]: E0209 18:57:03.137695    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:04.138438 kubelet[1504]: E0209 18:57:04.138378    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:05.138732 kubelet[1504]: E0209 18:57:05.138682    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:06.139128 kubelet[1504]: E0209 18:57:06.139046    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:06.226584 kubelet[1504]: I0209 18:57:06.226526    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:57:06.385549 kubelet[1504]: I0209 18:57:06.385483    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6397a635-5f91-4281-866c-d778471998ca\" (UniqueName: \"kubernetes.io/nfs/e8de5442-92ce-4a23-ac39-8879c0b20849-pvc-6397a635-5f91-4281-866c-d778471998ca\") pod \"test-pod-1\" (UID: \"e8de5442-92ce-4a23-ac39-8879c0b20849\") " pod="default/test-pod-1"
Feb  9 18:57:06.385549 kubelet[1504]: I0209 18:57:06.385553    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jvg\" (UniqueName: \"kubernetes.io/projected/e8de5442-92ce-4a23-ac39-8879c0b20849-kube-api-access-v7jvg\") pod \"test-pod-1\" (UID: \"e8de5442-92ce-4a23-ac39-8879c0b20849\") " pod="default/test-pod-1"
Feb  9 18:57:06.507846 kernel: FS-Cache: Loaded
Feb  9 18:57:06.541286 kernel: RPC: Registered named UNIX socket transport module.
Feb  9 18:57:06.541344 kernel: RPC: Registered udp transport module.
Feb  9 18:57:06.541363 kernel: RPC: Registered tcp transport module.
Feb  9 18:57:06.542370 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb  9 18:57:06.579846 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb  9 18:57:06.760201 kernel: NFS: Registering the id_resolver key type
Feb  9 18:57:06.760360 kernel: Key type id_resolver registered
Feb  9 18:57:06.760379 kernel: Key type id_legacy registered
Feb  9 18:57:06.780544 nfsidmap[2949]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb  9 18:57:06.783324 nfsidmap[2952]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb  9 18:57:06.830643 env[1192]: time="2024-02-09T18:57:06.830587633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e8de5442-92ce-4a23-ac39-8879c0b20849,Namespace:default,Attempt:0,}"
Feb  9 18:57:06.857869 systemd-networkd[1075]: lxc1ff2c4bff8e7: Link UP
Feb  9 18:57:06.864716 kernel: eth0: renamed from tmpae6c3
Feb  9 18:57:06.871510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb  9 18:57:06.871558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ff2c4bff8e7: link becomes ready
Feb  9 18:57:06.871669 systemd-networkd[1075]: lxc1ff2c4bff8e7: Gained carrier
Feb  9 18:57:07.139899 kubelet[1504]: E0209 18:57:07.139752    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:07.717797 env[1192]: time="2024-02-09T18:57:07.717728770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:57:07.717797 env[1192]: time="2024-02-09T18:57:07.717766551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:57:07.717797 env[1192]: time="2024-02-09T18:57:07.717776199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:57:07.718048 env[1192]: time="2024-02-09T18:57:07.717995992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae6c3417d8aebaf0467df4d6598896cb2effeb6d94efacf6b088f03c287647be pid=2986 runtime=io.containerd.runc.v2
Feb  9 18:57:07.736354 systemd-resolved[1125]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb  9 18:57:07.762155 env[1192]: time="2024-02-09T18:57:07.762090418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e8de5442-92ce-4a23-ac39-8879c0b20849,Namespace:default,Attempt:0,} returns sandbox id \"ae6c3417d8aebaf0467df4d6598896cb2effeb6d94efacf6b088f03c287647be\""
Feb  9 18:57:07.763570 env[1192]: time="2024-02-09T18:57:07.763548735Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb  9 18:57:08.140336 kubelet[1504]: E0209 18:57:08.140198    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:08.450943 systemd-networkd[1075]: lxc1ff2c4bff8e7: Gained IPv6LL
Feb  9 18:57:08.583216 env[1192]: time="2024-02-09T18:57:08.583139307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:08.588341 env[1192]: time="2024-02-09T18:57:08.588277091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:08.591503 env[1192]: time="2024-02-09T18:57:08.591457141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:08.594999 env[1192]: time="2024-02-09T18:57:08.594961358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:08.595629 env[1192]: time="2024-02-09T18:57:08.595588065Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb  9 18:57:08.597174 env[1192]: time="2024-02-09T18:57:08.597133937Z" level=info msg="CreateContainer within sandbox \"ae6c3417d8aebaf0467df4d6598896cb2effeb6d94efacf6b088f03c287647be\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb  9 18:57:08.625565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964583978.mount: Deactivated successfully.
Feb  9 18:57:08.628504 env[1192]: time="2024-02-09T18:57:08.628431556Z" level=info msg="CreateContainer within sandbox \"ae6c3417d8aebaf0467df4d6598896cb2effeb6d94efacf6b088f03c287647be\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a30803c0ece44d834d1afcf37b57b09031bedbb041a8e2708d74ab6998663b68\""
Feb  9 18:57:08.629139 env[1192]: time="2024-02-09T18:57:08.629093629Z" level=info msg="StartContainer for \"a30803c0ece44d834d1afcf37b57b09031bedbb041a8e2708d74ab6998663b68\""
Feb  9 18:57:08.680463 env[1192]: time="2024-02-09T18:57:08.680416628Z" level=info msg="StartContainer for \"a30803c0ece44d834d1afcf37b57b09031bedbb041a8e2708d74ab6998663b68\" returns successfully"
Feb  9 18:57:09.141018 kubelet[1504]: E0209 18:57:09.140956    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:09.459657 kubelet[1504]: I0209 18:57:09.459617    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201639519e+09 pod.CreationTimestamp="2024-02-09 18:56:49 +0000 UTC" firstStartedPulling="2024-02-09 18:57:07.763256096 +0000 UTC m=+71.226006567" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:57:09.459426654 +0000 UTC m=+72.922177145" watchObservedRunningTime="2024-02-09 18:57:09.459586794 +0000 UTC m=+72.922337265"
Feb  9 18:57:10.141964 kubelet[1504]: E0209 18:57:10.141894    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:11.143050 kubelet[1504]: E0209 18:57:11.142949    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:11.743684 env[1192]: time="2024-02-09T18:57:11.743615567Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 18:57:11.748945 env[1192]: time="2024-02-09T18:57:11.748915754Z" level=info msg="StopContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" with timeout 1 (s)"
Feb  9 18:57:11.749210 env[1192]: time="2024-02-09T18:57:11.749165844Z" level=info msg="Stop container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" with signal terminated"
Feb  9 18:57:11.754720 systemd-networkd[1075]: lxc_health: Link DOWN
Feb  9 18:57:11.754728 systemd-networkd[1075]: lxc_health: Lost carrier
Feb  9 18:57:11.798194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444-rootfs.mount: Deactivated successfully.
Feb  9 18:57:11.886666 env[1192]: time="2024-02-09T18:57:11.886599118Z" level=info msg="shim disconnected" id=12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444
Feb  9 18:57:11.886666 env[1192]: time="2024-02-09T18:57:11.886654613Z" level=warning msg="cleaning up after shim disconnected" id=12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444 namespace=k8s.io
Feb  9 18:57:11.886666 env[1192]: time="2024-02-09T18:57:11.886663509Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:11.893288 env[1192]: time="2024-02-09T18:57:11.893240101Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:11.920067 env[1192]: time="2024-02-09T18:57:11.920030660Z" level=info msg="StopContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" returns successfully"
Feb  9 18:57:11.920592 env[1192]: time="2024-02-09T18:57:11.920568038Z" level=info msg="StopPodSandbox for \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\""
Feb  9 18:57:11.920644 env[1192]: time="2024-02-09T18:57:11.920635074Z" level=info msg="Container to stop \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:11.920671 env[1192]: time="2024-02-09T18:57:11.920647607Z" level=info msg="Container to stop \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:11.920671 env[1192]: time="2024-02-09T18:57:11.920658097Z" level=info msg="Container to stop \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:11.920720 env[1192]: time="2024-02-09T18:57:11.920668456Z" level=info msg="Container to stop \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:11.920720 env[1192]: time="2024-02-09T18:57:11.920676562Z" level=info msg="Container to stop \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:11.922272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55-shm.mount: Deactivated successfully.
Feb  9 18:57:11.940609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55-rootfs.mount: Deactivated successfully.
Feb  9 18:57:12.034219 env[1192]: time="2024-02-09T18:57:12.034058876Z" level=info msg="shim disconnected" id=3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55
Feb  9 18:57:12.034219 env[1192]: time="2024-02-09T18:57:12.034135850Z" level=warning msg="cleaning up after shim disconnected" id=3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55 namespace=k8s.io
Feb  9 18:57:12.034219 env[1192]: time="2024-02-09T18:57:12.034152301Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:12.040940 env[1192]: time="2024-02-09T18:57:12.040880488Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3146 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:12.041504 env[1192]: time="2024-02-09T18:57:12.041472278Z" level=info msg="TearDown network for sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" successfully"
Feb  9 18:57:12.041607 env[1192]: time="2024-02-09T18:57:12.041582365Z" level=info msg="StopPodSandbox for \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" returns successfully"
Feb  9 18:57:12.116958 kubelet[1504]: I0209 18:57:12.116908    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-hostproc\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.116958 kubelet[1504]: I0209 18:57:12.116962    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-net\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.116982    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-lib-modules\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.117007    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-hubble-tls\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.117024    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-kernel\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.117042    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-cgroup\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.117058    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-xtables-lock\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117226 kubelet[1504]: I0209 18:57:12.117077    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-run\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117427 kubelet[1504]: I0209 18:57:12.117104    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc790184-c850-4d0d-bdea-7693a34410bf-clustermesh-secrets\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117427 kubelet[1504]: I0209 18:57:12.117091    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117427 kubelet[1504]: I0209 18:57:12.117124    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcfck\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-kube-api-access-dcfck\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117427 kubelet[1504]: I0209 18:57:12.117141    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-bpf-maps\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117427 kubelet[1504]: I0209 18:57:12.117156    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-etc-cni-netd\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117594 kubelet[1504]: I0209 18:57:12.117163    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117594 kubelet[1504]: I0209 18:57:12.117173    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cni-path\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117594 kubelet[1504]: I0209 18:57:12.117183    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117594 kubelet[1504]: I0209 18:57:12.117194    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-config-path\") pod \"dc790184-c850-4d0d-bdea-7693a34410bf\" (UID: \"dc790184-c850-4d0d-bdea-7693a34410bf\") "
Feb  9 18:57:12.117594 kubelet[1504]: I0209 18:57:12.117199    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117217    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117231    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117245    1504 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-net\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117251    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117257    1504 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-lib-modules\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.117761 kubelet[1504]: I0209 18:57:12.117266    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-run\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.117991 kubelet[1504]: I0209 18:57:12.117301    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117991 kubelet[1504]: I0209 18:57:12.117316    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117991 kubelet[1504]: I0209 18:57:12.117022    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:12.117991 kubelet[1504]: W0209 18:57:12.117504    1504 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/dc790184-c850-4d0d-bdea-7693a34410bf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 18:57:12.119150 kubelet[1504]: I0209 18:57:12.119128    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 18:57:12.120826 systemd[1]: var-lib-kubelet-pods-dc790184\x2dc850\x2d4d0d\x2dbdea\x2d7693a34410bf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 18:57:12.121941 kubelet[1504]: I0209 18:57:12.121769    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-kube-api-access-dcfck" (OuterVolumeSpecName: "kube-api-access-dcfck") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "kube-api-access-dcfck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:57:12.121941 kubelet[1504]: I0209 18:57:12.121863    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:57:12.121941 kubelet[1504]: I0209 18:57:12.121872    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc790184-c850-4d0d-bdea-7693a34410bf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc790184-c850-4d0d-bdea-7693a34410bf" (UID: "dc790184-c850-4d0d-bdea-7693a34410bf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:57:12.144084 kubelet[1504]: E0209 18:57:12.144035    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:12.163782 kubelet[1504]: E0209 18:57:12.163745    1504 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 18:57:12.218291 kubelet[1504]: I0209 18:57:12.218240    1504 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-hostproc\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218291 kubelet[1504]: I0209 18:57:12.218286    1504 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-hubble-tls\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218306    1504 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-host-proc-sys-kernel\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218319    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-cgroup\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218332    1504 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-xtables-lock\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218345    1504 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc790184-c850-4d0d-bdea-7693a34410bf-clustermesh-secrets\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218358    1504 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-bpf-maps\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218370    1504 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-etc-cni-netd\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218381    1504 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc790184-c850-4d0d-bdea-7693a34410bf-cni-path\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218483 kubelet[1504]: I0209 18:57:12.218394    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc790184-c850-4d0d-bdea-7693a34410bf-cilium-config-path\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.218659 kubelet[1504]: I0209 18:57:12.218409    1504 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-dcfck\" (UniqueName: \"kubernetes.io/projected/dc790184-c850-4d0d-bdea-7693a34410bf-kube-api-access-dcfck\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:12.409030 kubelet[1504]: I0209 18:57:12.408925    1504 scope.go:115] "RemoveContainer" containerID="12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444"
Feb  9 18:57:12.411276 env[1192]: time="2024-02-09T18:57:12.411229822Z" level=info msg="RemoveContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\""
Feb  9 18:57:12.416384 env[1192]: time="2024-02-09T18:57:12.416333620Z" level=info msg="RemoveContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" returns successfully"
Feb  9 18:57:12.416598 kubelet[1504]: I0209 18:57:12.416576    1504 scope.go:115] "RemoveContainer" containerID="b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24"
Feb  9 18:57:12.417558 env[1192]: time="2024-02-09T18:57:12.417533402Z" level=info msg="RemoveContainer for \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\""
Feb  9 18:57:12.419829 env[1192]: time="2024-02-09T18:57:12.419773205Z" level=info msg="RemoveContainer for \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\" returns successfully"
Feb  9 18:57:12.419932 kubelet[1504]: I0209 18:57:12.419900    1504 scope.go:115] "RemoveContainer" containerID="6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d"
Feb  9 18:57:12.420896 env[1192]: time="2024-02-09T18:57:12.420843103Z" level=info msg="RemoveContainer for \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\""
Feb  9 18:57:12.423486 env[1192]: time="2024-02-09T18:57:12.423453802Z" level=info msg="RemoveContainer for \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\" returns successfully"
Feb  9 18:57:12.423594 kubelet[1504]: I0209 18:57:12.423574    1504 scope.go:115] "RemoveContainer" containerID="dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143"
Feb  9 18:57:12.424543 env[1192]: time="2024-02-09T18:57:12.424510735Z" level=info msg="RemoveContainer for \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\""
Feb  9 18:57:12.428895 env[1192]: time="2024-02-09T18:57:12.427493312Z" level=info msg="RemoveContainer for \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\" returns successfully"
Feb  9 18:57:12.429054 kubelet[1504]: I0209 18:57:12.427808    1504 scope.go:115] "RemoveContainer" containerID="f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87"
Feb  9 18:57:12.431636 env[1192]: time="2024-02-09T18:57:12.431600220Z" level=info msg="RemoveContainer for \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\""
Feb  9 18:57:12.434172 env[1192]: time="2024-02-09T18:57:12.434144625Z" level=info msg="RemoveContainer for \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\" returns successfully"
Feb  9 18:57:12.434325 kubelet[1504]: I0209 18:57:12.434295    1504 scope.go:115] "RemoveContainer" containerID="12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444"
Feb  9 18:57:12.434589 env[1192]: time="2024-02-09T18:57:12.434497066Z" level=error msg="ContainerStatus for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": not found"
Feb  9 18:57:12.434701 kubelet[1504]: E0209 18:57:12.434685    1504 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": not found" containerID="12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444"
Feb  9 18:57:12.434740 kubelet[1504]: I0209 18:57:12.434720    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444} err="failed to get container status \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": rpc error: code = NotFound desc = an error occurred when try to find container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": not found"
Feb  9 18:57:12.434740 kubelet[1504]: I0209 18:57:12.434731    1504 scope.go:115] "RemoveContainer" containerID="b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24"
Feb  9 18:57:12.434932 env[1192]: time="2024-02-09T18:57:12.434881086Z" level=error msg="ContainerStatus for \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\": not found"
Feb  9 18:57:12.435017 kubelet[1504]: E0209 18:57:12.435002    1504 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\": not found" containerID="b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24"
Feb  9 18:57:12.435045 kubelet[1504]: I0209 18:57:12.435026    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24} err="failed to get container status \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7da30fa1c53d3edb13c60bb673e987669cc64101a55bf028e75f2b42e8d0e24\": not found"
Feb  9 18:57:12.435045 kubelet[1504]: I0209 18:57:12.435035    1504 scope.go:115] "RemoveContainer" containerID="6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d"
Feb  9 18:57:12.435244 env[1192]: time="2024-02-09T18:57:12.435187722Z" level=error msg="ContainerStatus for \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\": not found"
Feb  9 18:57:12.435347 kubelet[1504]: E0209 18:57:12.435332    1504 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\": not found" containerID="6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d"
Feb  9 18:57:12.435393 kubelet[1504]: I0209 18:57:12.435363    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d} err="failed to get container status \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e32ec9969ee84ad23e1c2520c3eeb8503b7a1fbbf6a429d4034ca924bbceb5d\": not found"
Feb  9 18:57:12.435393 kubelet[1504]: I0209 18:57:12.435374    1504 scope.go:115] "RemoveContainer" containerID="dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143"
Feb  9 18:57:12.435557 env[1192]: time="2024-02-09T18:57:12.435516108Z" level=error msg="ContainerStatus for \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\": not found"
Feb  9 18:57:12.435653 kubelet[1504]: E0209 18:57:12.435641    1504 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\": not found" containerID="dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143"
Feb  9 18:57:12.435702 kubelet[1504]: I0209 18:57:12.435659    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143} err="failed to get container status \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfb93d6b105ad97e2083910779b99b54cee701dd87fc140d86b9fe64d3a21143\": not found"
Feb  9 18:57:12.435702 kubelet[1504]: I0209 18:57:12.435675    1504 scope.go:115] "RemoveContainer" containerID="f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87"
Feb  9 18:57:12.435847 env[1192]: time="2024-02-09T18:57:12.435791916Z" level=error msg="ContainerStatus for \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\": not found"
Feb  9 18:57:12.435942 kubelet[1504]: E0209 18:57:12.435925    1504 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\": not found" containerID="f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87"
Feb  9 18:57:12.435942 kubelet[1504]: I0209 18:57:12.435945    1504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87} err="failed to get container status \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\": rpc error: code = NotFound desc = an error occurred when try to find container \"f66b008f2db23f0553285deb3f1f5831ea74d660ecae251af0e87cddaf253c87\": not found"
Feb  9 18:57:12.695408 systemd[1]: var-lib-kubelet-pods-dc790184\x2dc850\x2d4d0d\x2dbdea\x2d7693a34410bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcfck.mount: Deactivated successfully.
Feb  9 18:57:12.695575 systemd[1]: var-lib-kubelet-pods-dc790184\x2dc850\x2d4d0d\x2dbdea\x2d7693a34410bf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:57:13.144523 kubelet[1504]: E0209 18:57:13.144365    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:13.234180 env[1192]: time="2024-02-09T18:57:13.234136883Z" level=info msg="StopContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" with timeout 1 (s)"
Feb  9 18:57:13.234536 env[1192]: time="2024-02-09T18:57:13.234181768Z" level=error msg="StopContainer for \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": not found"
Feb  9 18:57:13.234581 kubelet[1504]: E0209 18:57:13.234351    1504 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444\": not found" containerID="12d8b546bde45ce2d6d9a66ecc60397048478224aa836ace1c4b3f606200b444"
Feb  9 18:57:13.234666 env[1192]: time="2024-02-09T18:57:13.234620410Z" level=info msg="StopPodSandbox for \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\""
Feb  9 18:57:13.234779 env[1192]: time="2024-02-09T18:57:13.234725117Z" level=info msg="TearDown network for sandbox \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" successfully"
Feb  9 18:57:13.234808 env[1192]: time="2024-02-09T18:57:13.234776504Z" level=info msg="StopPodSandbox for \"3be1e6b26388b2601b9d896f8e06bb2c78e99447b605b673b00a0d55c5f27b55\" returns successfully"
Feb  9 18:57:13.234939 kubelet[1504]: I0209 18:57:13.234921    1504 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=dc790184-c850-4d0d-bdea-7693a34410bf path="/var/lib/kubelet/pods/dc790184-c850-4d0d-bdea-7693a34410bf/volumes"
Feb  9 18:57:14.145487 kubelet[1504]: E0209 18:57:14.145369    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:14.538636 kubelet[1504]: I0209 18:57:14.538596    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:57:14.538867 kubelet[1504]: E0209 18:57:14.538667    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="mount-bpf-fs"
Feb  9 18:57:14.538867 kubelet[1504]: E0209 18:57:14.538678    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="apply-sysctl-overwrites"
Feb  9 18:57:14.538867 kubelet[1504]: E0209 18:57:14.538686    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="mount-cgroup"
Feb  9 18:57:14.538867 kubelet[1504]: E0209 18:57:14.538693    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="clean-cilium-state"
Feb  9 18:57:14.538867 kubelet[1504]: E0209 18:57:14.538701    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="cilium-agent"
Feb  9 18:57:14.538867 kubelet[1504]: I0209 18:57:14.538718    1504 memory_manager.go:346] "RemoveStaleState removing state" podUID="dc790184-c850-4d0d-bdea-7693a34410bf" containerName="cilium-agent"
Feb  9 18:57:14.550041 kubelet[1504]: I0209 18:57:14.550012    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:57:14.631984 kubelet[1504]: I0209 18:57:14.631925    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-hostproc\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.631984 kubelet[1504]: I0209 18:57:14.631975    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cni-path\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632215 kubelet[1504]: I0209 18:57:14.632024    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-kernel\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632215 kubelet[1504]: I0209 18:57:14.632074    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-hubble-tls\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632215 kubelet[1504]: I0209 18:57:14.632115    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-cgroup\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632298 kubelet[1504]: I0209 18:57:14.632257    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-etc-cni-netd\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632321 kubelet[1504]: I0209 18:57:14.632307    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-xtables-lock\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632361 kubelet[1504]: I0209 18:57:14.632340    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-config-path\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632387 kubelet[1504]: I0209 18:57:14.632380    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-ipsec-secrets\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632421 kubelet[1504]: I0209 18:57:14.632408    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-net\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632464 kubelet[1504]: I0209 18:57:14.632453    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-run\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632493 kubelet[1504]: I0209 18:57:14.632474    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-bpf-maps\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632545 kubelet[1504]: I0209 18:57:14.632530    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-lib-modules\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632591 kubelet[1504]: I0209 18:57:14.632577    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-clustermesh-secrets\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632620 kubelet[1504]: I0209 18:57:14.632605    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkttj\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-kube-api-access-pkttj\") pod \"cilium-n44kq\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") " pod="kube-system/cilium-n44kq"
Feb  9 18:57:14.632660 kubelet[1504]: I0209 18:57:14.632648    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f831c71-dcb7-4e70-bec4-ef4e5c0149c0-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-8vcj8\" (UID: \"5f831c71-dcb7-4e70-bec4-ef4e5c0149c0\") " pod="kube-system/cilium-operator-f59cbd8c6-8vcj8"
Feb  9 18:57:14.632686 kubelet[1504]: I0209 18:57:14.632679    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chh59\" (UniqueName: \"kubernetes.io/projected/5f831c71-dcb7-4e70-bec4-ef4e5c0149c0-kube-api-access-chh59\") pod \"cilium-operator-f59cbd8c6-8vcj8\" (UID: \"5f831c71-dcb7-4e70-bec4-ef4e5c0149c0\") " pod="kube-system/cilium-operator-f59cbd8c6-8vcj8"
Feb  9 18:57:14.841919 kubelet[1504]: E0209 18:57:14.841795    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:14.842465 env[1192]: time="2024-02-09T18:57:14.842393436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-8vcj8,Uid:5f831c71-dcb7-4e70-bec4-ef4e5c0149c0,Namespace:kube-system,Attempt:0,}"
Feb  9 18:57:14.852328 kubelet[1504]: E0209 18:57:14.852290    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:14.853027 env[1192]: time="2024-02-09T18:57:14.852973419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n44kq,Uid:41153817-c055-4f1d-b5bd-0d9220ee55e5,Namespace:kube-system,Attempt:0,}"
Feb  9 18:57:14.856182 env[1192]: time="2024-02-09T18:57:14.856110836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:57:14.856182 env[1192]: time="2024-02-09T18:57:14.856160268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:57:14.856182 env[1192]: time="2024-02-09T18:57:14.856172672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:57:14.856460 env[1192]: time="2024-02-09T18:57:14.856282077Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e183c80f3c35a30547564d7bca61f6d26d75681d99f1416cb08472d2f081c2a pid=3175 runtime=io.containerd.runc.v2
Feb  9 18:57:14.864266 env[1192]: time="2024-02-09T18:57:14.864202680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:57:14.864266 env[1192]: time="2024-02-09T18:57:14.864267061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:57:14.864443 env[1192]: time="2024-02-09T18:57:14.864288952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:57:14.864443 env[1192]: time="2024-02-09T18:57:14.864423263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e pid=3199 runtime=io.containerd.runc.v2
Feb  9 18:57:14.906856 env[1192]: time="2024-02-09T18:57:14.906158907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n44kq,Uid:41153817-c055-4f1d-b5bd-0d9220ee55e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\""
Feb  9 18:57:14.907010 kubelet[1504]: E0209 18:57:14.906862    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:14.908934 env[1192]: time="2024-02-09T18:57:14.908894971Z" level=info msg="CreateContainer within sandbox \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:57:14.914933 env[1192]: time="2024-02-09T18:57:14.914883879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-8vcj8,Uid:5f831c71-dcb7-4e70-bec4-ef4e5c0149c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e183c80f3c35a30547564d7bca61f6d26d75681d99f1416cb08472d2f081c2a\""
Feb  9 18:57:14.915450 kubelet[1504]: E0209 18:57:14.915433    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:14.916434 env[1192]: time="2024-02-09T18:57:14.916387380Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  9 18:57:14.923995 env[1192]: time="2024-02-09T18:57:14.923940834Z" level=info msg="CreateContainer within sandbox \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\""
Feb  9 18:57:14.924878 env[1192]: time="2024-02-09T18:57:14.924841794Z" level=info msg="StartContainer for \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\""
Feb  9 18:57:14.971123 env[1192]: time="2024-02-09T18:57:14.971071761Z" level=info msg="StartContainer for \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\" returns successfully"
Feb  9 18:57:15.000078 env[1192]: time="2024-02-09T18:57:15.000027063Z" level=info msg="shim disconnected" id=dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe
Feb  9 18:57:15.000078 env[1192]: time="2024-02-09T18:57:15.000082828Z" level=warning msg="cleaning up after shim disconnected" id=dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe namespace=k8s.io
Feb  9 18:57:15.000277 env[1192]: time="2024-02-09T18:57:15.000092085Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:15.007109 env[1192]: time="2024-02-09T18:57:15.007038219Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3300 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:15.145654 kubelet[1504]: E0209 18:57:15.145505    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:15.416107 env[1192]: time="2024-02-09T18:57:15.415991381Z" level=info msg="StopPodSandbox for \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\""
Feb  9 18:57:15.416107 env[1192]: time="2024-02-09T18:57:15.416044382Z" level=info msg="Container to stop \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 18:57:15.687458 env[1192]: time="2024-02-09T18:57:15.687392552Z" level=info msg="shim disconnected" id=f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e
Feb  9 18:57:15.687458 env[1192]: time="2024-02-09T18:57:15.687450039Z" level=warning msg="cleaning up after shim disconnected" id=f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e namespace=k8s.io
Feb  9 18:57:15.687668 env[1192]: time="2024-02-09T18:57:15.687463194Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:15.693831 env[1192]: time="2024-02-09T18:57:15.693781218Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3333 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:15.694064 env[1192]: time="2024-02-09T18:57:15.694022933Z" level=info msg="TearDown network for sandbox \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\" successfully"
Feb  9 18:57:15.694064 env[1192]: time="2024-02-09T18:57:15.694054241Z" level=info msg="StopPodSandbox for \"f7b708e138bb03c8e66c16a11c08f03007b46cd4b8a9851e8d644661b578507e\" returns successfully"
Feb  9 18:57:15.840010 kubelet[1504]: I0209 18:57:15.839959    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-etc-cni-netd\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840010 kubelet[1504]: I0209 18:57:15.840013    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-lib-modules\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840197 kubelet[1504]: I0209 18:57:15.840040    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-kernel\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840197 kubelet[1504]: I0209 18:57:15.840060    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-cgroup\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840197 kubelet[1504]: I0209 18:57:15.840050    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840197 kubelet[1504]: I0209 18:57:15.840107    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840197 kubelet[1504]: I0209 18:57:15.840113    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-hubble-tls\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840324 kubelet[1504]: I0209 18:57:15.840155    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840324 kubelet[1504]: I0209 18:57:15.840168    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-net\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840324 kubelet[1504]: I0209 18:57:15.840191    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840324 kubelet[1504]: I0209 18:57:15.840206    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkttj\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-kube-api-access-pkttj\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840324 kubelet[1504]: I0209 18:57:15.840233    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-hostproc\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840258    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-run\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840282    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-bpf-maps\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840301    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cni-path\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840327    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-xtables-lock\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840361    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-config-path\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840437 kubelet[1504]: I0209 18:57:15.840389    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-ipsec-secrets\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840570 kubelet[1504]: I0209 18:57:15.840417    1504 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-clustermesh-secrets\") pod \"41153817-c055-4f1d-b5bd-0d9220ee55e5\" (UID: \"41153817-c055-4f1d-b5bd-0d9220ee55e5\") "
Feb  9 18:57:15.840570 kubelet[1504]: I0209 18:57:15.840449    1504 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-etc-cni-netd\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.840570 kubelet[1504]: I0209 18:57:15.840462    1504 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-lib-modules\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.840570 kubelet[1504]: I0209 18:57:15.840475    1504 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-kernel\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.840570 kubelet[1504]: I0209 18:57:15.840490    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-cgroup\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.840690 kubelet[1504]: I0209 18:57:15.840596    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840690 kubelet[1504]: I0209 18:57:15.840624    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840839 kubelet[1504]: I0209 18:57:15.840785    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840839 kubelet[1504]: I0209 18:57:15.840827    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.840915 kubelet[1504]: I0209 18:57:15.840855    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.841226 kubelet[1504]: I0209 18:57:15.840972    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 18:57:15.841226 kubelet[1504]: W0209 18:57:15.840963    1504 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/41153817-c055-4f1d-b5bd-0d9220ee55e5/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 18:57:15.844649 kubelet[1504]: I0209 18:57:15.843499    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:57:15.844649 kubelet[1504]: I0209 18:57:15.843579    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:57:15.844649 kubelet[1504]: I0209 18:57:15.844508    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 18:57:15.844179 systemd[1]: var-lib-kubelet-pods-41153817\x2dc055\x2d4f1d\x2db5bd\x2d0d9220ee55e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 18:57:15.845425 kubelet[1504]: I0209 18:57:15.845400    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-kube-api-access-pkttj" (OuterVolumeSpecName: "kube-api-access-pkttj") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "kube-api-access-pkttj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 18:57:15.845782 systemd[1]: var-lib-kubelet-pods-41153817\x2dc055\x2d4f1d\x2db5bd\x2d0d9220ee55e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkttj.mount: Deactivated successfully.
Feb  9 18:57:15.845905 systemd[1]: var-lib-kubelet-pods-41153817\x2dc055\x2d4f1d\x2db5bd\x2d0d9220ee55e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:57:15.845985 systemd[1]: var-lib-kubelet-pods-41153817\x2dc055\x2d4f1d\x2db5bd\x2d0d9220ee55e5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  9 18:57:15.846480 kubelet[1504]: I0209 18:57:15.846304    1504 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41153817-c055-4f1d-b5bd-0d9220ee55e5" (UID: "41153817-c055-4f1d-b5bd-0d9220ee55e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941016    1504 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-hostproc\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941047    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-run\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941056    1504 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-bpf-maps\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941066    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-config-path\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941077    1504 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-cilium-ipsec-secrets\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941085    1504 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41153817-c055-4f1d-b5bd-0d9220ee55e5-clustermesh-secrets\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941093    1504 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-cni-path\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941114 kubelet[1504]: I0209 18:57:15.941101    1504 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-xtables-lock\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941430 kubelet[1504]: I0209 18:57:15.941109    1504 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-hubble-tls\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941430 kubelet[1504]: I0209 18:57:15.941116    1504 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41153817-c055-4f1d-b5bd-0d9220ee55e5-host-proc-sys-net\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:15.941430 kubelet[1504]: I0209 18:57:15.941127    1504 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-pkttj\" (UniqueName: \"kubernetes.io/projected/41153817-c055-4f1d-b5bd-0d9220ee55e5-kube-api-access-pkttj\") on node \"10.0.0.91\" DevicePath \"\""
Feb  9 18:57:16.146098 kubelet[1504]: E0209 18:57:16.146051    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:16.421637 kubelet[1504]: I0209 18:57:16.421541    1504 scope.go:115] "RemoveContainer" containerID="dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe"
Feb  9 18:57:16.422759 env[1192]: time="2024-02-09T18:57:16.422725555Z" level=info msg="RemoveContainer for \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\""
Feb  9 18:57:16.478029 kubelet[1504]: I0209 18:57:16.477994    1504 topology_manager.go:210] "Topology Admit Handler"
Feb  9 18:57:16.478029 kubelet[1504]: E0209 18:57:16.478036    1504 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41153817-c055-4f1d-b5bd-0d9220ee55e5" containerName="mount-cgroup"
Feb  9 18:57:16.478236 kubelet[1504]: I0209 18:57:16.478055    1504 memory_manager.go:346] "RemoveStaleState removing state" podUID="41153817-c055-4f1d-b5bd-0d9220ee55e5" containerName="mount-cgroup"
Feb  9 18:57:16.481185 kubelet[1504]: W0209 18:57:16.481164    1504 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.481185 kubelet[1504]: E0209 18:57:16.481185    1504 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.481461 kubelet[1504]: W0209 18:57:16.481432    1504 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.481461 kubelet[1504]: E0209 18:57:16.481442    1504 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.481461 kubelet[1504]: W0209 18:57:16.481445    1504 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.481461 kubelet[1504]: E0209 18:57:16.481462    1504 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.91" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.91' and this object
Feb  9 18:57:16.492554 env[1192]: time="2024-02-09T18:57:16.492440557Z" level=info msg="RemoveContainer for \"dc840ede9f7f2e6f3a881515539a0fd43d739cb9f186664f44a3e6fd155f89fe\" returns successfully"
Feb  9 18:57:16.516308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380341076.mount: Deactivated successfully.
Feb  9 18:57:16.544386 kubelet[1504]: I0209 18:57:16.544346    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-host-proc-sys-net\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544386 kubelet[1504]: I0209 18:57:16.544388    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-hostproc\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544605 kubelet[1504]: I0209 18:57:16.544409    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-etc-cni-netd\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544635 kubelet[1504]: I0209 18:57:16.544583    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-bpf-maps\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544696 kubelet[1504]: I0209 18:57:16.544672    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-cni-path\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544738 kubelet[1504]: I0209 18:57:16.544725    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-xtables-lock\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544767 kubelet[1504]: I0209 18:57:16.544762    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15fa4858-04b5-425c-a154-d7903adb28e0-clustermesh-secrets\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544804 kubelet[1504]: I0209 18:57:16.544791    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15fa4858-04b5-425c-a154-d7903adb28e0-cilium-config-path\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544882 kubelet[1504]: I0209 18:57:16.544864    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15fa4858-04b5-425c-a154-d7903adb28e0-cilium-ipsec-secrets\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.544968 kubelet[1504]: I0209 18:57:16.544953    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78qrs\" (UniqueName: \"kubernetes.io/projected/15fa4858-04b5-425c-a154-d7903adb28e0-kube-api-access-78qrs\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.545017 kubelet[1504]: I0209 18:57:16.545000    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-lib-modules\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.545047 kubelet[1504]: I0209 18:57:16.545034    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-host-proc-sys-kernel\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.545092 kubelet[1504]: I0209 18:57:16.545079    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-cilium-run\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.545134 kubelet[1504]: I0209 18:57:16.545119    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15fa4858-04b5-425c-a154-d7903adb28e0-cilium-cgroup\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:16.545163 kubelet[1504]: I0209 18:57:16.545152    1504 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15fa4858-04b5-425c-a154-d7903adb28e0-hubble-tls\") pod \"cilium-xztfw\" (UID: \"15fa4858-04b5-425c-a154-d7903adb28e0\") " pod="kube-system/cilium-xztfw"
Feb  9 18:57:17.098068 kubelet[1504]: E0209 18:57:17.098007    1504 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:17.116078 env[1192]: time="2024-02-09T18:57:17.116031211Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:17.117705 env[1192]: time="2024-02-09T18:57:17.117665989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:17.119270 env[1192]: time="2024-02-09T18:57:17.119209545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 18:57:17.119807 env[1192]: time="2024-02-09T18:57:17.119771599Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb  9 18:57:17.121395 env[1192]: time="2024-02-09T18:57:17.121359227Z" level=info msg="CreateContainer within sandbox \"5e183c80f3c35a30547564d7bca61f6d26d75681d99f1416cb08472d2f081c2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  9 18:57:17.130779 env[1192]: time="2024-02-09T18:57:17.130721272Z" level=info msg="CreateContainer within sandbox \"5e183c80f3c35a30547564d7bca61f6d26d75681d99f1416cb08472d2f081c2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3a91a542f5a2456e5965bbe2662a15fe293e82a42ff9a117e9e8319f273becca\""
Feb  9 18:57:17.131332 env[1192]: time="2024-02-09T18:57:17.131272837Z" level=info msg="StartContainer for \"3a91a542f5a2456e5965bbe2662a15fe293e82a42ff9a117e9e8319f273becca\""
Feb  9 18:57:17.149489 kubelet[1504]: E0209 18:57:17.146874    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:17.165678 kubelet[1504]: E0209 18:57:17.165648    1504 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 18:57:17.596480 env[1192]: time="2024-02-09T18:57:17.596409433Z" level=info msg="StartContainer for \"3a91a542f5a2456e5965bbe2662a15fe293e82a42ff9a117e9e8319f273becca\" returns successfully"
Feb  9 18:57:17.598218 kubelet[1504]: I0209 18:57:17.597996    1504 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=41153817-c055-4f1d-b5bd-0d9220ee55e5 path="/var/lib/kubelet/pods/41153817-c055-4f1d-b5bd-0d9220ee55e5/volumes"
Feb  9 18:57:17.599851 kubelet[1504]: E0209 18:57:17.599809    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:17.647246 kubelet[1504]: E0209 18:57:17.647196    1504 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb  9 18:57:17.647246 kubelet[1504]: E0209 18:57:17.647234    1504 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xztfw: failed to sync secret cache: timed out waiting for the condition
Feb  9 18:57:17.647409 kubelet[1504]: E0209 18:57:17.647350    1504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15fa4858-04b5-425c-a154-d7903adb28e0-hubble-tls podName:15fa4858-04b5-425c-a154-d7903adb28e0 nodeName:}" failed. No retries permitted until 2024-02-09 18:57:18.147324538 +0000 UTC m=+81.610075009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/15fa4858-04b5-425c-a154-d7903adb28e0-hubble-tls") pod "cilium-xztfw" (UID: "15fa4858-04b5-425c-a154-d7903adb28e0") : failed to sync secret cache: timed out waiting for the condition
Feb  9 18:57:17.744956 kubelet[1504]: I0209 18:57:17.744919    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-8vcj8" podStartSLOduration=-9.22337203311004e+09 pod.CreationTimestamp="2024-02-09 18:57:14 +0000 UTC" firstStartedPulling="2024-02-09 18:57:14.916150887 +0000 UTC m=+78.378901358" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:57:17.744478647 +0000 UTC m=+81.207229128" watchObservedRunningTime="2024-02-09 18:57:17.744735178 +0000 UTC m=+81.207485649"
Feb  9 18:57:18.147734 kubelet[1504]: E0209 18:57:18.147671    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:18.281432 kubelet[1504]: E0209 18:57:18.281388    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:18.281992 env[1192]: time="2024-02-09T18:57:18.281956378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xztfw,Uid:15fa4858-04b5-425c-a154-d7903adb28e0,Namespace:kube-system,Attempt:0,}"
Feb  9 18:57:18.296903 env[1192]: time="2024-02-09T18:57:18.296793023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 18:57:18.297084 env[1192]: time="2024-02-09T18:57:18.296876650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 18:57:18.297084 env[1192]: time="2024-02-09T18:57:18.296896587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 18:57:18.297084 env[1192]: time="2024-02-09T18:57:18.297065073Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0 pid=3403 runtime=io.containerd.runc.v2
Feb  9 18:57:18.329804 env[1192]: time="2024-02-09T18:57:18.329756926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xztfw,Uid:15fa4858-04b5-425c-a154-d7903adb28e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\""
Feb  9 18:57:18.330311 kubelet[1504]: E0209 18:57:18.330292    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:18.332226 env[1192]: time="2024-02-09T18:57:18.332196143Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 18:57:18.346441 env[1192]: time="2024-02-09T18:57:18.346386254Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abf70fe8ebf915f1bfa531d9e1cb04dd82103c6b2feef47a1a402bb1c1b1f6f7\""
Feb  9 18:57:18.347029 env[1192]: time="2024-02-09T18:57:18.346999575Z" level=info msg="StartContainer for \"abf70fe8ebf915f1bfa531d9e1cb04dd82103c6b2feef47a1a402bb1c1b1f6f7\""
Feb  9 18:57:18.410102 env[1192]: time="2024-02-09T18:57:18.409954534Z" level=info msg="StartContainer for \"abf70fe8ebf915f1bfa531d9e1cb04dd82103c6b2feef47a1a402bb1c1b1f6f7\" returns successfully"
Feb  9 18:57:18.603054 kubelet[1504]: E0209 18:57:18.603023    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:18.603288 kubelet[1504]: E0209 18:57:18.603140    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:18.644060 env[1192]: time="2024-02-09T18:57:18.644005678Z" level=info msg="shim disconnected" id=abf70fe8ebf915f1bfa531d9e1cb04dd82103c6b2feef47a1a402bb1c1b1f6f7
Feb  9 18:57:18.644060 env[1192]: time="2024-02-09T18:57:18.644049109Z" level=warning msg="cleaning up after shim disconnected" id=abf70fe8ebf915f1bfa531d9e1cb04dd82103c6b2feef47a1a402bb1c1b1f6f7 namespace=k8s.io
Feb  9 18:57:18.644060 env[1192]: time="2024-02-09T18:57:18.644057836Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:18.650849 env[1192]: time="2024-02-09T18:57:18.650789245Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3487 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:19.148113 kubelet[1504]: E0209 18:57:19.148065    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:19.605623 kubelet[1504]: E0209 18:57:19.605593    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:19.607116 env[1192]: time="2024-02-09T18:57:19.607083036Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 18:57:19.619414 env[1192]: time="2024-02-09T18:57:19.619375918Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138\""
Feb  9 18:57:19.619807 env[1192]: time="2024-02-09T18:57:19.619770899Z" level=info msg="StartContainer for \"53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138\""
Feb  9 18:57:19.663540 env[1192]: time="2024-02-09T18:57:19.663466503Z" level=info msg="StartContainer for \"53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138\" returns successfully"
Feb  9 18:57:19.683169 env[1192]: time="2024-02-09T18:57:19.683109013Z" level=info msg="shim disconnected" id=53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138
Feb  9 18:57:19.683169 env[1192]: time="2024-02-09T18:57:19.683155170Z" level=warning msg="cleaning up after shim disconnected" id=53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138 namespace=k8s.io
Feb  9 18:57:19.683169 env[1192]: time="2024-02-09T18:57:19.683163365Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:19.690345 env[1192]: time="2024-02-09T18:57:19.690290115Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3549 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:19.749774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53ea5450b93bffe9a70e468f78a6bc0b7e2ed9849b2025465688250272ddd138-rootfs.mount: Deactivated successfully.
Feb  9 18:57:20.148615 kubelet[1504]: E0209 18:57:20.148543    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:20.609358 kubelet[1504]: E0209 18:57:20.609330    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:20.610858 env[1192]: time="2024-02-09T18:57:20.610812155Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 18:57:20.627370 env[1192]: time="2024-02-09T18:57:20.627317508Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02\""
Feb  9 18:57:20.627840 env[1192]: time="2024-02-09T18:57:20.627776750Z" level=info msg="StartContainer for \"9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02\""
Feb  9 18:57:20.671153 env[1192]: time="2024-02-09T18:57:20.671105514Z" level=info msg="StartContainer for \"9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02\" returns successfully"
Feb  9 18:57:20.689160 env[1192]: time="2024-02-09T18:57:20.689094460Z" level=info msg="shim disconnected" id=9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02
Feb  9 18:57:20.689160 env[1192]: time="2024-02-09T18:57:20.689147970Z" level=warning msg="cleaning up after shim disconnected" id=9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02 namespace=k8s.io
Feb  9 18:57:20.689160 env[1192]: time="2024-02-09T18:57:20.689156516Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:20.695489 env[1192]: time="2024-02-09T18:57:20.695454070Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:20.750147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c091d5eafc4d340395727a2951952e27e46d90a133c4aaec9712e07f7852a02-rootfs.mount: Deactivated successfully.
Feb  9 18:57:21.149571 kubelet[1504]: E0209 18:57:21.149509    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:21.521926 kubelet[1504]: I0209 18:57:21.521888    1504 setters.go:548] "Node became not ready" node="10.0.0.91" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:57:21.521785401 +0000 UTC m=+84.984535872 LastTransitionTime:2024-02-09 18:57:21.521785401 +0000 UTC m=+84.984535872 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb  9 18:57:21.613983 kubelet[1504]: E0209 18:57:21.613949    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:21.616030 env[1192]: time="2024-02-09T18:57:21.615984029Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 18:57:21.629458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887369711.mount: Deactivated successfully.
Feb  9 18:57:21.629793 env[1192]: time="2024-02-09T18:57:21.629744963Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf\""
Feb  9 18:57:21.630297 env[1192]: time="2024-02-09T18:57:21.630253205Z" level=info msg="StartContainer for \"a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf\""
Feb  9 18:57:21.669603 env[1192]: time="2024-02-09T18:57:21.669535937Z" level=info msg="StartContainer for \"a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf\" returns successfully"
Feb  9 18:57:21.686024 env[1192]: time="2024-02-09T18:57:21.685967502Z" level=info msg="shim disconnected" id=a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf
Feb  9 18:57:21.686024 env[1192]: time="2024-02-09T18:57:21.686020632Z" level=warning msg="cleaning up after shim disconnected" id=a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf namespace=k8s.io
Feb  9 18:57:21.686024 env[1192]: time="2024-02-09T18:57:21.686030170Z" level=info msg="cleaning up dead shim"
Feb  9 18:57:21.692628 env[1192]: time="2024-02-09T18:57:21.692589285Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3658 runtime=io.containerd.runc.v2\n"
Feb  9 18:57:21.749718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a70b50faa3e4960a19d637716ad40cf082804606b0d0af0576e938befaace8cf-rootfs.mount: Deactivated successfully.
Feb  9 18:57:22.150117 kubelet[1504]: E0209 18:57:22.150056    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:22.166894 kubelet[1504]: E0209 18:57:22.166845    1504 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 18:57:22.619335 kubelet[1504]: E0209 18:57:22.619307    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:22.621263 env[1192]: time="2024-02-09T18:57:22.621219038Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 18:57:22.634358 env[1192]: time="2024-02-09T18:57:22.634316105Z" level=info msg="CreateContainer within sandbox \"9bca7dc126c85d0663d59a2f77816b09d350016d00c18d5d0d0c11c8bb9d5ab0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b201cf7fe7c606591d50ae596c71ef9e4a07775d1854a6aa7960073a776d8dc7\""
Feb  9 18:57:22.634994 env[1192]: time="2024-02-09T18:57:22.634945190Z" level=info msg="StartContainer for \"b201cf7fe7c606591d50ae596c71ef9e4a07775d1854a6aa7960073a776d8dc7\""
Feb  9 18:57:22.678196 env[1192]: time="2024-02-09T18:57:22.678144522Z" level=info msg="StartContainer for \"b201cf7fe7c606591d50ae596c71ef9e4a07775d1854a6aa7960073a776d8dc7\" returns successfully"
Feb  9 18:57:22.749797 systemd[1]: run-containerd-runc-k8s.io-b201cf7fe7c606591d50ae596c71ef9e4a07775d1854a6aa7960073a776d8dc7-runc.7aQ13H.mount: Deactivated successfully.
Feb  9 18:57:22.929845 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb  9 18:57:23.151191 kubelet[1504]: E0209 18:57:23.151142    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:23.233555 kubelet[1504]: E0209 18:57:23.233425    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:23.623685 kubelet[1504]: E0209 18:57:23.623572    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:23.635182 kubelet[1504]: I0209 18:57:23.635144    1504 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xztfw" podStartSLOduration=7.635103622 pod.CreationTimestamp="2024-02-09 18:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:57:23.634852819 +0000 UTC m=+87.097603290" watchObservedRunningTime="2024-02-09 18:57:23.635103622 +0000 UTC m=+87.097854094"
Feb  9 18:57:24.151827 kubelet[1504]: E0209 18:57:24.151761    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:24.626407 kubelet[1504]: E0209 18:57:24.626346    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:25.152444 kubelet[1504]: E0209 18:57:25.152403    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:25.456615 systemd-networkd[1075]: lxc_health: Link UP
Feb  9 18:57:25.474898 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 18:57:25.474955 systemd-networkd[1075]: lxc_health: Gained carrier
Feb  9 18:57:25.628214 kubelet[1504]: E0209 18:57:25.628148    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:26.152614 kubelet[1504]: E0209 18:57:26.152550    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:26.630305 kubelet[1504]: E0209 18:57:26.630262    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:27.153042 kubelet[1504]: E0209 18:57:27.153007    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:27.267072 systemd-networkd[1075]: lxc_health: Gained IPv6LL
Feb  9 18:57:27.631327 kubelet[1504]: E0209 18:57:27.631293    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:28.153774 kubelet[1504]: E0209 18:57:28.153715    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:28.632771 kubelet[1504]: E0209 18:57:28.632743    1504 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb  9 18:57:29.153892 kubelet[1504]: E0209 18:57:29.153829    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:30.154488 kubelet[1504]: E0209 18:57:30.154438    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:31.155540 kubelet[1504]: E0209 18:57:31.155497    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:32.156061 kubelet[1504]: E0209 18:57:32.155990    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb  9 18:57:33.156204 kubelet[1504]: E0209 18:57:33.156153    1504 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"