Feb  9 08:54:25.033293 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb  9 08:54:25.033334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  9 08:54:25.033354 kernel: BIOS-provided physical RAM map:
Feb  9 08:54:25.033363 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb  9 08:54:25.033373 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb  9 08:54:25.033382 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb  9 08:54:25.033394 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb  9 08:54:25.033406 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb  9 08:54:25.033422 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb  9 08:54:25.033432 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb  9 08:54:25.033443 kernel: NX (Execute Disable) protection: active
Feb  9 08:54:25.033454 kernel: SMBIOS 2.8 present.
Feb  9 08:54:25.033464 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb  9 08:54:25.033475 kernel: Hypervisor detected: KVM
Feb  9 08:54:25.033487 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb  9 08:54:25.033501 kernel: kvm-clock: cpu 0, msr 61faa001, primary cpu clock
Feb  9 08:54:25.033511 kernel: kvm-clock: using sched offset of 3660576907 cycles
Feb  9 08:54:25.033523 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb  9 08:54:25.033533 kernel: tsc: Detected 2494.138 MHz processor
Feb  9 08:54:25.033543 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb  9 08:54:25.033554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb  9 08:54:25.033564 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb  9 08:54:25.033575 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb  9 08:54:25.033589 kernel: ACPI: Early table checksum verification disabled
Feb  9 08:54:25.033600 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb  9 08:54:25.033611 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033621 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033632 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033642 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb  9 08:54:25.033652 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033662 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033672 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033686 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb  9 08:54:25.033696 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb  9 08:54:25.033707 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb  9 08:54:25.033716 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb  9 08:54:25.033727 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb  9 08:54:25.033737 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb  9 08:54:25.033747 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb  9 08:54:25.033758 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb  9 08:54:25.033778 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb  9 08:54:25.033788 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb  9 08:54:25.033800 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb  9 08:54:25.033811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb  9 08:54:25.033822 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb  9 08:54:25.033834 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb  9 08:54:25.033849 kernel: Zone ranges:
Feb  9 08:54:25.033860 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb  9 08:54:25.033871 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffd7fff]
Feb  9 08:54:25.033882 kernel:   Normal   empty
Feb  9 08:54:25.033893 kernel: Movable zone start for each node
Feb  9 08:54:25.033904 kernel: Early memory node ranges
Feb  9 08:54:25.033915 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb  9 08:54:25.033926 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb  9 08:54:25.033937 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb  9 08:54:25.033951 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb  9 08:54:25.033962 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb  9 08:54:25.033992 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb  9 08:54:25.034003 kernel: ACPI: PM-Timer IO Port: 0x608
Feb  9 08:54:25.034015 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb  9 08:54:25.034027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb  9 08:54:25.034039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb  9 08:54:25.034049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb  9 08:54:25.034060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb  9 08:54:25.034077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb  9 08:54:25.034088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb  9 08:54:25.034099 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb  9 08:54:25.034111 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb  9 08:54:25.034122 kernel: TSC deadline timer available
Feb  9 08:54:25.034133 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb  9 08:54:25.034145 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb  9 08:54:25.034156 kernel: Booting paravirtualized kernel on KVM
Feb  9 08:54:25.034167 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb  9 08:54:25.034182 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb  9 08:54:25.034193 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb  9 08:54:25.034204 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb  9 08:54:25.034215 kernel: pcpu-alloc: [0] 0 1 
Feb  9 08:54:25.034226 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb  9 08:54:25.034237 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb  9 08:54:25.034247 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515800
Feb  9 08:54:25.034259 kernel: Policy zone: DMA32
Feb  9 08:54:25.034272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  9 08:54:25.034287 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb  9 08:54:25.034297 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb  9 08:54:25.034308 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb  9 08:54:25.034319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb  9 08:54:25.034330 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb  9 08:54:25.034341 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb  9 08:54:25.034353 kernel: Kernel/User page tables isolation: enabled
Feb  9 08:54:25.034363 kernel: ftrace: allocating 34475 entries in 135 pages
Feb  9 08:54:25.034377 kernel: ftrace: allocated 135 pages with 4 groups
Feb  9 08:54:25.034388 kernel: rcu: Hierarchical RCU implementation.
Feb  9 08:54:25.034401 kernel: rcu:         RCU event tracing is enabled.
Feb  9 08:54:25.034412 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb  9 08:54:25.034423 kernel:         Rude variant of Tasks RCU enabled.
Feb  9 08:54:25.034435 kernel:         Tracing variant of Tasks RCU enabled.
Feb  9 08:54:25.034447 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb  9 08:54:25.034458 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb  9 08:54:25.034470 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb  9 08:54:25.034485 kernel: random: crng init done
Feb  9 08:54:25.034496 kernel: Console: colour VGA+ 80x25
Feb  9 08:54:25.034508 kernel: printk: console [tty0] enabled
Feb  9 08:54:25.034520 kernel: printk: console [ttyS0] enabled
Feb  9 08:54:25.034531 kernel: ACPI: Core revision 20210730
Feb  9 08:54:25.034543 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb  9 08:54:25.034554 kernel: APIC: Switch to symmetric I/O mode setup
Feb  9 08:54:25.034566 kernel: x2apic enabled
Feb  9 08:54:25.034577 kernel: Switched APIC routing to physical x2apic.
Feb  9 08:54:25.034592 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb  9 08:54:25.034604 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb  9 08:54:25.034647 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb  9 08:54:25.034660 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb  9 08:54:25.034671 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb  9 08:54:25.034683 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb  9 08:54:25.034695 kernel: Spectre V2 : Mitigation: Retpolines
Feb  9 08:54:25.034707 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb  9 08:54:25.034737 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb  9 08:54:25.034755 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb  9 08:54:25.034780 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb  9 08:54:25.034795 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb  9 08:54:25.034811 kernel: MDS: Mitigation: Clear CPU buffers
Feb  9 08:54:25.034824 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb  9 08:54:25.034837 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb  9 08:54:25.034850 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb  9 08:54:25.034864 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb  9 08:54:25.034878 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb  9 08:54:25.034893 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb  9 08:54:25.034912 kernel: Freeing SMP alternatives memory: 32K
Feb  9 08:54:25.034926 kernel: pid_max: default: 32768 minimum: 301
Feb  9 08:54:25.034938 kernel: LSM: Security Framework initializing
Feb  9 08:54:25.034950 kernel: SELinux:  Initializing.
Feb  9 08:54:25.034962 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb  9 08:54:25.034976 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb  9 08:54:25.034988 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2)
Feb  9 08:54:25.035025 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb  9 08:54:25.035038 kernel: signal: max sigframe size: 1776
Feb  9 08:54:25.035051 kernel: rcu: Hierarchical SRCU implementation.
Feb  9 08:54:25.035065 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb  9 08:54:25.035078 kernel: smp: Bringing up secondary CPUs ...
Feb  9 08:54:25.035092 kernel: x86: Booting SMP configuration:
Feb  9 08:54:25.035105 kernel: .... node  #0, CPUs:      #1
Feb  9 08:54:25.035118 kernel: kvm-clock: cpu 1, msr 61faa041, secondary cpu clock
Feb  9 08:54:25.035131 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb  9 08:54:25.035148 kernel: smp: Brought up 1 node, 2 CPUs
Feb  9 08:54:25.035161 kernel: smpboot: Max logical packages: 1
Feb  9 08:54:25.035175 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb  9 08:54:25.035188 kernel: devtmpfs: initialized
Feb  9 08:54:25.035201 kernel: x86/mm: Memory block size: 128MB
Feb  9 08:54:25.035215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb  9 08:54:25.035229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb  9 08:54:25.035242 kernel: pinctrl core: initialized pinctrl subsystem
Feb  9 08:54:25.035256 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb  9 08:54:25.035274 kernel: audit: initializing netlink subsys (disabled)
Feb  9 08:54:25.035287 kernel: audit: type=2000 audit(1707468864.944:1): state=initialized audit_enabled=0 res=1
Feb  9 08:54:25.035301 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb  9 08:54:25.035315 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb  9 08:54:25.035328 kernel: cpuidle: using governor menu
Feb  9 08:54:25.035342 kernel: ACPI: bus type PCI registered
Feb  9 08:54:25.035354 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb  9 08:54:25.035366 kernel: dca service started, version 1.12.1
Feb  9 08:54:25.035379 kernel: PCI: Using configuration type 1 for base access
Feb  9 08:54:25.035396 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb  9 08:54:25.035408 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb  9 08:54:25.035421 kernel: ACPI: Added _OSI(Module Device)
Feb  9 08:54:25.035433 kernel: ACPI: Added _OSI(Processor Device)
Feb  9 08:54:25.035445 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb  9 08:54:25.035458 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb  9 08:54:25.035471 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb  9 08:54:25.035483 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb  9 08:54:25.035497 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb  9 08:54:25.035515 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb  9 08:54:25.035528 kernel: ACPI: Interpreter enabled
Feb  9 08:54:25.035541 kernel: ACPI: PM: (supports S0 S5)
Feb  9 08:54:25.035553 kernel: ACPI: Using IOAPIC for interrupt routing
Feb  9 08:54:25.035566 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb  9 08:54:25.035579 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb  9 08:54:25.035592 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb  9 08:54:25.035889 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb  9 08:54:25.036050 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb  9 08:54:25.036069 kernel: acpiphp: Slot [3] registered
Feb  9 08:54:25.036082 kernel: acpiphp: Slot [4] registered
Feb  9 08:54:25.036096 kernel: acpiphp: Slot [5] registered
Feb  9 08:54:25.036110 kernel: acpiphp: Slot [6] registered
Feb  9 08:54:25.036123 kernel: acpiphp: Slot [7] registered
Feb  9 08:54:25.036137 kernel: acpiphp: Slot [8] registered
Feb  9 08:54:25.036151 kernel: acpiphp: Slot [9] registered
Feb  9 08:54:25.036170 kernel: acpiphp: Slot [10] registered
Feb  9 08:54:25.036183 kernel: acpiphp: Slot [11] registered
Feb  9 08:54:25.036196 kernel: acpiphp: Slot [12] registered
Feb  9 08:54:25.036211 kernel: acpiphp: Slot [13] registered
Feb  9 08:54:25.036225 kernel: acpiphp: Slot [14] registered
Feb  9 08:54:25.036238 kernel: acpiphp: Slot [15] registered
Feb  9 08:54:25.036252 kernel: acpiphp: Slot [16] registered
Feb  9 08:54:25.036266 kernel: acpiphp: Slot [17] registered
Feb  9 08:54:25.036280 kernel: acpiphp: Slot [18] registered
Feb  9 08:54:25.036294 kernel: acpiphp: Slot [19] registered
Feb  9 08:54:25.036310 kernel: acpiphp: Slot [20] registered
Feb  9 08:54:25.036322 kernel: acpiphp: Slot [21] registered
Feb  9 08:54:25.036335 kernel: acpiphp: Slot [22] registered
Feb  9 08:54:25.036347 kernel: acpiphp: Slot [23] registered
Feb  9 08:54:25.036359 kernel: acpiphp: Slot [24] registered
Feb  9 08:54:25.036372 kernel: acpiphp: Slot [25] registered
Feb  9 08:54:25.036385 kernel: acpiphp: Slot [26] registered
Feb  9 08:54:25.036398 kernel: acpiphp: Slot [27] registered
Feb  9 08:54:25.036411 kernel: acpiphp: Slot [28] registered
Feb  9 08:54:25.036427 kernel: acpiphp: Slot [29] registered
Feb  9 08:54:25.036441 kernel: acpiphp: Slot [30] registered
Feb  9 08:54:25.036454 kernel: acpiphp: Slot [31] registered
Feb  9 08:54:25.036467 kernel: PCI host bridge to bus 0000:00
Feb  9 08:54:25.036633 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb  9 08:54:25.036754 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb  9 08:54:25.036866 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb  9 08:54:25.036990 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb  9 08:54:25.037111 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb  9 08:54:25.037234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb  9 08:54:25.037388 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb  9 08:54:25.037524 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb  9 08:54:25.037657 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb  9 08:54:25.037780 kernel: pci 0000:00:01.1: reg 0x20: [io  0xc1e0-0xc1ef]
Feb  9 08:54:25.037910 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb  9 08:54:25.038043 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb  9 08:54:25.038162 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb  9 08:54:25.038288 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb  9 08:54:25.038449 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb  9 08:54:25.038586 kernel: pci 0000:00:01.2: reg 0x20: [io  0xc180-0xc19f]
Feb  9 08:54:25.038947 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb  9 08:54:25.039112 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb  9 08:54:25.039245 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb  9 08:54:25.039405 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb  9 08:54:25.039546 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb  9 08:54:25.039700 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb  9 08:54:25.039824 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb  9 08:54:25.039958 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb  9 08:54:25.046245 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb  9 08:54:25.046440 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb  9 08:54:25.046590 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc1a0-0xc1bf]
Feb  9 08:54:25.046801 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb  9 08:54:25.048450 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb  9 08:54:25.048657 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb  9 08:54:25.048807 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc1c0-0xc1df]
Feb  9 08:54:25.048957 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb  9 08:54:25.049137 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb  9 08:54:25.049309 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb  9 08:54:25.049447 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc100-0xc13f]
Feb  9 08:54:25.049578 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb  9 08:54:25.049717 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb  9 08:54:25.049886 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb  9 08:54:25.050068 kernel: pci 0000:00:06.0: reg 0x10: [io  0xc000-0xc07f]
Feb  9 08:54:25.050207 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb  9 08:54:25.050336 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb  9 08:54:25.050476 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb  9 08:54:25.050612 kernel: pci 0000:00:07.0: reg 0x10: [io  0xc080-0xc0ff]
Feb  9 08:54:25.050777 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb  9 08:54:25.050927 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb  9 08:54:25.051093 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb  9 08:54:25.051233 kernel: pci 0000:00:08.0: reg 0x10: [io  0xc140-0xc17f]
Feb  9 08:54:25.051373 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb  9 08:54:25.051391 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb  9 08:54:25.051404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb  9 08:54:25.051417 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb  9 08:54:25.051435 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb  9 08:54:25.051448 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb  9 08:54:25.051460 kernel: iommu: Default domain type: Translated 
Feb  9 08:54:25.051472 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb  9 08:54:25.051627 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb  9 08:54:25.051772 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb  9 08:54:25.051912 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb  9 08:54:25.051931 kernel: vgaarb: loaded
Feb  9 08:54:25.051952 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb  9 08:54:25.051965 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb  9 08:54:25.051993 kernel: PTP clock support registered
Feb  9 08:54:25.052006 kernel: PCI: Using ACPI for IRQ routing
Feb  9 08:54:25.052019 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb  9 08:54:25.052031 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb  9 08:54:25.052044 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb  9 08:54:25.052055 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb  9 08:54:25.052068 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb  9 08:54:25.052086 kernel: clocksource: Switched to clocksource kvm-clock
Feb  9 08:54:25.052102 kernel: VFS: Disk quotas dquot_6.6.0
Feb  9 08:54:25.052115 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb  9 08:54:25.052129 kernel: pnp: PnP ACPI init
Feb  9 08:54:25.052144 kernel: pnp: PnP ACPI: found 4 devices
Feb  9 08:54:25.052156 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb  9 08:54:25.052169 kernel: NET: Registered PF_INET protocol family
Feb  9 08:54:25.052182 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb  9 08:54:25.052195 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb  9 08:54:25.052214 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb  9 08:54:25.052227 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb  9 08:54:25.052241 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb  9 08:54:25.052254 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb  9 08:54:25.052269 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb  9 08:54:25.052283 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb  9 08:54:25.052300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb  9 08:54:25.052319 kernel: NET: Registered PF_XDP protocol family
Feb  9 08:54:25.052510 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb  9 08:54:25.052652 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb  9 08:54:25.052785 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb  9 08:54:25.052911 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb  9 08:54:25.053062 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb  9 08:54:25.053201 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb  9 08:54:25.053337 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb  9 08:54:25.053469 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb  9 08:54:25.053485 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb  9 08:54:25.053622 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 45150 usecs
Feb  9 08:54:25.053638 kernel: PCI: CLS 0 bytes, default 64
Feb  9 08:54:25.053650 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb  9 08:54:25.053663 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb  9 08:54:25.053675 kernel: Initialise system trusted keyrings
Feb  9 08:54:25.053687 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb  9 08:54:25.053699 kernel: Key type asymmetric registered
Feb  9 08:54:25.053711 kernel: Asymmetric key parser 'x509' registered
Feb  9 08:54:25.053723 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb  9 08:54:25.053739 kernel: io scheduler mq-deadline registered
Feb  9 08:54:25.053751 kernel: io scheduler kyber registered
Feb  9 08:54:25.053763 kernel: io scheduler bfq registered
Feb  9 08:54:25.053775 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb  9 08:54:25.053787 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb  9 08:54:25.053798 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb  9 08:54:25.053810 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb  9 08:54:25.053821 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb  9 08:54:25.053833 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb  9 08:54:25.053849 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb  9 08:54:25.053861 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb  9 08:54:25.053872 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb  9 08:54:25.060173 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb  9 08:54:25.060227 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb  9 08:54:25.060385 kernel: rtc_cmos 00:03: registered as rtc0
Feb  9 08:54:25.060518 kernel: rtc_cmos 00:03: setting system clock to 2024-02-09T08:54:24 UTC (1707468864)
Feb  9 08:54:25.060657 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb  9 08:54:25.060676 kernel: intel_pstate: CPU model not supported
Feb  9 08:54:25.060690 kernel: NET: Registered PF_INET6 protocol family
Feb  9 08:54:25.060702 kernel: Segment Routing with IPv6
Feb  9 08:54:25.060716 kernel: In-situ OAM (IOAM) with IPv6
Feb  9 08:54:25.060729 kernel: NET: Registered PF_PACKET protocol family
Feb  9 08:54:25.060742 kernel: Key type dns_resolver registered
Feb  9 08:54:25.060757 kernel: IPI shorthand broadcast: enabled
Feb  9 08:54:25.060771 kernel: sched_clock: Marking stable (709249275, 94466377)->(905125704, -101410052)
Feb  9 08:54:25.060791 kernel: registered taskstats version 1
Feb  9 08:54:25.060805 kernel: Loading compiled-in X.509 certificates
Feb  9 08:54:25.060818 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb  9 08:54:25.060832 kernel: Key type .fscrypt registered
Feb  9 08:54:25.060845 kernel: Key type fscrypt-provisioning registered
Feb  9 08:54:25.060857 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb  9 08:54:25.060871 kernel: ima: Allocated hash algorithm: sha1
Feb  9 08:54:25.060883 kernel: ima: No architecture policies found
Feb  9 08:54:25.060895 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb  9 08:54:25.060915 kernel: Write protecting the kernel read-only data: 28672k
Feb  9 08:54:25.060929 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb  9 08:54:25.060943 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb  9 08:54:25.060955 kernel: Run /init as init process
Feb  9 08:54:25.060968 kernel:   with arguments:
Feb  9 08:54:25.061033 kernel:     /init
Feb  9 08:54:25.061073 kernel:   with environment:
Feb  9 08:54:25.061088 kernel:     HOME=/
Feb  9 08:54:25.061102 kernel:     TERM=linux
Feb  9 08:54:25.061120 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb  9 08:54:25.061141 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 08:54:25.061160 systemd[1]: Detected virtualization kvm.
Feb  9 08:54:25.061177 systemd[1]: Detected architecture x86-64.
Feb  9 08:54:25.061192 systemd[1]: Running in initrd.
Feb  9 08:54:25.061207 systemd[1]: No hostname configured, using default hostname.
Feb  9 08:54:25.061221 systemd[1]: Hostname set to <localhost>.
Feb  9 08:54:25.061240 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 08:54:25.061255 systemd[1]: Queued start job for default target initrd.target.
Feb  9 08:54:25.061269 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 08:54:25.061284 systemd[1]: Reached target cryptsetup.target.
Feb  9 08:54:25.061298 systemd[1]: Reached target paths.target.
Feb  9 08:54:25.061311 systemd[1]: Reached target slices.target.
Feb  9 08:54:25.061327 systemd[1]: Reached target swap.target.
Feb  9 08:54:25.061338 systemd[1]: Reached target timers.target.
Feb  9 08:54:25.061355 systemd[1]: Listening on iscsid.socket.
Feb  9 08:54:25.061368 systemd[1]: Listening on iscsiuio.socket.
Feb  9 08:54:25.061381 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 08:54:25.061395 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 08:54:25.061409 systemd[1]: Listening on systemd-journald.socket.
Feb  9 08:54:25.061421 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 08:54:25.061436 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 08:54:25.061449 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 08:54:25.061465 systemd[1]: Reached target sockets.target.
Feb  9 08:54:25.061483 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 08:54:25.061498 systemd[1]: Finished network-cleanup.service.
Feb  9 08:54:25.061517 systemd[1]: Starting systemd-fsck-usr.service...
Feb  9 08:54:25.061532 systemd[1]: Starting systemd-journald.service...
Feb  9 08:54:25.061547 systemd[1]: Starting systemd-modules-load.service...
Feb  9 08:54:25.061563 systemd[1]: Starting systemd-resolved.service...
Feb  9 08:54:25.061579 systemd[1]: Starting systemd-vconsole-setup.service...
Feb  9 08:54:25.061592 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 08:54:25.061606 kernel: audit: type=1130 audit(1707468865.026:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.061621 systemd[1]: Finished systemd-fsck-usr.service.
Feb  9 08:54:25.061643 systemd-journald[183]: Journal started
Feb  9 08:54:25.061762 systemd-journald[183]: Runtime Journal (/run/log/journal/5f2ee9765da640d6871e98cb8169dd42) is 4.9M, max 39.5M, 34.5M free.
Feb  9 08:54:25.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.054172 systemd-modules-load[184]: Inserted module 'overlay'
Feb  9 08:54:25.100183 systemd[1]: Started systemd-journald.service.
Feb  9 08:54:25.100224 kernel: audit: type=1130 audit(1707468865.094:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.076533 systemd-resolved[185]: Positive Trust Anchors:
Feb  9 08:54:25.109356 kernel: audit: type=1130 audit(1707468865.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.109394 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb  9 08:54:25.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.076546 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 08:54:25.112844 kernel: audit: type=1130 audit(1707468865.109:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.076593 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 08:54:25.118166 kernel: audit: type=1130 audit(1707468865.112:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.118198 kernel: Bridge firewalling registered
Feb  9 08:54:25.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.080639 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb  9 08:54:25.100706 systemd[1]: Started systemd-resolved.service.
Feb  9 08:54:25.110130 systemd[1]: Finished systemd-vconsole-setup.service.
Feb  9 08:54:25.113525 systemd[1]: Reached target nss-lookup.target.
Feb  9 08:54:25.113720 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb  9 08:54:25.119627 systemd[1]: Starting dracut-cmdline-ask.service...
Feb  9 08:54:25.124798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 08:54:25.140816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 08:54:25.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.145000 kernel: audit: type=1130 audit(1707468865.140:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.145039 kernel: SCSI subsystem initialized
Feb  9 08:54:25.146584 systemd[1]: Finished dracut-cmdline-ask.service.
Feb  9 08:54:25.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.149997 kernel: audit: type=1130 audit(1707468865.146:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.151219 systemd[1]: Starting dracut-cmdline.service...
Feb  9 08:54:25.158048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb  9 08:54:25.158106 kernel: device-mapper: uevent: version 1.0.3
Feb  9 08:54:25.158120 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb  9 08:54:25.162169 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb  9 08:54:25.164144 systemd[1]: Finished systemd-modules-load.service.
Feb  9 08:54:25.174850 kernel: audit: type=1130 audit(1707468865.169:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.181428 dracut-cmdline[201]: dracut-dracut-053
Feb  9 08:54:25.174107 systemd[1]: Starting systemd-sysctl.service...
Feb  9 08:54:25.183226 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb  9 08:54:25.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.185448 systemd[1]: Finished systemd-sysctl.service.
Feb  9 08:54:25.189012 kernel: audit: type=1130 audit(1707468865.184:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.269065 kernel: Loading iSCSI transport class v2.0-870.
Feb  9 08:54:25.284002 kernel: iscsi: registered transport (tcp)
Feb  9 08:54:25.310310 kernel: iscsi: registered transport (qla4xxx)
Feb  9 08:54:25.310408 kernel: QLogic iSCSI HBA Driver
Feb  9 08:54:25.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.356761 systemd[1]: Finished dracut-cmdline.service.
Feb  9 08:54:25.358724 systemd[1]: Starting dracut-pre-udev.service...
Feb  9 08:54:25.419061 kernel: raid6: avx2x4   gen() 16554 MB/s
Feb  9 08:54:25.436049 kernel: raid6: avx2x4   xor()  7138 MB/s
Feb  9 08:54:25.453051 kernel: raid6: avx2x2   gen() 16220 MB/s
Feb  9 08:54:25.470070 kernel: raid6: avx2x2   xor() 19296 MB/s
Feb  9 08:54:25.487048 kernel: raid6: avx2x1   gen() 11882 MB/s
Feb  9 08:54:25.504051 kernel: raid6: avx2x1   xor() 14970 MB/s
Feb  9 08:54:25.521053 kernel: raid6: sse2x4   gen() 12158 MB/s
Feb  9 08:54:25.538043 kernel: raid6: sse2x4   xor()  7052 MB/s
Feb  9 08:54:25.555056 kernel: raid6: sse2x2   gen() 11960 MB/s
Feb  9 08:54:25.572076 kernel: raid6: sse2x2   xor()  7362 MB/s
Feb  9 08:54:25.589057 kernel: raid6: sse2x1   gen()  7686 MB/s
Feb  9 08:54:25.607171 kernel: raid6: sse2x1   xor()  4973 MB/s
Feb  9 08:54:25.607246 kernel: raid6: using algorithm avx2x4 gen() 16554 MB/s
Feb  9 08:54:25.607258 kernel: raid6: .... xor() 7138 MB/s, rmw enabled
Feb  9 08:54:25.608083 kernel: raid6: using avx2x2 recovery algorithm
Feb  9 08:54:25.624021 kernel: xor: automatically using best checksumming function   avx
Feb  9 08:54:25.730022 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb  9 08:54:25.742568 systemd[1]: Finished dracut-pre-udev.service.
Feb  9 08:54:25.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.742000 audit: BPF prog-id=7 op=LOAD
Feb  9 08:54:25.742000 audit: BPF prog-id=8 op=LOAD
Feb  9 08:54:25.744258 systemd[1]: Starting systemd-udevd.service...
Feb  9 08:54:25.761306 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb  9 08:54:25.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.768030 systemd[1]: Started systemd-udevd.service.
Feb  9 08:54:25.769678 systemd[1]: Starting dracut-pre-trigger.service...
Feb  9 08:54:25.786101 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation
Feb  9 08:54:25.823376 systemd[1]: Finished dracut-pre-trigger.service.
Feb  9 08:54:25.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.824771 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 08:54:25.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:25.890343 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 08:54:25.946575 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb  9 08:54:25.950010 kernel: scsi host0: Virtio SCSI HBA
Feb  9 08:54:25.960281 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb  9 08:54:25.960352 kernel: GPT:9289727 != 125829119
Feb  9 08:54:25.960372 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb  9 08:54:25.960389 kernel: GPT:9289727 != 125829119
Feb  9 08:54:25.960405 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb  9 08:54:25.960422 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 08:54:25.973018 kernel: cryptd: max_cpu_qlen set to 1000
Feb  9 08:54:25.992008 kernel: AVX2 version of gcm_enc/dec engaged.
Feb  9 08:54:25.992085 kernel: AES CTR mode by8 optimization enabled
Feb  9 08:54:26.018460 kernel: virtio_blk virtio5: [vdb] 1000 512-byte logical blocks (512 kB/500 KiB)
Feb  9 08:54:26.039001 kernel: libata version 3.00 loaded.
Feb  9 08:54:26.049003 kernel: ACPI: bus type USB registered
Feb  9 08:54:26.049059 kernel: usbcore: registered new interface driver usbfs
Feb  9 08:54:26.049072 kernel: usbcore: registered new interface driver hub
Feb  9 08:54:26.049084 kernel: usbcore: registered new device driver usb
Feb  9 08:54:26.056016 kernel: ata_piix 0000:00:01.1: version 2.13
Feb  9 08:54:26.060516 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb  9 08:54:26.061002 kernel: ehci-pci: EHCI PCI platform driver
Feb  9 08:54:26.082002 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442)
Feb  9 08:54:26.088350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb  9 08:54:26.112517 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb  9 08:54:26.112546 kernel: scsi host1: ata_piix
Feb  9 08:54:26.116184 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb  9 08:54:26.117298 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb  9 08:54:26.124240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb  9 08:54:26.133915 kernel: scsi host2: ata_piix
Feb  9 08:54:26.134233 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb  9 08:54:26.134262 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb  9 08:54:26.134408 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb  9 08:54:26.134427 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb  9 08:54:26.136002 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb  9 08:54:26.136243 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Feb  9 08:54:26.137585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 08:54:26.139826 kernel: hub 1-0:1.0: USB hub found
Feb  9 08:54:26.140086 kernel: hub 1-0:1.0: 2 ports detected
Feb  9 08:54:26.142023 systemd[1]: Starting disk-uuid.service...
Feb  9 08:54:26.148436 disk-uuid[496]: Primary Header is updated.
Feb  9 08:54:26.148436 disk-uuid[496]: Secondary Entries is updated.
Feb  9 08:54:26.148436 disk-uuid[496]: Secondary Header is updated.
Feb  9 08:54:26.158008 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 08:54:26.168018 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 08:54:27.165572 disk-uuid[502]: The operation has completed successfully.
Feb  9 08:54:27.166272 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb  9 08:54:27.238064 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb  9 08:54:27.239109 systemd[1]: Finished disk-uuid.service.
Feb  9 08:54:27.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.241655 systemd[1]: Starting verity-setup.service...
Feb  9 08:54:27.266001 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb  9 08:54:27.338075 systemd[1]: Found device dev-mapper-usr.device.
Feb  9 08:54:27.341138 systemd[1]: Mounting sysusr-usr.mount...
Feb  9 08:54:27.344095 systemd[1]: Finished verity-setup.service.
Feb  9 08:54:27.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.438271 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb  9 08:54:27.439087 systemd[1]: Mounted sysusr-usr.mount.
Feb  9 08:54:27.439687 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb  9 08:54:27.440718 systemd[1]: Starting ignition-setup.service...
Feb  9 08:54:27.444257 systemd[1]: Starting parse-ip-for-networkd.service...
Feb  9 08:54:27.456147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 08:54:27.456218 kernel: BTRFS info (device vda6): using free space tree
Feb  9 08:54:27.456245 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 08:54:27.482842 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb  9 08:54:27.496584 systemd[1]: Finished ignition-setup.service.
Feb  9 08:54:27.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.503413 systemd[1]: Starting ignition-fetch-offline.service...
Feb  9 08:54:27.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.630000 audit: BPF prog-id=9 op=LOAD
Feb  9 08:54:27.629629 systemd[1]: Finished parse-ip-for-networkd.service.
Feb  9 08:54:27.632928 systemd[1]: Starting systemd-networkd.service...
Feb  9 08:54:27.685036 systemd-networkd[688]: lo: Link UP
Feb  9 08:54:27.685064 systemd-networkd[688]: lo: Gained carrier
Feb  9 08:54:27.685822 systemd-networkd[688]: Enumeration completed
Feb  9 08:54:27.686252 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb  9 08:54:27.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.687341 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb  9 08:54:27.701989 ignition[616]: Ignition 2.14.0
Feb  9 08:54:27.689433 systemd-networkd[688]: eth1: Link UP
Feb  9 08:54:27.702006 ignition[616]: Stage: fetch-offline
Feb  9 08:54:27.689441 systemd-networkd[688]: eth1: Gained carrier
Feb  9 08:54:27.702126 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:27.690605 systemd[1]: Started systemd-networkd.service.
Feb  9 08:54:27.702164 ignition[616]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:27.693827 systemd[1]: Reached target network.target.
Feb  9 08:54:27.713184 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:27.699850 systemd[1]: Starting iscsiuio.service...
Feb  9 08:54:27.713402 ignition[616]: parsed url from cmdline: ""
Feb  9 08:54:27.707645 systemd-networkd[688]: eth0: Link UP
Feb  9 08:54:27.713407 ignition[616]: no config URL provided
Feb  9 08:54:27.707653 systemd-networkd[688]: eth0: Gained carrier
Feb  9 08:54:27.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.713416 ignition[616]: reading system config file "/usr/lib/ignition/user.ign"
Feb  9 08:54:27.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.718109 systemd[1]: Finished ignition-fetch-offline.service.
Feb  9 08:54:27.713428 ignition[616]: no config at "/usr/lib/ignition/user.ign"
Feb  9 08:54:27.720754 systemd[1]: Starting ignition-fetch.service...
Feb  9 08:54:27.713436 ignition[616]: failed to fetch config: resource requires networking
Feb  9 08:54:27.732226 systemd-networkd[688]: eth0: DHCPv4 address 164.90.156.194/20, gateway 164.90.144.1 acquired from 169.254.169.253
Feb  9 08:54:27.713643 ignition[616]: Ignition finished successfully
Feb  9 08:54:27.732308 systemd[1]: Started iscsiuio.service.
Feb  9 08:54:27.734754 systemd[1]: Starting iscsid.service...
Feb  9 08:54:27.738539 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253
Feb  9 08:54:27.748442 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 08:54:27.748442 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb  9 08:54:27.748442 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb  9 08:54:27.748442 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb  9 08:54:27.748442 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb  9 08:54:27.748442 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb  9 08:54:27.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.749051 systemd[1]: Started iscsid.service.
Feb  9 08:54:27.751806 systemd[1]: Starting dracut-initqueue.service...
Feb  9 08:54:27.767141 ignition[692]: Ignition 2.14.0
Feb  9 08:54:27.767160 ignition[692]: Stage: fetch
Feb  9 08:54:27.767422 ignition[692]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:27.767454 ignition[692]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:27.771552 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:27.771800 ignition[692]: parsed url from cmdline: ""
Feb  9 08:54:27.771809 ignition[692]: no config URL provided
Feb  9 08:54:27.771820 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb  9 08:54:27.771840 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb  9 08:54:27.771900 ignition[692]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb  9 08:54:27.787676 systemd[1]: Finished dracut-initqueue.service.
Feb  9 08:54:27.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.790283 systemd[1]: Reached target remote-fs-pre.target.
Feb  9 08:54:27.791691 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 08:54:27.792436 systemd[1]: Reached target remote-fs.target.
Feb  9 08:54:27.796803 systemd[1]: Starting dracut-pre-mount.service...
Feb  9 08:54:27.813791 systemd[1]: Finished dracut-pre-mount.service.
Feb  9 08:54:27.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.822517 ignition[692]: GET result: OK
Feb  9 08:54:27.822862 ignition[692]: parsing config with SHA512: fdcb2b1de49256e781952a506c7f016d2328e86e94abf2db3c2dc0d6dfcb19513f19216205a0429a515feedb3ec4a3134f6cf786f4c964a115b479db7e451690
Feb  9 08:54:27.885810 unknown[692]: fetched base config from "system"
Feb  9 08:54:27.886456 unknown[692]: fetched base config from "system"
Feb  9 08:54:27.886930 unknown[692]: fetched user config from "digitalocean"
Feb  9 08:54:27.888124 ignition[692]: fetch: fetch complete
Feb  9 08:54:27.888589 ignition[692]: fetch: fetch passed
Feb  9 08:54:27.889115 ignition[692]: Ignition finished successfully
Feb  9 08:54:27.891620 systemd[1]: Finished ignition-fetch.service.
Feb  9 08:54:27.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.893280 systemd[1]: Starting ignition-kargs.service...
Feb  9 08:54:27.912129 ignition[713]: Ignition 2.14.0
Feb  9 08:54:27.912143 ignition[713]: Stage: kargs
Feb  9 08:54:27.912276 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:27.912302 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:27.914135 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:27.916865 ignition[713]: kargs: kargs passed
Feb  9 08:54:27.917078 ignition[713]: Ignition finished successfully
Feb  9 08:54:27.919489 systemd[1]: Finished ignition-kargs.service.
Feb  9 08:54:27.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.921351 systemd[1]: Starting ignition-disks.service...
Feb  9 08:54:27.933581 ignition[719]: Ignition 2.14.0
Feb  9 08:54:27.934480 ignition[719]: Stage: disks
Feb  9 08:54:27.935188 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:27.935843 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:27.937918 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:27.941460 ignition[719]: disks: disks passed
Feb  9 08:54:27.942564 ignition[719]: Ignition finished successfully
Feb  9 08:54:27.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.943948 systemd[1]: Finished ignition-disks.service.
Feb  9 08:54:27.944488 systemd[1]: Reached target initrd-root-device.target.
Feb  9 08:54:27.944872 systemd[1]: Reached target local-fs-pre.target.
Feb  9 08:54:27.945288 systemd[1]: Reached target local-fs.target.
Feb  9 08:54:27.945606 systemd[1]: Reached target sysinit.target.
Feb  9 08:54:27.945890 systemd[1]: Reached target basic.target.
Feb  9 08:54:27.947531 systemd[1]: Starting systemd-fsck-root.service...
Feb  9 08:54:27.969817 systemd-fsck[727]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb  9 08:54:27.973728 systemd[1]: Finished systemd-fsck-root.service.
Feb  9 08:54:27.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:27.975940 systemd[1]: Mounting sysroot.mount...
Feb  9 08:54:27.990032 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb  9 08:54:27.991179 systemd[1]: Mounted sysroot.mount.
Feb  9 08:54:27.992382 systemd[1]: Reached target initrd-root-fs.target.
Feb  9 08:54:27.994854 systemd[1]: Mounting sysroot-usr.mount...
Feb  9 08:54:27.996898 systemd[1]: Starting flatcar-digitalocean-network.service...
Feb  9 08:54:27.999732 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb  9 08:54:28.000728 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb  9 08:54:28.001969 systemd[1]: Reached target ignition-diskful.target.
Feb  9 08:54:28.007014 systemd[1]: Mounted sysroot-usr.mount.
Feb  9 08:54:28.010396 systemd[1]: Starting initrd-setup-root.service...
Feb  9 08:54:28.021071 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory
Feb  9 08:54:28.038236 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory
Feb  9 08:54:28.053377 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory
Feb  9 08:54:28.067238 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory
Feb  9 08:54:28.164855 systemd[1]: Finished initrd-setup-root.service.
Feb  9 08:54:28.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.167371 systemd[1]: Starting ignition-mount.service...
Feb  9 08:54:28.169163 systemd[1]: Starting sysroot-boot.service...
Feb  9 08:54:28.183297 coreos-metadata[734]: Feb 09 08:54:28.183 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb  9 08:54:28.189856 bash[784]: umount: /sysroot/usr/share/oem: not mounted.
Feb  9 08:54:28.197310 coreos-metadata[734]: Feb 09 08:54:28.197 INFO Fetch successful
Feb  9 08:54:28.209925 ignition[786]: INFO     : Ignition 2.14.0
Feb  9 08:54:28.210848 coreos-metadata[734]: Feb 09 08:54:28.210 INFO wrote hostname ci-3510.3.2-e-7e5a76b0b8 to /sysroot/etc/hostname
Feb  9 08:54:28.211406 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb  9 08:54:28.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.212693 ignition[786]: INFO     : Stage: mount
Feb  9 08:54:28.213570 ignition[786]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:28.215001 ignition[786]: DEBUG    : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:28.220007 ignition[786]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:28.222401 ignition[786]: INFO     : mount: mount passed
Feb  9 08:54:28.223068 ignition[786]: INFO     : Ignition finished successfully
Feb  9 08:54:28.225270 systemd[1]: Finished ignition-mount.service.
Feb  9 08:54:28.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.226314 coreos-metadata[733]: Feb 09 08:54:28.225 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb  9 08:54:28.235876 systemd[1]: Finished sysroot-boot.service.
Feb  9 08:54:28.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.240299 coreos-metadata[733]: Feb 09 08:54:28.240 INFO Fetch successful
Feb  9 08:54:28.246125 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb  9 08:54:28.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:28.246245 systemd[1]: Finished flatcar-digitalocean-network.service.
Feb  9 08:54:28.368010 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb  9 08:54:28.378030 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (793)
Feb  9 08:54:28.389278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb  9 08:54:28.389394 kernel: BTRFS info (device vda6): using free space tree
Feb  9 08:54:28.389410 kernel: BTRFS info (device vda6): has skinny extents
Feb  9 08:54:28.397507 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb  9 08:54:28.399899 systemd[1]: Starting ignition-files.service...
Feb  9 08:54:28.424500 ignition[813]: INFO     : Ignition 2.14.0
Feb  9 08:54:28.424500 ignition[813]: INFO     : Stage: files
Feb  9 08:54:28.424500 ignition[813]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:28.424500 ignition[813]: DEBUG    : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:28.427557 ignition[813]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:28.431365 ignition[813]: DEBUG    : files: compiled without relabeling support, skipping
Feb  9 08:54:28.432668 ignition[813]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb  9 08:54:28.432668 ignition[813]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb  9 08:54:28.439257 ignition[813]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb  9 08:54:28.440182 ignition[813]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb  9 08:54:28.441458 unknown[813]: wrote ssh authorized keys file for user: core
Feb  9 08:54:28.442352 ignition[813]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb  9 08:54:28.443797 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  9 08:54:28.444699 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb  9 08:54:28.473595 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb  9 08:54:28.537192 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb  9 08:54:28.538229 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  9 08:54:28.539055 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb  9 08:54:29.032400 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb  9 08:54:29.240113 ignition[813]: DEBUG    : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb  9 08:54:29.241331 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb  9 08:54:29.241331 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  9 08:54:29.241331 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb  9 08:54:29.265775 systemd-networkd[688]: eth0: Gained IPv6LL
Feb  9 08:54:29.527131 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb  9 08:54:29.649707 systemd-networkd[688]: eth1: Gained IPv6LL
Feb  9 08:54:29.651083 ignition[813]: DEBUG    : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb  9 08:54:29.651083 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb  9 08:54:29.651083 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 08:54:29.653854 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb  9 08:54:29.653854 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/opt/bin/kubeadm"
Feb  9 08:54:29.653854 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb  9 08:54:29.798181 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb  9 08:54:30.138386 ignition[813]: DEBUG    : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb  9 08:54:30.138386 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb  9 08:54:30.138386 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/opt/bin/kubectl"
Feb  9 08:54:30.141267 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb  9 08:54:30.183909 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb  9 08:54:30.418345 ignition[813]: DEBUG    : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb  9 08:54:30.418345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb  9 08:54:30.418345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/opt/bin/kubelet"
Feb  9 08:54:30.421900 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb  9 08:54:30.465017 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb  9 08:54:31.063654 ignition[813]: DEBUG    : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb  9 08:54:31.065376 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb  9 08:54:31.065376 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/docker/daemon.json"
Feb  9 08:54:31.065376 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb  9 08:54:31.065376 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  9 08:54:31.065376 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb  9 08:54:31.515105 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb  9 08:54:31.610188 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb  9 08:54:31.610188 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/home/core/install.sh"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  9 08:54:31.612345 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: op(11): [started]  processing unit "coreos-metadata-sshkeys@.service"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: op(12): [started]  processing unit "prepare-cni-plugins.service"
Feb  9 08:54:31.622251 ignition[813]: INFO     : files: op(12): op(13): [started]  writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(14): [started]  processing unit "prepare-critools.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(14): op(15): [started]  writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(14): [finished] processing unit "prepare-critools.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(16): [started]  processing unit "prepare-helm.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(16): op(17): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(16): [finished] processing unit "prepare-helm.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(18): [started]  processing unit "containerd.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(18): op(19): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(18): [finished] processing unit "containerd.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(1a): [started]  setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(1b): [started]  setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 08:54:31.633241 ignition[813]: INFO     : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb  9 08:54:31.685910 kernel: kauditd_printk_skb: 29 callbacks suppressed
Feb  9 08:54:31.685939 kernel: audit: type=1130 audit(1707468871.660:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: op(1c): [started]  setting preset to enabled for "prepare-critools.service"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: op(1c): [finished] setting preset to enabled for "prepare-critools.service"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: op(1d): [started]  setting preset to enabled for "prepare-helm.service"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: op(1d): [finished] setting preset to enabled for "prepare-helm.service"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: createResultFile: createFiles: op(1e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb  9 08:54:31.686055 ignition[813]: INFO     : files: files passed
Feb  9 08:54:31.686055 ignition[813]: INFO     : Ignition finished successfully
Feb  9 08:54:31.704855 kernel: audit: type=1130 audit(1707468871.687:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.704911 kernel: audit: type=1130 audit(1707468871.690:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.704935 kernel: audit: type=1131 audit(1707468871.690:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.650244 systemd[1]: Finished ignition-files.service.
Feb  9 08:54:31.664101 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb  9 08:54:31.707171 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb  9 08:54:31.682099 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb  9 08:54:31.683293 systemd[1]: Starting ignition-quench.service...
Feb  9 08:54:31.688028 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb  9 08:54:31.688837 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb  9 08:54:31.688924 systemd[1]: Finished ignition-quench.service.
Feb  9 08:54:31.692152 systemd[1]: Reached target ignition-complete.target.
Feb  9 08:54:31.703034 systemd[1]: Starting initrd-parse-etc.service...
Feb  9 08:54:31.727107 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb  9 08:54:31.728031 systemd[1]: Finished initrd-parse-etc.service.
Feb  9 08:54:31.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.730252 systemd[1]: Reached target initrd-fs.target.
Feb  9 08:54:31.738782 kernel: audit: type=1130 audit(1707468871.728:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.738828 kernel: audit: type=1131 audit(1707468871.728:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.735950 systemd[1]: Reached target initrd.target.
Feb  9 08:54:31.738259 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb  9 08:54:31.739847 systemd[1]: Starting dracut-pre-pivot.service...
Feb  9 08:54:31.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.757930 systemd[1]: Finished dracut-pre-pivot.service.
Feb  9 08:54:31.762025 kernel: audit: type=1130 audit(1707468871.757:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.762391 systemd[1]: Starting initrd-cleanup.service...
Feb  9 08:54:31.775965 systemd[1]: Stopped target nss-lookup.target.
Feb  9 08:54:31.777012 systemd[1]: Stopped target remote-cryptsetup.target.
Feb  9 08:54:31.777990 systemd[1]: Stopped target timers.target.
Feb  9 08:54:31.779017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb  9 08:54:31.779607 systemd[1]: Stopped dracut-pre-pivot.service.
Feb  9 08:54:31.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.780687 systemd[1]: Stopped target initrd.target.
Feb  9 08:54:31.784162 kernel: audit: type=1131 audit(1707468871.779:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.784685 systemd[1]: Stopped target basic.target.
Feb  9 08:54:31.785304 systemd[1]: Stopped target ignition-complete.target.
Feb  9 08:54:31.786159 systemd[1]: Stopped target ignition-diskful.target.
Feb  9 08:54:31.787161 systemd[1]: Stopped target initrd-root-device.target.
Feb  9 08:54:31.787835 systemd[1]: Stopped target remote-fs.target.
Feb  9 08:54:31.788485 systemd[1]: Stopped target remote-fs-pre.target.
Feb  9 08:54:31.789144 systemd[1]: Stopped target sysinit.target.
Feb  9 08:54:31.789847 systemd[1]: Stopped target local-fs.target.
Feb  9 08:54:31.790442 systemd[1]: Stopped target local-fs-pre.target.
Feb  9 08:54:31.791236 systemd[1]: Stopped target swap.target.
Feb  9 08:54:31.791868 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb  9 08:54:31.796251 kernel: audit: type=1131 audit(1707468871.791:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.791999 systemd[1]: Stopped dracut-pre-mount.service.
Feb  9 08:54:31.792753 systemd[1]: Stopped target cryptsetup.target.
Feb  9 08:54:31.800851 kernel: audit: type=1131 audit(1707468871.796:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.796683 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb  9 08:54:31.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.796871 systemd[1]: Stopped dracut-initqueue.service.
Feb  9 08:54:31.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.797759 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb  9 08:54:31.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.797897 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb  9 08:54:31.801410 systemd[1]: ignition-files.service: Deactivated successfully.
Feb  9 08:54:31.801546 systemd[1]: Stopped ignition-files.service.
Feb  9 08:54:31.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.802253 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb  9 08:54:31.802388 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb  9 08:54:31.804663 systemd[1]: Stopping ignition-mount.service...
Feb  9 08:54:31.805216 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb  9 08:54:31.805388 systemd[1]: Stopped kmod-static-nodes.service.
Feb  9 08:54:31.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.820489 ignition[851]: INFO     : Ignition 2.14.0
Feb  9 08:54:31.820489 ignition[851]: INFO     : Stage: umount
Feb  9 08:54:31.820489 ignition[851]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb  9 08:54:31.820489 ignition[851]: DEBUG    : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Feb  9 08:54:31.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.807687 systemd[1]: Stopping sysroot-boot.service...
Feb  9 08:54:31.832139 ignition[851]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb  9 08:54:31.832139 ignition[851]: INFO     : umount: umount passed
Feb  9 08:54:31.832139 ignition[851]: INFO     : Ignition finished successfully
Feb  9 08:54:31.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.819340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb  9 08:54:31.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.819573 systemd[1]: Stopped systemd-udev-trigger.service.
Feb  9 08:54:31.820461 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb  9 08:54:31.820593 systemd[1]: Stopped dracut-pre-trigger.service.
Feb  9 08:54:31.826158 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb  9 08:54:31.826297 systemd[1]: Finished initrd-cleanup.service.
Feb  9 08:54:31.832273 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb  9 08:54:31.832403 systemd[1]: Stopped ignition-mount.service.
Feb  9 08:54:31.834557 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb  9 08:54:31.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.834737 systemd[1]: Stopped ignition-disks.service.
Feb  9 08:54:31.835168 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb  9 08:54:31.835216 systemd[1]: Stopped ignition-kargs.service.
Feb  9 08:54:31.835839 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb  9 08:54:31.835899 systemd[1]: Stopped ignition-fetch.service.
Feb  9 08:54:31.836389 systemd[1]: Stopped target network.target.
Feb  9 08:54:31.837170 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb  9 08:54:31.837234 systemd[1]: Stopped ignition-fetch-offline.service.
Feb  9 08:54:31.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.838510 systemd[1]: Stopped target paths.target.
Feb  9 08:54:31.849232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb  9 08:54:31.851048 systemd[1]: Stopped systemd-ask-password-console.path.
Feb  9 08:54:31.867000 audit: BPF prog-id=6 op=UNLOAD
Feb  9 08:54:31.851435 systemd[1]: Stopped target slices.target.
Feb  9 08:54:31.852346 systemd[1]: Stopped target sockets.target.
Feb  9 08:54:31.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.853240 systemd[1]: iscsid.socket: Deactivated successfully.
Feb  9 08:54:31.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.853274 systemd[1]: Closed iscsid.socket.
Feb  9 08:54:31.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.854122 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb  9 08:54:31.854162 systemd[1]: Closed iscsiuio.socket.
Feb  9 08:54:31.855109 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb  9 08:54:31.855163 systemd[1]: Stopped ignition-setup.service.
Feb  9 08:54:31.856086 systemd[1]: Stopping systemd-networkd.service...
Feb  9 08:54:31.857368 systemd[1]: Stopping systemd-resolved.service...
Feb  9 08:54:31.859634 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb  9 08:54:31.860119 systemd-networkd[688]: eth1: DHCPv6 lease lost
Feb  9 08:54:31.860937 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb  9 08:54:31.862138 systemd[1]: Stopped sysroot-boot.service.
Feb  9 08:54:31.863118 systemd-networkd[688]: eth0: DHCPv6 lease lost
Feb  9 08:54:31.880000 audit: BPF prog-id=9 op=UNLOAD
Feb  9 08:54:31.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.863538 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb  9 08:54:31.863653 systemd[1]: Stopped systemd-resolved.service.
Feb  9 08:54:31.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.864552 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb  9 08:54:31.864602 systemd[1]: Stopped initrd-setup-root.service.
Feb  9 08:54:31.865184 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb  9 08:54:31.865277 systemd[1]: Stopped systemd-networkd.service.
Feb  9 08:54:31.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.866265 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb  9 08:54:31.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.866297 systemd[1]: Closed systemd-networkd.socket.
Feb  9 08:54:31.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.867948 systemd[1]: Stopping network-cleanup.service...
Feb  9 08:54:31.868731 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb  9 08:54:31.868805 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb  9 08:54:31.871658 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 08:54:31.871707 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 08:54:31.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.872393 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb  9 08:54:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:31.872435 systemd[1]: Stopped systemd-modules-load.service.
Feb  9 08:54:31.872962 systemd[1]: Stopping systemd-udevd.service...
Feb  9 08:54:31.879760 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb  9 08:54:31.883087 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb  9 08:54:31.883211 systemd[1]: Stopped network-cleanup.service.
Feb  9 08:54:31.884092 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb  9 08:54:31.884270 systemd[1]: Stopped systemd-udevd.service.
Feb  9 08:54:31.885743 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb  9 08:54:31.885824 systemd[1]: Closed systemd-udevd-control.socket.
Feb  9 08:54:31.886871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb  9 08:54:31.886911 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb  9 08:54:31.887550 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb  9 08:54:31.887598 systemd[1]: Stopped dracut-pre-udev.service.
Feb  9 08:54:31.888480 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb  9 08:54:31.888518 systemd[1]: Stopped dracut-cmdline.service.
Feb  9 08:54:31.889179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb  9 08:54:31.889215 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb  9 08:54:31.890951 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb  9 08:54:31.891457 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb  9 08:54:31.891513 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb  9 08:54:31.901484 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb  9 08:54:31.901589 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb  9 08:54:31.902069 systemd[1]: Reached target initrd-switch-root.target.
Feb  9 08:54:31.904230 systemd[1]: Starting initrd-switch-root.service...
Feb  9 08:54:31.918877 systemd[1]: Switching root.
Feb  9 08:54:31.922000 audit: BPF prog-id=5 op=UNLOAD
Feb  9 08:54:31.923000 audit: BPF prog-id=4 op=UNLOAD
Feb  9 08:54:31.923000 audit: BPF prog-id=3 op=UNLOAD
Feb  9 08:54:31.923000 audit: BPF prog-id=8 op=UNLOAD
Feb  9 08:54:31.923000 audit: BPF prog-id=7 op=UNLOAD
Feb  9 08:54:31.942521 iscsid[698]: iscsid shutting down.
Feb  9 08:54:31.943216 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb  9 08:54:31.943290 systemd-journald[183]: Journal stopped
Feb  9 08:54:35.724590 kernel: SELinux:  Class mctp_socket not defined in policy.
Feb  9 08:54:35.724656 kernel: SELinux:  Class anon_inode not defined in policy.
Feb  9 08:54:35.724670 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb  9 08:54:35.724686 kernel: SELinux:  policy capability network_peer_controls=1
Feb  9 08:54:35.724698 kernel: SELinux:  policy capability open_perms=1
Feb  9 08:54:35.724709 kernel: SELinux:  policy capability extended_socket_class=1
Feb  9 08:54:35.724725 kernel: SELinux:  policy capability always_check_network=0
Feb  9 08:54:35.724737 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb  9 08:54:35.724754 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb  9 08:54:35.724765 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb  9 08:54:35.724779 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb  9 08:54:35.724798 systemd[1]: Successfully loaded SELinux policy in 45.105ms.
Feb  9 08:54:35.724829 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.171ms.
Feb  9 08:54:35.724843 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb  9 08:54:35.724856 systemd[1]: Detected virtualization kvm.
Feb  9 08:54:35.724868 systemd[1]: Detected architecture x86-64.
Feb  9 08:54:35.724880 systemd[1]: Detected first boot.
Feb  9 08:54:35.724893 systemd[1]: Hostname set to <ci-3510.3.2-e-7e5a76b0b8>.
Feb  9 08:54:35.724909 systemd[1]: Initializing machine ID from VM UUID.
Feb  9 08:54:35.724922 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb  9 08:54:35.724934 systemd[1]: Populated /etc with preset unit settings.
Feb  9 08:54:35.724947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 08:54:35.724960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 08:54:35.724989 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 08:54:35.729767 systemd[1]: Queued start job for default target multi-user.target.
Feb  9 08:54:35.729806 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb  9 08:54:35.729821 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb  9 08:54:35.729835 systemd[1]: Created slice system-addon\x2drun.slice.
Feb  9 08:54:35.729849 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb  9 08:54:35.729863 systemd[1]: Created slice system-getty.slice.
Feb  9 08:54:35.729875 systemd[1]: Created slice system-modprobe.slice.
Feb  9 08:54:35.729888 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb  9 08:54:35.729901 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb  9 08:54:35.729919 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb  9 08:54:35.729935 systemd[1]: Created slice user.slice.
Feb  9 08:54:35.729948 systemd[1]: Started systemd-ask-password-console.path.
Feb  9 08:54:35.729961 systemd[1]: Started systemd-ask-password-wall.path.
Feb  9 08:54:35.729998 systemd[1]: Set up automount boot.automount.
Feb  9 08:54:35.730012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb  9 08:54:35.730026 systemd[1]: Reached target integritysetup.target.
Feb  9 08:54:35.730039 systemd[1]: Reached target remote-cryptsetup.target.
Feb  9 08:54:35.730054 systemd[1]: Reached target remote-fs.target.
Feb  9 08:54:35.730066 systemd[1]: Reached target slices.target.
Feb  9 08:54:35.730079 systemd[1]: Reached target swap.target.
Feb  9 08:54:35.730091 systemd[1]: Reached target torcx.target.
Feb  9 08:54:35.730103 systemd[1]: Reached target veritysetup.target.
Feb  9 08:54:35.730116 systemd[1]: Listening on systemd-coredump.socket.
Feb  9 08:54:35.730127 systemd[1]: Listening on systemd-initctl.socket.
Feb  9 08:54:35.730140 systemd[1]: Listening on systemd-journald-audit.socket.
Feb  9 08:54:35.730152 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb  9 08:54:35.730167 systemd[1]: Listening on systemd-journald.socket.
Feb  9 08:54:35.730179 systemd[1]: Listening on systemd-networkd.socket.
Feb  9 08:54:35.730192 systemd[1]: Listening on systemd-udevd-control.socket.
Feb  9 08:54:35.730206 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb  9 08:54:35.730220 systemd[1]: Listening on systemd-userdbd.socket.
Feb  9 08:54:35.730232 systemd[1]: Mounting dev-hugepages.mount...
Feb  9 08:54:35.730244 systemd[1]: Mounting dev-mqueue.mount...
Feb  9 08:54:35.730257 systemd[1]: Mounting media.mount...
Feb  9 08:54:35.730271 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 08:54:35.730286 systemd[1]: Mounting sys-kernel-debug.mount...
Feb  9 08:54:35.730299 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb  9 08:54:35.730311 systemd[1]: Mounting tmp.mount...
Feb  9 08:54:35.730324 systemd[1]: Starting flatcar-tmpfiles.service...
Feb  9 08:54:35.730337 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  9 08:54:35.730350 systemd[1]: Starting kmod-static-nodes.service...
Feb  9 08:54:35.730362 systemd[1]: Starting modprobe@configfs.service...
Feb  9 08:54:35.730375 systemd[1]: Starting modprobe@dm_mod.service...
Feb  9 08:54:35.730387 systemd[1]: Starting modprobe@drm.service...
Feb  9 08:54:35.730402 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  9 08:54:35.730414 systemd[1]: Starting modprobe@fuse.service...
Feb  9 08:54:35.730426 systemd[1]: Starting modprobe@loop.service...
Feb  9 08:54:35.730439 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  9 08:54:35.730452 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb  9 08:54:35.730464 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb  9 08:54:35.730476 systemd[1]: Starting systemd-journald.service...
Feb  9 08:54:35.730488 kernel: fuse: init (API version 7.34)
Feb  9 08:54:35.730501 systemd[1]: Starting systemd-modules-load.service...
Feb  9 08:54:35.730515 kernel: loop: module loaded
Feb  9 08:54:35.730527 systemd[1]: Starting systemd-network-generator.service...
Feb  9 08:54:35.730539 systemd[1]: Starting systemd-remount-fs.service...
Feb  9 08:54:35.730552 systemd[1]: Starting systemd-udev-trigger.service...
Feb  9 08:54:35.730573 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 08:54:35.730586 systemd[1]: Mounted dev-hugepages.mount.
Feb  9 08:54:35.730601 systemd[1]: Mounted dev-mqueue.mount.
Feb  9 08:54:35.730632 systemd[1]: Mounted media.mount.
Feb  9 08:54:35.730651 systemd[1]: Mounted sys-kernel-debug.mount.
Feb  9 08:54:35.730668 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb  9 08:54:35.730686 systemd[1]: Mounted tmp.mount.
Feb  9 08:54:35.730705 systemd[1]: Finished kmod-static-nodes.service.
Feb  9 08:54:35.730721 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb  9 08:54:35.730734 systemd[1]: Finished modprobe@configfs.service.
Feb  9 08:54:35.730747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  9 08:54:35.730764 systemd[1]: Finished modprobe@dm_mod.service.
Feb  9 08:54:35.730794 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb  9 08:54:35.730807 systemd[1]: Finished modprobe@drm.service.
Feb  9 08:54:35.730821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  9 08:54:35.730834 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  9 08:54:35.730846 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb  9 08:54:35.730859 systemd[1]: Finished modprobe@fuse.service.
Feb  9 08:54:35.730872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  9 08:54:35.730888 systemd[1]: Finished modprobe@loop.service.
Feb  9 08:54:35.730900 systemd[1]: Finished systemd-modules-load.service.
Feb  9 08:54:35.730913 systemd[1]: Finished systemd-network-generator.service.
Feb  9 08:54:35.730933 systemd-journald[991]: Journal started
Feb  9 08:54:35.731016 systemd-journald[991]: Runtime Journal (/run/log/journal/5f2ee9765da640d6871e98cb8169dd42) is 4.9M, max 39.5M, 34.5M free.
Feb  9 08:54:35.542000 audit[1]: AVC avc:  denied  { audit_read } for  pid=1 comm="systemd" capability=37  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb  9 08:54:35.542000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb  9 08:54:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.722000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb  9 08:54:35.722000 audit[991]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcf96a6d80 a2=4000 a3=7ffcf96a6e1c items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 08:54:35.722000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb  9 08:54:35.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.737143 systemd[1]: Started systemd-journald.service.
Feb  9 08:54:35.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.734634 systemd[1]: Finished systemd-remount-fs.service.
Feb  9 08:54:35.735291 systemd[1]: Reached target network-pre.target.
Feb  9 08:54:35.738150 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb  9 08:54:35.739791 systemd[1]: Mounting sys-kernel-config.mount...
Feb  9 08:54:35.741100 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  9 08:54:35.743440 systemd[1]: Starting systemd-hwdb-update.service...
Feb  9 08:54:35.747069 systemd[1]: Starting systemd-journal-flush.service...
Feb  9 08:54:35.748221 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  9 08:54:35.752094 systemd[1]: Starting systemd-random-seed.service...
Feb  9 08:54:35.753179 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  9 08:54:35.763907 systemd-journald[991]: Time spent on flushing to /var/log/journal/5f2ee9765da640d6871e98cb8169dd42 is 48.315ms for 1111 entries.
Feb  9 08:54:35.763907 systemd-journald[991]: System Journal (/var/log/journal/5f2ee9765da640d6871e98cb8169dd42) is 8.0M, max 195.6M, 187.6M free.
Feb  9 08:54:35.827261 systemd-journald[991]: Received client request to flush runtime journal.
Feb  9 08:54:35.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.760372 systemd[1]: Starting systemd-sysctl.service...
Feb  9 08:54:35.763333 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb  9 08:54:35.765361 systemd[1]: Mounted sys-kernel-config.mount.
Feb  9 08:54:35.784222 systemd[1]: Finished systemd-random-seed.service.
Feb  9 08:54:35.784751 systemd[1]: Reached target first-boot-complete.target.
Feb  9 08:54:35.811813 systemd[1]: Finished systemd-sysctl.service.
Feb  9 08:54:35.826099 systemd[1]: Finished flatcar-tmpfiles.service.
Feb  9 08:54:35.830155 systemd[1]: Starting systemd-sysusers.service...
Feb  9 08:54:35.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.844036 systemd[1]: Finished systemd-journal-flush.service.
Feb  9 08:54:35.872716 systemd[1]: Finished systemd-sysusers.service.
Feb  9 08:54:35.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.875837 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb  9 08:54:35.882245 systemd[1]: Finished systemd-udev-trigger.service.
Feb  9 08:54:35.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:35.884630 systemd[1]: Starting systemd-udev-settle.service...
Feb  9 08:54:35.908906 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb  9 08:54:35.931676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb  9 08:54:35.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.447080 systemd[1]: Finished systemd-hwdb-update.service.
Feb  9 08:54:36.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.449640 systemd[1]: Starting systemd-udevd.service...
Feb  9 08:54:36.482010 systemd-udevd[1058]: Using default interface naming scheme 'v252'.
Feb  9 08:54:36.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.520527 systemd[1]: Started systemd-udevd.service.
Feb  9 08:54:36.525555 systemd[1]: Starting systemd-networkd.service...
Feb  9 08:54:36.536112 systemd[1]: Starting systemd-userdbd.service...
Feb  9 08:54:36.607020 systemd[1]: Found device dev-ttyS0.device.
Feb  9 08:54:36.609382 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 08:54:36.609733 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb  9 08:54:36.617336 systemd[1]: Starting modprobe@dm_mod.service...
Feb  9 08:54:36.619477 systemd[1]: Starting modprobe@efi_pstore.service...
Feb  9 08:54:36.621509 systemd[1]: Starting modprobe@loop.service...
Feb  9 08:54:36.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.623179 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb  9 08:54:36.623260 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb  9 08:54:36.623368 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb  9 08:54:36.626428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb  9 08:54:36.626705 systemd[1]: Finished modprobe@dm_mod.service.
Feb  9 08:54:36.627636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb  9 08:54:36.627838 systemd[1]: Finished modprobe@efi_pstore.service.
Feb  9 08:54:36.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.633610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb  9 08:54:36.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.650916 systemd[1]: Finished modprobe@loop.service.
Feb  9 08:54:36.655780 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb  9 08:54:36.655848 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb  9 08:54:36.676056 kernel: kauditd_printk_skb: 84 callbacks suppressed
Feb  9 08:54:36.676159 kernel: audit: type=1130 audit(1707468876.669:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.670054 systemd[1]: Started systemd-userdbd.service.
Feb  9 08:54:36.757298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb  9 08:54:36.761462 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb  9 08:54:36.768038 kernel: ACPI: button: Power Button [PWRF]
Feb  9 08:54:36.788819 systemd-networkd[1069]: lo: Link UP
Feb  9 08:54:36.788831 systemd-networkd[1069]: lo: Gained carrier
Feb  9 08:54:36.789579 systemd-networkd[1069]: Enumeration completed
Feb  9 08:54:36.795208 kernel: audit: type=1130 audit(1707468876.789:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:36.789689 systemd-networkd[1069]: eth1: Configuring with /run/systemd/network/10-f6:5b:55:4a:6c:5e.network.
Feb  9 08:54:36.789761 systemd[1]: Started systemd-networkd.service.
Feb  9 08:54:36.791537 systemd-networkd[1069]: eth0: Configuring with /run/systemd/network/10-72:11:f5:f9:2f:2e.network.
Feb  9 08:54:36.792337 systemd-networkd[1069]: eth1: Link UP
Feb  9 08:54:36.792342 systemd-networkd[1069]: eth1: Gained carrier
Feb  9 08:54:36.796402 systemd-networkd[1069]: eth0: Link UP
Feb  9 08:54:36.796414 systemd-networkd[1069]: eth0: Gained carrier
Feb  9 08:54:36.809000 audit[1068]: AVC avc:  denied  { confidentiality } for  pid=1068 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  9 08:54:36.843025 kernel: audit: type=1400 audit(1707468876.809:127): avc:  denied  { confidentiality } for  pid=1068 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb  9 08:54:36.809000 audit[1068]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e74ee05890 a1=32194 a2=7fe435f92bc5 a3=5 items=108 ppid=1058 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 08:54:36.851007 kernel: audit: type=1300 audit(1707468876.809:127): arch=c000003e syscall=175 success=yes exit=0 a0=55e74ee05890 a1=32194 a2=7fe435f92bc5 a3=5 items=108 ppid=1058 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 08:54:36.809000 audit: CWD cwd="/"
Feb  9 08:54:36.809000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.857830 kernel: audit: type=1307 audit(1707468876.809:127): cwd="/"
Feb  9 08:54:36.857919 kernel: audit: type=1302 audit(1707468876.809:127): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.857941 kernel: audit: type=1302 audit(1707468876.809:127): item=1 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=1 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=2 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.863816 kernel: audit: type=1302 audit(1707468876.809:127): item=2 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=3 name=(null) inode=14875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=4 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.872028 kernel: audit: type=1302 audit(1707468876.809:127): item=3 name=(null) inode=14875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.872123 kernel: audit: type=1302 audit(1707468876.809:127): item=4 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=5 name=(null) inode=14876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=6 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=7 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=8 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=9 name=(null) inode=14878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=10 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=11 name=(null) inode=14879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=12 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=13 name=(null) inode=14880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=14 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=15 name=(null) inode=14881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=16 name=(null) inode=14877 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=17 name=(null) inode=14882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=18 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=19 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=20 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=21 name=(null) inode=14884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=22 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=23 name=(null) inode=14885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=24 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=25 name=(null) inode=14886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=26 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=27 name=(null) inode=14887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=28 name=(null) inode=14883 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=29 name=(null) inode=14888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=30 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=31 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=32 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=33 name=(null) inode=14890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=34 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=35 name=(null) inode=14891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=36 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=37 name=(null) inode=14892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=38 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=39 name=(null) inode=14893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=40 name=(null) inode=14889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=41 name=(null) inode=14894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=42 name=(null) inode=14874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=43 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=44 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=45 name=(null) inode=14896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=46 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=47 name=(null) inode=14897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=48 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=49 name=(null) inode=14898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=50 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=51 name=(null) inode=14899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=52 name=(null) inode=14895 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=53 name=(null) inode=14900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=55 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=56 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=57 name=(null) inode=14902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=58 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=59 name=(null) inode=14903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=60 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=61 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=62 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=63 name=(null) inode=14905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=64 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=65 name=(null) inode=14906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=66 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.876015 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb  9 08:54:36.809000 audit: PATH item=67 name=(null) inode=14907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=68 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=69 name=(null) inode=14908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=70 name=(null) inode=14904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=71 name=(null) inode=14909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=72 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=73 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=74 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=75 name=(null) inode=14911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=76 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=77 name=(null) inode=14912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=78 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=79 name=(null) inode=14913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=80 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=81 name=(null) inode=14914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=82 name=(null) inode=14910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=83 name=(null) inode=14915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=84 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=85 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=86 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=87 name=(null) inode=14917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=88 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=89 name=(null) inode=14918 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=90 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=91 name=(null) inode=14919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=92 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=93 name=(null) inode=14920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=94 name=(null) inode=14916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=95 name=(null) inode=14921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=96 name=(null) inode=14901 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=97 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=98 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=99 name=(null) inode=14923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=100 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=101 name=(null) inode=14924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=102 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=103 name=(null) inode=14925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=104 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=105 name=(null) inode=14926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=106 name=(null) inode=14922 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PATH item=107 name=(null) inode=14927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb  9 08:54:36.809000 audit: PROCTITLE proctitle="(udev-worker)"
Feb  9 08:54:36.889057 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb  9 08:54:36.897037 kernel: mousedev: PS/2 mouse device common for all mice
Feb  9 08:54:36.993004 kernel: EDAC MC: Ver: 3.0.0
Feb  9 08:54:37.012622 systemd[1]: Finished systemd-udev-settle.service.
Feb  9 08:54:37.014938 systemd[1]: Starting lvm2-activation-early.service...
Feb  9 08:54:37.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.041444 lvm[1101]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 08:54:37.077770 systemd[1]: Finished lvm2-activation-early.service.
Feb  9 08:54:37.078582 systemd[1]: Reached target cryptsetup.target.
Feb  9 08:54:37.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.081086 systemd[1]: Starting lvm2-activation.service...
Feb  9 08:54:37.089284 lvm[1103]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb  9 08:54:37.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.119891 systemd[1]: Finished lvm2-activation.service.
Feb  9 08:54:37.120426 systemd[1]: Reached target local-fs-pre.target.
Feb  9 08:54:37.122458 systemd[1]: Mounting media-configdrive.mount...
Feb  9 08:54:37.123444 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb  9 08:54:37.123575 systemd[1]: Reached target machines.target.
Feb  9 08:54:37.125261 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb  9 08:54:37.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.139513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb  9 08:54:37.145716 kernel: ISO 9660 Extensions: RRIP_1991A
Feb  9 08:54:37.142956 systemd[1]: Mounted media-configdrive.mount.
Feb  9 08:54:37.143422 systemd[1]: Reached target local-fs.target.
Feb  9 08:54:37.145329 systemd[1]: Starting ldconfig.service...
Feb  9 08:54:37.147461 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb  9 08:54:37.147510 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 08:54:37.150168 systemd[1]: Starting systemd-boot-update.service...
Feb  9 08:54:37.156533 systemd[1]: Starting systemd-machine-id-commit.service...
Feb  9 08:54:37.159325 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb  9 08:54:37.159411 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb  9 08:54:37.161081 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb  9 08:54:37.173940 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl)
Feb  9 08:54:37.175463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb  9 08:54:37.197148 systemd-tmpfiles[1115]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb  9 08:54:37.200236 systemd-tmpfiles[1115]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb  9 08:54:37.203612 systemd-tmpfiles[1115]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb  9 08:54:37.230200 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb  9 08:54:37.231169 systemd[1]: Finished systemd-machine-id-commit.service.
Feb  9 08:54:37.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.293618 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31)
Feb  9 08:54:37.293618 systemd-fsck[1119]: /dev/vda1: 789 files, 115332/258078 clusters
Feb  9 08:54:37.297905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb  9 08:54:37.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.300242 systemd[1]: Mounting boot.mount...
Feb  9 08:54:37.319614 systemd[1]: Mounted boot.mount.
Feb  9 08:54:37.339169 systemd[1]: Finished systemd-boot-update.service.
Feb  9 08:54:37.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.453918 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb  9 08:54:37.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.456600 systemd[1]: Starting audit-rules.service...
Feb  9 08:54:37.460243 systemd[1]: Starting clean-ca-certificates.service...
Feb  9 08:54:37.463808 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb  9 08:54:37.473821 systemd[1]: Starting systemd-resolved.service...
Feb  9 08:54:37.482885 systemd[1]: Starting systemd-timesyncd.service...
Feb  9 08:54:37.493729 systemd[1]: Starting systemd-update-utmp.service...
Feb  9 08:54:37.497683 systemd[1]: Finished clean-ca-certificates.service.
Feb  9 08:54:37.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.506000 audit[1137]: SYSTEM_BOOT pid=1137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.519804 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb  9 08:54:37.524889 systemd[1]: Finished systemd-update-utmp.service.
Feb  9 08:54:37.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.555795 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb  9 08:54:37.567274 systemd[1]: Finished ldconfig.service.
Feb  9 08:54:37.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.572694 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb  9 08:54:37.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.575455 systemd[1]: Starting systemd-update-done.service...
Feb  9 08:54:37.597484 systemd[1]: Finished systemd-update-done.service.
Feb  9 08:54:37.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb  9 08:54:37.623000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb  9 08:54:37.623000 audit[1152]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe9b558b50 a2=420 a3=0 items=0 ppid=1127 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb  9 08:54:37.623000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb  9 08:54:37.625527 augenrules[1152]: No rules
Feb  9 08:54:37.625695 systemd[1]: Finished audit-rules.service.
Feb  9 08:54:37.661169 systemd[1]: Started systemd-timesyncd.service.
Feb  9 08:54:37.661799 systemd[1]: Reached target time-set.target.
Feb  9 08:54:37.679022 systemd-resolved[1131]: Positive Trust Anchors:
Feb  9 08:54:37.679519 systemd-resolved[1131]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb  9 08:54:37.679642 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb  9 08:54:37.687685 systemd-resolved[1131]: Using system hostname 'ci-3510.3.2-e-7e5a76b0b8'.
Feb  9 08:54:37.691047 systemd[1]: Started systemd-resolved.service.
Feb  9 08:54:37.691676 systemd[1]: Reached target network.target.
Feb  9 08:54:37.692130 systemd[1]: Reached target nss-lookup.target.
Feb  9 08:54:37.692595 systemd[1]: Reached target sysinit.target.
Feb  9 08:54:37.693162 systemd[1]: Started motdgen.path.
Feb  9 08:54:37.693635 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb  9 08:54:37.694404 systemd[1]: Started logrotate.timer.
Feb  9 08:54:37.695066 systemd[1]: Started mdadm.timer.
Feb  9 08:54:37.695499 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb  9 08:54:37.695963 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb  9 08:54:37.696026 systemd[1]: Reached target paths.target.
Feb  9 08:54:37.696450 systemd[1]: Reached target timers.target.
Feb  9 08:54:37.697382 systemd[1]: Listening on dbus.socket.
Feb  9 08:54:37.700003 systemd[1]: Starting docker.socket...
Feb  9 08:54:37.702399 systemd[1]: Listening on sshd.socket.
Feb  9 08:54:37.703352 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 08:54:37.703890 systemd[1]: Listening on docker.socket.
Feb  9 08:54:37.704675 systemd[1]: Reached target sockets.target.
Feb  9 08:54:37.705365 systemd[1]: Reached target basic.target.
Feb  9 08:54:37.706161 systemd[1]: System is tainted: cgroupsv1
Feb  9 08:54:37.706217 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 08:54:37.706249 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb  9 08:54:37.708393 systemd[1]: Starting containerd.service...
Feb  9 08:54:37.710824 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb  9 08:54:37.713698 systemd[1]: Starting dbus.service...
Feb  9 08:54:37.717736 systemd[1]: Starting enable-oem-cloudinit.service...
Feb  9 08:54:37.720823 systemd[1]: Starting extend-filesystems.service...
Feb  9 08:54:37.726173 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb  9 08:54:37.732306 systemd[1]: Starting motdgen.service...
Feb  9 08:54:37.736373 systemd[1]: Starting prepare-cni-plugins.service...
Feb  9 08:54:37.741377 systemd[1]: Starting prepare-critools.service...
Feb  9 08:54:37.745161 systemd[1]: Starting prepare-helm.service...
Feb  9 08:54:37.749812 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb  9 08:54:37.755505 systemd[1]: Starting sshd-keygen.service...
Feb  9 08:54:37.769352 systemd[1]: Starting systemd-logind.service...
Feb  9 08:54:37.769966 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb  9 08:54:37.784164 jq[1166]: false
Feb  9 08:54:37.770112 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb  9 08:54:37.774614 systemd[1]: Starting update-engine.service...
Feb  9 08:54:37.778639 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb  9 08:54:37.793283 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb  9 08:54:37.793725 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb  9 08:54:37.811457 jq[1181]: true
Feb  9 08:54:37.799066 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb  9 08:54:37.799402 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb  9 08:54:37.836452 tar[1185]: crictl
Feb  9 08:54:37.836952 tar[1184]: ./
Feb  9 08:54:37.836952 tar[1184]: ./macvlan
Feb  9 08:54:37.848727 tar[1187]: linux-amd64/helm
Feb  9 08:54:37.860816 jq[1195]: true
Feb  9 08:54:37.909121 dbus-daemon[1164]: [system] SELinux support is enabled
Feb  9 08:54:37.910355 systemd[1]: Started dbus.service.
Feb  9 08:54:37.914002 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb  9 08:54:37.914065 systemd[1]: Reached target system-config.target.
Feb  9 08:54:37.914682 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb  9 08:54:37.917140 systemd[1]: Starting user-configdrive.service...
Feb  9 08:54:37.947379 systemd[1]: motdgen.service: Deactivated successfully.
Feb  9 08:54:37.947749 systemd[1]: Finished motdgen.service.
Feb  9 08:54:37.995523 extend-filesystems[1169]: Found vda
Feb  9 08:54:37.996663 extend-filesystems[1169]: Found vda1
Feb  9 08:54:37.996663 extend-filesystems[1169]: Found vda2
Feb  9 08:54:37.996663 extend-filesystems[1169]: Found vda3
Feb  9 08:54:37.996663 extend-filesystems[1169]: Found usr
Feb  9 08:54:37.996663 extend-filesystems[1169]: Found vda4
Feb  9 08:54:38.000734 extend-filesystems[1169]: Found vda6
Feb  9 08:54:38.000734 extend-filesystems[1169]: Found vda7
Feb  9 08:54:38.002487 extend-filesystems[1169]: Found vda9
Feb  9 08:54:38.004641 extend-filesystems[1169]: Checking size of /dev/vda9
Feb  9 08:54:38.023332 bash[1233]: Updated "/home/core/.ssh/authorized_keys"
Feb  9 08:54:38.023507 coreos-cloudinit[1221]: 2024/02/09 08:54:38 Checking availability of "cloud-drive"
Feb  9 08:54:38.023507 coreos-cloudinit[1221]: 2024/02/09 08:54:38 Fetching user-data from datasource of type "cloud-drive"
Feb  9 08:54:38.023507 coreos-cloudinit[1221]: 2024/02/09 08:54:38 Attempting to read from "/media/configdrive/openstack/latest/user_data"
Feb  9 08:54:38.023507 coreos-cloudinit[1221]: 2024/02/09 08:54:38 Fetching meta-data from datasource of type "cloud-drive"
Feb  9 08:54:38.023507 coreos-cloudinit[1221]: 2024/02/09 08:54:38 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json"
Feb  9 08:54:38.004895 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb  9 08:54:38.059522 coreos-cloudinit[1221]: Detected an Ignition config. Exiting...
Feb  9 08:54:38.061011 systemd[1]: Finished user-configdrive.service.
Feb  9 08:54:38.061681 systemd[1]: Reached target user-config.target.
Feb  9 08:54:38.097485 systemd-networkd[1069]: eth1: Gained IPv6LL
Feb  9 08:54:38.101342 update_engine[1180]: I0209 08:54:38.100768  1180 main.cc:92] Flatcar Update Engine starting
Feb  9 08:54:38.107048 systemd[1]: Started update-engine.service.
Feb  9 08:54:38.107374 update_engine[1180]: I0209 08:54:38.107335  1180 update_check_scheduler.cc:74] Next update check in 6m31s
Feb  9 08:54:38.110112 systemd[1]: Started locksmithd.service.
Feb  9 08:54:38.119142 extend-filesystems[1169]: Resized partition /dev/vda9
Feb  9 08:54:38.122303 extend-filesystems[1241]: resize2fs 1.46.5 (30-Dec-2021)
Feb  9 08:54:38.130009 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Feb  9 08:54:38.179897 env[1199]: time="2024-02-09T08:54:38.179820282Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb  9 08:54:38.234019 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Feb  9 08:54:38.253168 coreos-metadata[1163]: Feb 09 08:54:38.253 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb  9 08:54:38.268211 systemd-logind[1179]: Watching system buttons on /dev/input/event1 (Power Button)
Feb  9 08:54:38.268248 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb  9 08:54:38.268540 systemd-logind[1179]: New seat seat0.
Feb  9 08:54:38.275722 extend-filesystems[1241]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb  9 08:54:38.275722 extend-filesystems[1241]: old_desc_blocks = 1, new_desc_blocks = 8
Feb  9 08:54:38.275722 extend-filesystems[1241]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Feb  9 08:54:38.280200 extend-filesystems[1169]: Resized filesystem in /dev/vda9
Feb  9 08:54:38.280200 extend-filesystems[1169]: Found vdb
Feb  9 08:54:38.276246 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb  9 08:54:38.289594 tar[1184]: ./static
Feb  9 08:54:38.289699 coreos-metadata[1163]: Feb 09 08:54:38.279 INFO Fetch successful
Feb  9 08:54:38.276587 systemd[1]: Finished extend-filesystems.service.
Feb  9 08:54:38.280729 systemd[1]: Started systemd-logind.service.
Feb  9 08:54:38.304823 env[1199]: time="2024-02-09T08:54:38.304757674Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb  9 08:54:38.305015 env[1199]: time="2024-02-09T08:54:38.304951290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.308776 unknown[1163]: wrote ssh authorized keys file for user: core
Feb  9 08:54:38.313228 env[1199]: time="2024-02-09T08:54:38.313110902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb  9 08:54:38.313228 env[1199]: time="2024-02-09T08:54:38.313162942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.313578 env[1199]: time="2024-02-09T08:54:38.313545301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 08:54:38.313645 env[1199]: time="2024-02-09T08:54:38.313577861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.313645 env[1199]: time="2024-02-09T08:54:38.313601369Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb  9 08:54:38.313645 env[1199]: time="2024-02-09T08:54:38.313616889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.313764 env[1199]: time="2024-02-09T08:54:38.313721823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.320872 env[1199]: time="2024-02-09T08:54:38.320093283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb  9 08:54:38.320872 env[1199]: time="2024-02-09T08:54:38.320440621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb  9 08:54:38.320872 env[1199]: time="2024-02-09T08:54:38.320468554Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb  9 08:54:38.320872 env[1199]: time="2024-02-09T08:54:38.320560286Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb  9 08:54:38.320872 env[1199]: time="2024-02-09T08:54:38.320579725Z" level=info msg="metadata content store policy set" policy=shared
Feb  9 08:54:38.339507 env[1199]: time="2024-02-09T08:54:38.339445588Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb  9 08:54:38.339507 env[1199]: time="2024-02-09T08:54:38.339510636Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339534155Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339582665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339608564Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339630072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339650909Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339685681Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339706904Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339729206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.339746 env[1199]: time="2024-02-09T08:54:38.339748059Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.340082 env[1199]: time="2024-02-09T08:54:38.339766993Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb  9 08:54:38.340082 env[1199]: time="2024-02-09T08:54:38.339922236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb  9 08:54:38.340153 update-ssh-keys[1248]: Updated "/home/core/.ssh/authorized_keys"
Feb  9 08:54:38.340781 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb  9 08:54:38.342549 env[1199]: time="2024-02-09T08:54:38.342502379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb  9 08:54:38.343130 env[1199]: time="2024-02-09T08:54:38.343095452Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb  9 08:54:38.343213 env[1199]: time="2024-02-09T08:54:38.343149032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343213 env[1199]: time="2024-02-09T08:54:38.343171398Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb  9 08:54:38.343287 env[1199]: time="2024-02-09T08:54:38.343240807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343287 env[1199]: time="2024-02-09T08:54:38.343262274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343369 env[1199]: time="2024-02-09T08:54:38.343288518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343369 env[1199]: time="2024-02-09T08:54:38.343306032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343369 env[1199]: time="2024-02-09T08:54:38.343325285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343369 env[1199]: time="2024-02-09T08:54:38.343344462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343369 env[1199]: time="2024-02-09T08:54:38.343362590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343533 env[1199]: time="2024-02-09T08:54:38.343381428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343533 env[1199]: time="2024-02-09T08:54:38.343401334Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb  9 08:54:38.343614 env[1199]: time="2024-02-09T08:54:38.343593548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343651 env[1199]: time="2024-02-09T08:54:38.343616567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343651 env[1199]: time="2024-02-09T08:54:38.343635845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.343727 env[1199]: time="2024-02-09T08:54:38.343653419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb  9 08:54:38.343727 env[1199]: time="2024-02-09T08:54:38.343676575Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb  9 08:54:38.343727 env[1199]: time="2024-02-09T08:54:38.343693214Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb  9 08:54:38.343727 env[1199]: time="2024-02-09T08:54:38.343719396Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb  9 08:54:38.343861 env[1199]: time="2024-02-09T08:54:38.343765674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb  9 08:54:38.348019 systemd[1]: Started containerd.service.
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.344236210Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.344331500Z" level=info msg="Connect containerd service"
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.344404566Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.347306053Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.347664270Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb  9 08:54:38.351062 env[1199]: time="2024-02-09T08:54:38.347722558Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb  9 08:54:38.371304 env[1199]: time="2024-02-09T08:54:38.371216542Z" level=info msg="Start subscribing containerd event"
Feb  9 08:54:38.371477 env[1199]: time="2024-02-09T08:54:38.371317468Z" level=info msg="Start recovering state"
Feb  9 08:54:38.371477 env[1199]: time="2024-02-09T08:54:38.371425527Z" level=info msg="Start event monitor"
Feb  9 08:54:38.371477 env[1199]: time="2024-02-09T08:54:38.371450488Z" level=info msg="Start snapshots syncer"
Feb  9 08:54:38.371477 env[1199]: time="2024-02-09T08:54:38.371467968Z" level=info msg="Start cni network conf syncer for default"
Feb  9 08:54:38.371613 env[1199]: time="2024-02-09T08:54:38.371480791Z" level=info msg="Start streaming server"
Feb  9 08:54:38.371613 env[1199]: time="2024-02-09T08:54:38.371599494Z" level=info msg="containerd successfully booted in 0.196441s"
Feb  9 08:54:38.402319 tar[1184]: ./vlan
Feb  9 08:54:38.497565 tar[1184]: ./portmap
Feb  9 08:54:38.546250 systemd-networkd[1069]: eth0: Gained IPv6LL
Feb  9 08:54:38.549485 tar[1184]: ./host-local
Feb  9 08:54:38.601209 tar[1184]: ./vrf
Feb  9 08:54:38.684646 tar[1184]: ./bridge
Feb  9 08:54:38.781166 tar[1184]: ./tuning
Feb  9 08:54:38.853940 tar[1184]: ./firewall
Feb  9 08:54:38.917396 systemd[1]: Created slice system-sshd.slice.
Feb  9 08:54:38.961911 tar[1184]: ./host-device
Feb  9 08:54:39.050057 tar[1184]: ./sbr
Feb  9 08:54:39.132120 tar[1184]: ./loopback
Feb  9 08:54:39.209555 tar[1184]: ./dhcp
Feb  9 08:54:39.420211 tar[1184]: ./ptp
Feb  9 08:54:39.535315 tar[1184]: ./ipvlan
Feb  9 08:54:39.548042 tar[1187]: linux-amd64/LICENSE
Feb  9 08:54:39.551490 tar[1187]: linux-amd64/README.md
Feb  9 08:54:39.568590 systemd[1]: Finished prepare-helm.service.
Feb  9 08:54:39.571735 systemd[1]: Finished prepare-critools.service.
Feb  9 08:54:39.611593 tar[1184]: ./bandwidth
Feb  9 08:54:39.673244 systemd[1]: Finished prepare-cni-plugins.service.
Feb  9 08:54:39.728947 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb  9 08:54:39.743346 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb  9 08:54:39.770136 systemd[1]: Finished sshd-keygen.service.
Feb  9 08:54:39.773468 systemd[1]: Starting issuegen.service...
Feb  9 08:54:39.776588 systemd[1]: Started sshd@0-164.90.156.194:22-139.178.89.65:46498.service.
Feb  9 08:54:39.793383 systemd[1]: issuegen.service: Deactivated successfully.
Feb  9 08:54:39.793653 systemd[1]: Finished issuegen.service.
Feb  9 08:54:39.796101 systemd[1]: Starting systemd-user-sessions.service...
Feb  9 08:54:39.806037 systemd[1]: Finished systemd-user-sessions.service.
Feb  9 08:54:39.808184 systemd[1]: Started getty@tty1.service.
Feb  9 08:54:39.810555 systemd[1]: Started serial-getty@ttyS0.service.
Feb  9 08:54:39.811781 systemd[1]: Reached target getty.target.
Feb  9 08:54:39.812510 systemd[1]: Reached target multi-user.target.
Feb  9 08:54:39.814959 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb  9 08:54:39.826109 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb  9 08:54:39.826352 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb  9 08:54:39.829074 systemd[1]: Startup finished in 8.316s (kernel) + 7.712s (userspace) = 16.029s.
Feb  9 08:54:39.883872 sshd[1278]: Accepted publickey for core from 139.178.89.65 port 46498 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:54:39.887106 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:39.898359 systemd[1]: Created slice user-500.slice.
Feb  9 08:54:39.899736 systemd[1]: Starting user-runtime-dir@500.service...
Feb  9 08:54:39.905064 systemd-logind[1179]: New session 1 of user core.
Feb  9 08:54:39.912122 systemd[1]: Finished user-runtime-dir@500.service.
Feb  9 08:54:39.913760 systemd[1]: Starting user@500.service...
Feb  9 08:54:39.918714 (systemd)[1293]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:40.024689 systemd[1293]: Queued start job for default target default.target.
Feb  9 08:54:40.025119 systemd[1293]: Reached target paths.target.
Feb  9 08:54:40.025144 systemd[1293]: Reached target sockets.target.
Feb  9 08:54:40.025161 systemd[1293]: Reached target timers.target.
Feb  9 08:54:40.025177 systemd[1293]: Reached target basic.target.
Feb  9 08:54:40.025251 systemd[1293]: Reached target default.target.
Feb  9 08:54:40.025295 systemd[1293]: Startup finished in 98ms.
Feb  9 08:54:40.026083 systemd[1]: Started user@500.service.
Feb  9 08:54:40.027513 systemd[1]: Started session-1.scope.
Feb  9 08:54:40.089481 systemd[1]: Started sshd@1-164.90.156.194:22-139.178.89.65:46514.service.
Feb  9 08:54:40.145743 sshd[1302]: Accepted publickey for core from 139.178.89.65 port 46514 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:54:40.148038 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:40.153950 systemd[1]: Started session-2.scope.
Feb  9 08:54:40.154509 systemd-logind[1179]: New session 2 of user core.
Feb  9 08:54:40.222706 sshd[1302]: pam_unix(sshd:session): session closed for user core
Feb  9 08:54:40.227240 systemd[1]: Started sshd@2-164.90.156.194:22-139.178.89.65:46528.service.
Feb  9 08:54:40.227808 systemd[1]: sshd@1-164.90.156.194:22-139.178.89.65:46514.service: Deactivated successfully.
Feb  9 08:54:40.229326 systemd[1]: session-2.scope: Deactivated successfully.
Feb  9 08:54:40.229379 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit.
Feb  9 08:54:40.236380 systemd-logind[1179]: Removed session 2.
Feb  9 08:54:40.285596 sshd[1307]: Accepted publickey for core from 139.178.89.65 port 46528 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:54:40.288230 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:40.294584 systemd-logind[1179]: New session 3 of user core.
Feb  9 08:54:40.294919 systemd[1]: Started session-3.scope.
Feb  9 08:54:40.354971 sshd[1307]: pam_unix(sshd:session): session closed for user core
Feb  9 08:54:40.359489 systemd[1]: Started sshd@3-164.90.156.194:22-139.178.89.65:46530.service.
Feb  9 08:54:40.360391 systemd[1]: sshd@2-164.90.156.194:22-139.178.89.65:46528.service: Deactivated successfully.
Feb  9 08:54:40.361779 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit.
Feb  9 08:54:40.362022 systemd[1]: session-3.scope: Deactivated successfully.
Feb  9 08:54:40.369719 systemd-logind[1179]: Removed session 3.
Feb  9 08:54:40.417814 sshd[1315]: Accepted publickey for core from 139.178.89.65 port 46530 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:54:40.419773 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:40.425896 systemd[1]: Started session-4.scope.
Feb  9 08:54:40.426406 systemd-logind[1179]: New session 4 of user core.
Feb  9 08:54:40.493042 sshd[1315]: pam_unix(sshd:session): session closed for user core
Feb  9 08:54:40.498461 systemd[1]: Started sshd@4-164.90.156.194:22-139.178.89.65:46542.service.
Feb  9 08:54:40.499619 systemd[1]: sshd@3-164.90.156.194:22-139.178.89.65:46530.service: Deactivated successfully.
Feb  9 08:54:40.501429 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit.
Feb  9 08:54:40.507342 systemd[1]: session-4.scope: Deactivated successfully.
Feb  9 08:54:40.509592 systemd-logind[1179]: Removed session 4.
Feb  9 08:54:40.561482 sshd[1322]: Accepted publickey for core from 139.178.89.65 port 46542 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:54:40.562858 sshd[1322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:54:40.570738 systemd[1]: Started session-5.scope.
Feb  9 08:54:40.571119 systemd-logind[1179]: New session 5 of user core.
Feb  9 08:54:40.647213 sudo[1327]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb  9 08:54:40.647950 sudo[1327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb  9 08:54:41.447700 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb  9 08:54:41.455597 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb  9 08:54:41.456715 systemd[1]: Reached target network-online.target.
Feb  9 08:54:41.459538 systemd[1]: Starting docker.service...
Feb  9 08:54:41.511584 env[1344]: time="2024-02-09T08:54:41.511499479Z" level=info msg="Starting up"
Feb  9 08:54:41.513659 env[1344]: time="2024-02-09T08:54:41.513620828Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  9 08:54:41.513659 env[1344]: time="2024-02-09T08:54:41.513653423Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  9 08:54:41.513859 env[1344]: time="2024-02-09T08:54:41.513679224Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  9 08:54:41.513859 env[1344]: time="2024-02-09T08:54:41.513694086Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  9 08:54:41.516247 env[1344]: time="2024-02-09T08:54:41.516166837Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb  9 08:54:41.516247 env[1344]: time="2024-02-09T08:54:41.516210218Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb  9 08:54:41.516247 env[1344]: time="2024-02-09T08:54:41.516237819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb  9 08:54:41.516247 env[1344]: time="2024-02-09T08:54:41.516252210Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb  9 08:54:41.599383 env[1344]: time="2024-02-09T08:54:41.599280195Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb  9 08:54:41.599383 env[1344]: time="2024-02-09T08:54:41.599324144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb  9 08:54:41.599793 env[1344]: time="2024-02-09T08:54:41.599558347Z" level=info msg="Loading containers: start."
Feb  9 08:54:41.739007 kernel: Initializing XFRM netlink socket
Feb  9 08:54:41.777812 env[1344]: time="2024-02-09T08:54:41.777761171Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb  9 08:54:41.864258 systemd-networkd[1069]: docker0: Link UP
Feb  9 08:54:41.879671 env[1344]: time="2024-02-09T08:54:41.879628620Z" level=info msg="Loading containers: done."
Feb  9 08:54:41.892749 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2495587508-merged.mount: Deactivated successfully.
Feb  9 08:54:41.900236 env[1344]: time="2024-02-09T08:54:41.900180127Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb  9 08:54:41.900696 env[1344]: time="2024-02-09T08:54:41.900673556Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb  9 08:54:41.900922 env[1344]: time="2024-02-09T08:54:41.900902455Z" level=info msg="Daemon has completed initialization"
Feb  9 08:54:41.926303 systemd[1]: Started docker.service.
Feb  9 08:54:41.934923 env[1344]: time="2024-02-09T08:54:41.934841665Z" level=info msg="API listen on /run/docker.sock"
Feb  9 08:54:41.962014 systemd[1]: Starting coreos-metadata.service...
Feb  9 08:54:42.006383 coreos-metadata[1461]: Feb 09 08:54:42.006 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb  9 08:54:42.018484 coreos-metadata[1461]: Feb 09 08:54:42.018 INFO Fetch successful
Feb  9 08:54:42.032907 systemd[1]: Finished coreos-metadata.service.
Feb  9 08:54:42.050131 systemd[1]: Reloading.
Feb  9 08:54:42.156044 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2024-02-09T08:54:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 08:54:42.156073 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2024-02-09T08:54:42Z" level=info msg="torcx already run"
Feb  9 08:54:42.212865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 08:54:42.212893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 08:54:42.232888 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 08:54:42.319328 systemd[1]: Started kubelet.service.
Feb  9 08:54:42.397608 kubelet[1548]: E0209 08:54:42.397510    1548 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  9 08:54:42.400066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 08:54:42.400266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 08:54:42.984251 env[1199]: time="2024-02-09T08:54:42.984172682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb  9 08:54:43.620450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057957575.mount: Deactivated successfully.
Feb  9 08:54:45.744178 env[1199]: time="2024-02-09T08:54:45.744110707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:45.746343 env[1199]: time="2024-02-09T08:54:45.746293631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:45.748719 env[1199]: time="2024-02-09T08:54:45.748668657Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:45.751114 env[1199]: time="2024-02-09T08:54:45.751075450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:45.752888 env[1199]: time="2024-02-09T08:54:45.752828900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb  9 08:54:45.769043 env[1199]: time="2024-02-09T08:54:45.768995848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb  9 08:54:47.914633 systemd-timesyncd[1134]: Timed out waiting for reply from 173.71.73.214:123 (0.flatcar.pool.ntp.org).
Feb  9 08:54:49.197579 systemd-timesyncd[1134]: Contacted time server 99.119.214.210:123 (0.flatcar.pool.ntp.org).
Feb  9 08:54:49.197660 systemd-timesyncd[1134]: Initial clock synchronization to Fri 2024-02-09 08:54:49.197226 UTC.
Feb  9 08:54:49.197985 systemd-resolved[1131]: Clock change detected. Flushing caches.
Feb  9 08:54:49.590535 env[1199]: time="2024-02-09T08:54:49.590395582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:49.592643 env[1199]: time="2024-02-09T08:54:49.592587665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:49.594474 env[1199]: time="2024-02-09T08:54:49.594437523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:49.597278 env[1199]: time="2024-02-09T08:54:49.597234956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:49.598212 env[1199]: time="2024-02-09T08:54:49.598180087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb  9 08:54:49.617787 env[1199]: time="2024-02-09T08:54:49.617736383Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb  9 08:54:51.073526 env[1199]: time="2024-02-09T08:54:51.073452983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:51.076398 env[1199]: time="2024-02-09T08:54:51.076332148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:51.078840 env[1199]: time="2024-02-09T08:54:51.078727655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:51.081307 env[1199]: time="2024-02-09T08:54:51.081254701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:51.082917 env[1199]: time="2024-02-09T08:54:51.082835112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb  9 08:54:51.103589 env[1199]: time="2024-02-09T08:54:51.103541745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb  9 08:54:52.528565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283560496.mount: Deactivated successfully.
Feb  9 08:54:53.237088 env[1199]: time="2024-02-09T08:54:53.237011197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.239404 env[1199]: time="2024-02-09T08:54:53.239331766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.241550 env[1199]: time="2024-02-09T08:54:53.241505001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.243471 env[1199]: time="2024-02-09T08:54:53.243425744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.244118 env[1199]: time="2024-02-09T08:54:53.244078402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb  9 08:54:53.259635 env[1199]: time="2024-02-09T08:54:53.259586090Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb  9 08:54:53.640824 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb  9 08:54:53.641033 systemd[1]: Stopped kubelet.service.
Feb  9 08:54:53.643076 systemd[1]: Started kubelet.service.
Feb  9 08:54:53.720405 kubelet[1588]: E0209 08:54:53.720292    1588 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  9 08:54:53.725076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 08:54:53.725342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 08:54:53.796901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584279328.mount: Deactivated successfully.
Feb  9 08:54:53.805423 env[1199]: time="2024-02-09T08:54:53.805344614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.808675 env[1199]: time="2024-02-09T08:54:53.808617861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.811165 env[1199]: time="2024-02-09T08:54:53.811113355Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.813665 env[1199]: time="2024-02-09T08:54:53.813605224Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:53.814611 env[1199]: time="2024-02-09T08:54:53.814554240Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb  9 08:54:53.830991 env[1199]: time="2024-02-09T08:54:53.830942763Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb  9 08:54:54.666576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011491888.mount: Deactivated successfully.
Feb  9 08:54:59.637804 env[1199]: time="2024-02-09T08:54:59.637743834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:59.641912 env[1199]: time="2024-02-09T08:54:59.641849473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:59.645012 env[1199]: time="2024-02-09T08:54:59.644951517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:59.647330 env[1199]: time="2024-02-09T08:54:59.647269762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:54:59.648349 env[1199]: time="2024-02-09T08:54:59.648301306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb  9 08:54:59.662020 env[1199]: time="2024-02-09T08:54:59.661973519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb  9 08:55:00.274326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045905414.mount: Deactivated successfully.
Feb  9 08:55:01.118419 env[1199]: time="2024-02-09T08:55:01.117477001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:01.121515 env[1199]: time="2024-02-09T08:55:01.121449133Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:01.124611 env[1199]: time="2024-02-09T08:55:01.124526209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:01.129004 env[1199]: time="2024-02-09T08:55:01.128919721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:01.130152 env[1199]: time="2024-02-09T08:55:01.130055134Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb  9 08:55:03.890951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb  9 08:55:03.891246 systemd[1]: Stopped kubelet.service.
Feb  9 08:55:03.894190 systemd[1]: Started kubelet.service.
Feb  9 08:55:03.990453 kubelet[1666]: E0209 08:55:03.990363    1666 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb  9 08:55:03.993500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb  9 08:55:03.993711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb  9 08:55:05.503643 systemd[1]: Stopped kubelet.service.
Feb  9 08:55:05.534259 systemd[1]: Reloading.
Feb  9 08:55:05.641466 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2024-02-09T08:55:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 08:55:05.642127 /usr/lib/systemd/system-generators/torcx-generator[1696]: time="2024-02-09T08:55:05Z" level=info msg="torcx already run"
Feb  9 08:55:05.803768 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 08:55:05.803802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 08:55:05.831005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 08:55:05.967224 systemd[1]: Started kubelet.service.
Feb  9 08:55:06.057765 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 08:55:06.057765 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 08:55:06.057765 kubelet[1750]: I0209 08:55:06.057433    1750 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 08:55:06.059103 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 08:55:06.059103 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 08:55:06.609542 kubelet[1750]: I0209 08:55:06.609490    1750 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb  9 08:55:06.609828 kubelet[1750]: I0209 08:55:06.609796    1750 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 08:55:06.610281 kubelet[1750]: I0209 08:55:06.610255    1750 server.go:836] "Client rotation is on, will bootstrap in background"
Feb  9 08:55:06.616815 kubelet[1750]: E0209 08:55:06.616715    1750 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.90.156.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.616815 kubelet[1750]: I0209 08:55:06.616816    1750 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 08:55:06.621616 kubelet[1750]: I0209 08:55:06.621525    1750 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 08:55:06.622223 kubelet[1750]: I0209 08:55:06.622173    1750 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 08:55:06.622412 kubelet[1750]: I0209 08:55:06.622288    1750 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb  9 08:55:06.622412 kubelet[1750]: I0209 08:55:06.622316    1750 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb  9 08:55:06.622412 kubelet[1750]: I0209 08:55:06.622330    1750 container_manager_linux.go:308] "Creating device plugin manager"
Feb  9 08:55:06.622675 kubelet[1750]: I0209 08:55:06.622512    1750 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 08:55:06.627550 kubelet[1750]: I0209 08:55:06.627510    1750 kubelet.go:398] "Attempting to sync node with API server"
Feb  9 08:55:06.627840 kubelet[1750]: I0209 08:55:06.627818    1750 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 08:55:06.628194 kubelet[1750]: I0209 08:55:06.628170    1750 kubelet.go:297] "Adding apiserver pod source"
Feb  9 08:55:06.628354 kubelet[1750]: I0209 08:55:06.628340    1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 08:55:06.630004 kubelet[1750]: W0209 08:55:06.629934    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.90.156.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.630202 kubelet[1750]: E0209 08:55:06.630185    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.90.156.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.630493 kubelet[1750]: I0209 08:55:06.630474    1750 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 08:55:06.631039 kubelet[1750]: W0209 08:55:06.631015    1750 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb  9 08:55:06.631739 kubelet[1750]: I0209 08:55:06.631712    1750 server.go:1186] "Started kubelet"
Feb  9 08:55:06.635843 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb  9 08:55:06.636261 kubelet[1750]: I0209 08:55:06.636228    1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 08:55:06.637322 kubelet[1750]: W0209 08:55:06.636558    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://164.90.156.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-e-7e5a76b0b8&limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.637322 kubelet[1750]: E0209 08:55:06.636623    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.90.156.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-e-7e5a76b0b8&limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.637555 kubelet[1750]: E0209 08:55:06.636690    1750 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-e-7e5a76b0b8.17b225ec8cce512b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-e-7e5a76b0b8", UID:"ci-3510.3.2-e-7e5a76b0b8", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-e-7e5a76b0b8"}, FirstTimestamp:time.Date(2024, time.February, 9, 8, 55, 6, 631680299, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 8, 55, 6, 631680299, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://164.90.156.194:6443/api/v1/namespaces/default/events": dial tcp 164.90.156.194:6443: connect: connection refused'(may retry after sleeping)
Feb  9 08:55:06.637555 kubelet[1750]: I0209 08:55:06.637055    1750 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 08:55:06.638001 kubelet[1750]: I0209 08:55:06.637969    1750 server.go:451] "Adding debug handlers to kubelet server"
Feb  9 08:55:06.641014 kubelet[1750]: E0209 08:55:06.640966    1750 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 08:55:06.641014 kubelet[1750]: E0209 08:55:06.641009    1750 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 08:55:06.646853 kubelet[1750]: I0209 08:55:06.646805    1750 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb  9 08:55:06.648360 kubelet[1750]: I0209 08:55:06.648321    1750 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 08:55:06.652575 kubelet[1750]: W0209 08:55:06.652514    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.90.156.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.652802 kubelet[1750]: E0209 08:55:06.652785    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.90.156.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.653295 kubelet[1750]: E0209 08:55:06.653265    1750 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://164.90.156.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-e-7e5a76b0b8?timeout=10s": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.708010 kubelet[1750]: I0209 08:55:06.707964    1750 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 08:55:06.708417 kubelet[1750]: I0209 08:55:06.708398    1750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 08:55:06.708601 kubelet[1750]: I0209 08:55:06.708584    1750 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 08:55:06.718832 kubelet[1750]: I0209 08:55:06.718793    1750 policy_none.go:49] "None policy: Start"
Feb  9 08:55:06.720424 kubelet[1750]: I0209 08:55:06.720393    1750 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 08:55:06.720653 kubelet[1750]: I0209 08:55:06.720639    1750 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 08:55:06.736818 kubelet[1750]: I0209 08:55:06.736774    1750 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 08:55:06.737455 kubelet[1750]: I0209 08:55:06.737424    1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 08:55:06.740108 kubelet[1750]: E0209 08:55:06.740008    1750 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-e-7e5a76b0b8\" not found"
Feb  9 08:55:06.745136 kubelet[1750]: I0209 08:55:06.745097    1750 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb  9 08:55:06.749701 kubelet[1750]: I0209 08:55:06.749614    1750 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.750068 kubelet[1750]: E0209 08:55:06.750040    1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.156.194:6443/api/v1/nodes\": dial tcp 164.90.156.194:6443: connect: connection refused" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.785688 kubelet[1750]: I0209 08:55:06.785632    1750 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb  9 08:55:06.785981 kubelet[1750]: I0209 08:55:06.785960    1750 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb  9 08:55:06.786484 kubelet[1750]: I0209 08:55:06.786419    1750 kubelet.go:2113] "Starting kubelet main sync loop"
Feb  9 08:55:06.786609 kubelet[1750]: E0209 08:55:06.786516    1750 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb  9 08:55:06.787546 kubelet[1750]: W0209 08:55:06.787095    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.90.156.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.787546 kubelet[1750]: E0209 08:55:06.787140    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.90.156.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.854701 kubelet[1750]: E0209 08:55:06.854590    1750 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://164.90.156.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-e-7e5a76b0b8?timeout=10s": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:06.888817 kubelet[1750]: I0209 08:55:06.886673    1750 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:06.889794 kubelet[1750]: I0209 08:55:06.889724    1750 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:06.891256 kubelet[1750]: I0209 08:55:06.891228    1750 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:06.896728 kubelet[1750]: I0209 08:55:06.896692    1750 status_manager.go:698] "Failed to get status for pod" podUID=0ed01b10bf528f9ccfc1b65408d6d2ba pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:06.896990 kubelet[1750]: I0209 08:55:06.896966    1750 status_manager.go:698] "Failed to get status for pod" podUID=159bc9733865df590745fab523bd0ff1 pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:06.897187 kubelet[1750]: I0209 08:55:06.897168    1750 status_manager.go:698] "Failed to get status for pod" podUID=a128f3446e9066763bd12019ca5fbe03 pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:06.950620 kubelet[1750]: I0209 08:55:06.950552    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950620 kubelet[1750]: I0209 08:55:06.950639    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950902 kubelet[1750]: I0209 08:55:06.950680    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950902 kubelet[1750]: I0209 08:55:06.950718    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950902 kubelet[1750]: I0209 08:55:06.950756    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950902 kubelet[1750]: I0209 08:55:06.950786    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.950902 kubelet[1750]: I0209 08:55:06.950820    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.951081 kubelet[1750]: I0209 08:55:06.950852    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.951081 kubelet[1750]: I0209 08:55:06.950882    1750 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a128f3446e9066763bd12019ca5fbe03-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"a128f3446e9066763bd12019ca5fbe03\") " pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.952227 kubelet[1750]: I0209 08:55:06.952186    1750 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:06.952837 kubelet[1750]: E0209 08:55:06.952810    1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.156.194:6443/api/v1/nodes\": dial tcp 164.90.156.194:6443: connect: connection refused" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:07.197198 kubelet[1750]: E0209 08:55:07.197142    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:07.197985 env[1199]: time="2024-02-09T08:55:07.197933478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8,Uid:0ed01b10bf528f9ccfc1b65408d6d2ba,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:07.201990 kubelet[1750]: E0209 08:55:07.201939    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:07.202657 env[1199]: time="2024-02-09T08:55:07.202594725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-e-7e5a76b0b8,Uid:a128f3446e9066763bd12019ca5fbe03,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:07.204466 kubelet[1750]: E0209 08:55:07.204415    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:07.205074 env[1199]: time="2024-02-09T08:55:07.205024203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-e-7e5a76b0b8,Uid:159bc9733865df590745fab523bd0ff1,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:07.256362 kubelet[1750]: E0209 08:55:07.256291    1750 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://164.90.156.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-e-7e5a76b0b8?timeout=10s": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:07.354754 kubelet[1750]: I0209 08:55:07.354721    1750 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:07.355327 kubelet[1750]: E0209 08:55:07.355293    1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.156.194:6443/api/v1/nodes\": dial tcp 164.90.156.194:6443: connect: connection refused" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:07.776994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426464348.mount: Deactivated successfully.
Feb  9 08:55:07.787058 env[1199]: time="2024-02-09T08:55:07.786934903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.795417 env[1199]: time="2024-02-09T08:55:07.795341716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.796839 env[1199]: time="2024-02-09T08:55:07.796786579Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.798707 env[1199]: time="2024-02-09T08:55:07.798639097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.803641 env[1199]: time="2024-02-09T08:55:07.803425671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.805335 env[1199]: time="2024-02-09T08:55:07.805262359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.808483 env[1199]: time="2024-02-09T08:55:07.808419779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.813455 env[1199]: time="2024-02-09T08:55:07.813406533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.816535 env[1199]: time="2024-02-09T08:55:07.816473655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.820251 env[1199]: time="2024-02-09T08:55:07.820181750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.822718 env[1199]: time="2024-02-09T08:55:07.822645423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.825846 env[1199]: time="2024-02-09T08:55:07.825786565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:07.882646 kubelet[1750]: W0209 08:55:07.882587    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.90.156.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:07.882646 kubelet[1750]: E0209 08:55:07.882647    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.90.156.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:07.901157 env[1199]: time="2024-02-09T08:55:07.901020448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:07.901331 env[1199]: time="2024-02-09T08:55:07.901175124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:07.901331 env[1199]: time="2024-02-09T08:55:07.901216432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:07.901705 env[1199]: time="2024-02-09T08:55:07.901647610Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c83f59a11b235326cda1140b06543f838ee9ce5c91c5b598e2c5d61d76b6d93 pid=1831 runtime=io.containerd.runc.v2
Feb  9 08:55:07.906385 env[1199]: time="2024-02-09T08:55:07.906239010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:07.906385 env[1199]: time="2024-02-09T08:55:07.906329567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:07.906654 env[1199]: time="2024-02-09T08:55:07.906347233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:07.909496 env[1199]: time="2024-02-09T08:55:07.909047943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/806d4930afcb20db9c1f9891d30276dbe26613da1c35697170b0503ad470c2cf pid=1838 runtime=io.containerd.runc.v2
Feb  9 08:55:07.914668 env[1199]: time="2024-02-09T08:55:07.914545901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:07.914668 env[1199]: time="2024-02-09T08:55:07.914606370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:07.914668 env[1199]: time="2024-02-09T08:55:07.914623881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:07.915155 env[1199]: time="2024-02-09T08:55:07.915086215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db00dcbaa249a179fc526634119f32e6d8ccf6aed84328bc0a958457c125822a pid=1856 runtime=io.containerd.runc.v2
Feb  9 08:55:07.982997 kubelet[1750]: W0209 08:55:07.982938    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.90.156.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:07.982997 kubelet[1750]: E0209 08:55:07.983002    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.90.156.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.060976 kubelet[1750]: E0209 08:55:08.060831    1750 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://164.90.156.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-e-7e5a76b0b8?timeout=10s": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.063435 env[1199]: time="2024-02-09T08:55:08.063331138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8,Uid:0ed01b10bf528f9ccfc1b65408d6d2ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c83f59a11b235326cda1140b06543f838ee9ce5c91c5b598e2c5d61d76b6d93\""
Feb  9 08:55:08.065026 env[1199]: time="2024-02-09T08:55:08.064987928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-e-7e5a76b0b8,Uid:a128f3446e9066763bd12019ca5fbe03,Namespace:kube-system,Attempt:0,} returns sandbox id \"806d4930afcb20db9c1f9891d30276dbe26613da1c35697170b0503ad470c2cf\""
Feb  9 08:55:08.065362 kubelet[1750]: E0209 08:55:08.065325    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.066793 kubelet[1750]: E0209 08:55:08.066761    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.071642 env[1199]: time="2024-02-09T08:55:08.071588576Z" level=info msg="CreateContainer within sandbox \"2c83f59a11b235326cda1140b06543f838ee9ce5c91c5b598e2c5d61d76b6d93\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb  9 08:55:08.072277 env[1199]: time="2024-02-09T08:55:08.072231832Z" level=info msg="CreateContainer within sandbox \"806d4930afcb20db9c1f9891d30276dbe26613da1c35697170b0503ad470c2cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb  9 08:55:08.075642 kubelet[1750]: E0209 08:55:08.075513    1750 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-e-7e5a76b0b8.17b225ec8cce512b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-e-7e5a76b0b8", UID:"ci-3510.3.2-e-7e5a76b0b8", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-e-7e5a76b0b8"}, FirstTimestamp:time.Date(2024, time.February, 9, 8, 55, 6, 631680299, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 8, 55, 6, 631680299, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://164.90.156.194:6443/api/v1/namespaces/default/events": dial tcp 164.90.156.194:6443: connect: connection refused'(may retry after sleeping)
Feb  9 08:55:08.078633 env[1199]: time="2024-02-09T08:55:08.078586538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-e-7e5a76b0b8,Uid:159bc9733865df590745fab523bd0ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"db00dcbaa249a179fc526634119f32e6d8ccf6aed84328bc0a958457c125822a\""
Feb  9 08:55:08.079951 kubelet[1750]: E0209 08:55:08.079723    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.084694 env[1199]: time="2024-02-09T08:55:08.084577662Z" level=info msg="CreateContainer within sandbox \"db00dcbaa249a179fc526634119f32e6d8ccf6aed84328bc0a958457c125822a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb  9 08:55:08.106651 env[1199]: time="2024-02-09T08:55:08.106587308Z" level=info msg="CreateContainer within sandbox \"2c83f59a11b235326cda1140b06543f838ee9ce5c91c5b598e2c5d61d76b6d93\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"56fb9d9e4535c7c326ee9fc671634eaf7844d3422c2e2d0ad07c8ab2f2d84962\""
Feb  9 08:55:08.107996 env[1199]: time="2024-02-09T08:55:08.107861540Z" level=info msg="StartContainer for \"56fb9d9e4535c7c326ee9fc671634eaf7844d3422c2e2d0ad07c8ab2f2d84962\""
Feb  9 08:55:08.117563 kubelet[1750]: W0209 08:55:08.117423    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.90.156.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.117563 kubelet[1750]: E0209 08:55:08.117507    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.90.156.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.121362 env[1199]: time="2024-02-09T08:55:08.121283414Z" level=info msg="CreateContainer within sandbox \"806d4930afcb20db9c1f9891d30276dbe26613da1c35697170b0503ad470c2cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abb3bd11b3b81df743e3fde678e6c0d4a77322f89a49b872e71f239c659edf4f\""
Feb  9 08:55:08.122219 env[1199]: time="2024-02-09T08:55:08.122175153Z" level=info msg="StartContainer for \"abb3bd11b3b81df743e3fde678e6c0d4a77322f89a49b872e71f239c659edf4f\""
Feb  9 08:55:08.127825 env[1199]: time="2024-02-09T08:55:08.127759309Z" level=info msg="CreateContainer within sandbox \"db00dcbaa249a179fc526634119f32e6d8ccf6aed84328bc0a958457c125822a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fbffdf75a45cdc4a58c1d81da167d73b031d460304337819c1096cc943eae3e0\""
Feb  9 08:55:08.128889 env[1199]: time="2024-02-09T08:55:08.128822553Z" level=info msg="StartContainer for \"fbffdf75a45cdc4a58c1d81da167d73b031d460304337819c1096cc943eae3e0\""
Feb  9 08:55:08.153880 kubelet[1750]: W0209 08:55:08.153802    1750 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://164.90.156.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-e-7e5a76b0b8&limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.153880 kubelet[1750]: E0209 08:55:08.153881    1750 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.90.156.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-e-7e5a76b0b8&limit=500&resourceVersion=0": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.158747 kubelet[1750]: I0209 08:55:08.157046    1750 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:08.158747 kubelet[1750]: E0209 08:55:08.157948    1750 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.90.156.194:6443/api/v1/nodes\": dial tcp 164.90.156.194:6443: connect: connection refused" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:08.253057 env[1199]: time="2024-02-09T08:55:08.252890858Z" level=info msg="StartContainer for \"56fb9d9e4535c7c326ee9fc671634eaf7844d3422c2e2d0ad07c8ab2f2d84962\" returns successfully"
Feb  9 08:55:08.287472 env[1199]: time="2024-02-09T08:55:08.287390332Z" level=info msg="StartContainer for \"fbffdf75a45cdc4a58c1d81da167d73b031d460304337819c1096cc943eae3e0\" returns successfully"
Feb  9 08:55:08.304977 env[1199]: time="2024-02-09T08:55:08.304904473Z" level=info msg="StartContainer for \"abb3bd11b3b81df743e3fde678e6c0d4a77322f89a49b872e71f239c659edf4f\" returns successfully"
Feb  9 08:55:08.794301 kubelet[1750]: E0209 08:55:08.794270    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.794940 kubelet[1750]: I0209 08:55:08.794615    1750 status_manager.go:698] "Failed to get status for pod" podUID=a128f3446e9066763bd12019ca5fbe03 pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:08.797212 kubelet[1750]: E0209 08:55:08.797184    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.797667 kubelet[1750]: I0209 08:55:08.797631    1750 status_manager.go:698] "Failed to get status for pod" podUID=159bc9733865df590745fab523bd0ff1 pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:08.801205 kubelet[1750]: E0209 08:55:08.801175    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:08.809383 kubelet[1750]: E0209 08:55:08.809327    1750 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.90.156.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.90.156.194:6443: connect: connection refused
Feb  9 08:55:08.829922 kubelet[1750]: I0209 08:55:08.829873    1750 status_manager.go:698] "Failed to get status for pod" podUID=0ed01b10bf528f9ccfc1b65408d6d2ba pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8" err="Get \"https://164.90.156.194:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\": dial tcp 164.90.156.194:6443: connect: connection refused"
Feb  9 08:55:09.759737 kubelet[1750]: I0209 08:55:09.759695    1750 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:09.804158 kubelet[1750]: E0209 08:55:09.803483    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:09.804158 kubelet[1750]: E0209 08:55:09.804079    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:09.811406 kubelet[1750]: E0209 08:55:09.811319    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:10.805470 kubelet[1750]: E0209 08:55:10.805432    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:10.807059 kubelet[1750]: E0209 08:55:10.807031    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:11.668980 kubelet[1750]: I0209 08:55:11.668933    1750 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:12.594555 kubelet[1750]: E0209 08:55:12.594509    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:12.632142 kubelet[1750]: I0209 08:55:12.631895    1750 apiserver.go:52] "Watching apiserver"
Feb  9 08:55:12.650034 kubelet[1750]: I0209 08:55:12.649984    1750 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 08:55:12.689989 kubelet[1750]: I0209 08:55:12.689949    1750 reconciler.go:41] "Reconciler: start to sync state"
Feb  9 08:55:12.809474 kubelet[1750]: E0209 08:55:12.809447    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:13.728418 kubelet[1750]: E0209 08:55:13.728327    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:13.811602 kubelet[1750]: E0209 08:55:13.811560    1750 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:14.623740 systemd[1]: Reloading.
Feb  9 08:55:14.734950 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-02-09T08:55:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb  9 08:55:14.735527 /usr/lib/systemd/system-generators/torcx-generator[2080]: time="2024-02-09T08:55:14Z" level=info msg="torcx already run"
Feb  9 08:55:14.859725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb  9 08:55:14.859762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb  9 08:55:14.892677 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb  9 08:55:15.024814 systemd[1]: Stopping kubelet.service...
Feb  9 08:55:15.043240 systemd[1]: kubelet.service: Deactivated successfully.
Feb  9 08:55:15.043748 systemd[1]: Stopped kubelet.service.
Feb  9 08:55:15.046988 systemd[1]: Started kubelet.service.
Feb  9 08:55:15.168842 sudo[2143]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb  9 08:55:15.169555 sudo[2143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb  9 08:55:15.185568 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 08:55:15.186136 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 08:55:15.186414 kubelet[2133]: I0209 08:55:15.186326    2133 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb  9 08:55:15.188651 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb  9 08:55:15.194589 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb  9 08:55:15.200582 kubelet[2133]: I0209 08:55:15.200536    2133 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb  9 08:55:15.200582 kubelet[2133]: I0209 08:55:15.200578    2133 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb  9 08:55:15.200958 kubelet[2133]: I0209 08:55:15.200931    2133 server.go:836] "Client rotation is on, will bootstrap in background"
Feb  9 08:55:15.203170 kubelet[2133]: I0209 08:55:15.203138    2133 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb  9 08:55:15.209403 kubelet[2133]: I0209 08:55:15.207833    2133 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.210362    2133 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.211640    2133 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.211757    2133 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.211790    2133 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.211809    2133 container_manager_linux.go:308] "Creating device plugin manager"
Feb  9 08:55:15.212148 kubelet[2133]: I0209 08:55:15.211970    2133 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 08:55:15.217942 kubelet[2133]: I0209 08:55:15.217910    2133 kubelet.go:398] "Attempting to sync node with API server"
Feb  9 08:55:15.217942 kubelet[2133]: I0209 08:55:15.217956    2133 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb  9 08:55:15.218165 kubelet[2133]: I0209 08:55:15.217992    2133 kubelet.go:297] "Adding apiserver pod source"
Feb  9 08:55:15.218165 kubelet[2133]: I0209 08:55:15.218014    2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb  9 08:55:15.233404 kubelet[2133]: I0209 08:55:15.229796    2133 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb  9 08:55:15.237428 kubelet[2133]: I0209 08:55:15.231992    2133 server.go:1186] "Started kubelet"
Feb  9 08:55:15.237606 kubelet[2133]: I0209 08:55:15.237515    2133 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb  9 08:55:15.243880 kubelet[2133]: E0209 08:55:15.240913    2133 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb  9 08:55:15.243880 kubelet[2133]: E0209 08:55:15.240960    2133 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb  9 08:55:15.243880 kubelet[2133]: I0209 08:55:15.243275    2133 server.go:451] "Adding debug handlers to kubelet server"
Feb  9 08:55:15.251390 kubelet[2133]: I0209 08:55:15.234184    2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb  9 08:55:15.258707 kubelet[2133]: I0209 08:55:15.258653    2133 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb  9 08:55:15.264248 kubelet[2133]: I0209 08:55:15.260434    2133 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb  9 08:55:15.361827 kubelet[2133]: I0209 08:55:15.361781    2133 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.386824 kubelet[2133]: I0209 08:55:15.386775    2133 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.386992 kubelet[2133]: I0209 08:55:15.386887    2133 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.510579 kubelet[2133]: I0209 08:55:15.510466    2133 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb  9 08:55:15.538315 kubelet[2133]: I0209 08:55:15.538219    2133 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb  9 08:55:15.538315 kubelet[2133]: I0209 08:55:15.538311    2133 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb  9 08:55:15.538534 kubelet[2133]: I0209 08:55:15.538347    2133 state_mem.go:36] "Initialized new in-memory state store"
Feb  9 08:55:15.545398 kubelet[2133]: I0209 08:55:15.540023    2133 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb  9 08:55:15.545398 kubelet[2133]: I0209 08:55:15.540064    2133 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb  9 08:55:15.545398 kubelet[2133]: I0209 08:55:15.540075    2133 policy_none.go:49] "None policy: Start"
Feb  9 08:55:15.546776 kubelet[2133]: I0209 08:55:15.546741    2133 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb  9 08:55:15.546776 kubelet[2133]: I0209 08:55:15.546783    2133 state_mem.go:35] "Initializing new in-memory state store"
Feb  9 08:55:15.547611 kubelet[2133]: I0209 08:55:15.547573    2133 state_mem.go:75] "Updated machine memory state"
Feb  9 08:55:15.551391 kubelet[2133]: I0209 08:55:15.551333    2133 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb  9 08:55:15.563536 kubelet[2133]: I0209 08:55:15.563480    2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb  9 08:55:15.605505 kubelet[2133]: I0209 08:55:15.605466    2133 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb  9 08:55:15.605722 kubelet[2133]: I0209 08:55:15.605709    2133 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb  9 08:55:15.605812 kubelet[2133]: I0209 08:55:15.605801    2133 kubelet.go:2113] "Starting kubelet main sync loop"
Feb  9 08:55:15.605926 kubelet[2133]: E0209 08:55:15.605918    2133 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb  9 08:55:15.706768 kubelet[2133]: I0209 08:55:15.706712    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:15.706980 kubelet[2133]: I0209 08:55:15.706858    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:15.706980 kubelet[2133]: I0209 08:55:15.706908    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:15.721261 kubelet[2133]: E0209 08:55:15.721215    2133 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.725462 kubelet[2133]: E0209 08:55:15.725422    2133 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.772988 kubelet[2133]: I0209 08:55:15.772866    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.772988 kubelet[2133]: I0209 08:55:15.772955    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773176 kubelet[2133]: I0209 08:55:15.772999    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773176 kubelet[2133]: I0209 08:55:15.773047    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/159bc9733865df590745fab523bd0ff1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"159bc9733865df590745fab523bd0ff1\") " pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773176 kubelet[2133]: I0209 08:55:15.773128    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773268 kubelet[2133]: I0209 08:55:15.773213    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773268 kubelet[2133]: I0209 08:55:15.773257    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773322 kubelet[2133]: I0209 08:55:15.773301    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ed01b10bf528f9ccfc1b65408d6d2ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"0ed01b10bf528f9ccfc1b65408d6d2ba\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:15.773359 kubelet[2133]: I0209 08:55:15.773349    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a128f3446e9066763bd12019ca5fbe03-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\" (UID: \"a128f3446e9066763bd12019ca5fbe03\") " pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:16.020799 kubelet[2133]: E0209 08:55:16.020754    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:16.023594 kubelet[2133]: E0209 08:55:16.023344    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:16.028156 sudo[2143]: pam_unix(sudo:session): session closed for user root
Feb  9 08:55:16.029665 kubelet[2133]: E0209 08:55:16.029637    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:16.229805 kubelet[2133]: I0209 08:55:16.229763    2133 apiserver.go:52] "Watching apiserver"
Feb  9 08:55:16.261688 kubelet[2133]: I0209 08:55:16.261639    2133 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb  9 08:55:16.276149 kubelet[2133]: I0209 08:55:16.275841    2133 reconciler.go:41] "Reconciler: start to sync state"
Feb  9 08:55:16.634473 kubelet[2133]: E0209 08:55:16.634333    2133 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-e-7e5a76b0b8\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:16.635110 kubelet[2133]: E0209 08:55:16.635083    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:16.827070 kubelet[2133]: E0209 08:55:16.827020    2133 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:16.827537 kubelet[2133]: E0209 08:55:16.827514    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:17.029653 kubelet[2133]: E0209 08:55:17.029617    2133 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-e-7e5a76b0b8\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8"
Feb  9 08:55:17.030343 kubelet[2133]: E0209 08:55:17.030321    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:17.557810 sudo[1327]: pam_unix(sudo:session): session closed for user root
Feb  9 08:55:17.562886 sshd[1322]: pam_unix(sshd:session): session closed for user core
Feb  9 08:55:17.566987 systemd[1]: sshd@4-164.90.156.194:22-139.178.89.65:46542.service: Deactivated successfully.
Feb  9 08:55:17.568815 systemd[1]: session-5.scope: Deactivated successfully.
Feb  9 08:55:17.568819 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit.
Feb  9 08:55:17.570151 systemd-logind[1179]: Removed session 5.
Feb  9 08:55:17.627646 kubelet[2133]: I0209 08:55:17.627585    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-e-7e5a76b0b8" podStartSLOduration=5.6270196519999995 pod.CreationTimestamp="2024-02-09 08:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:17.238734719 +0000 UTC m=+2.181579298" watchObservedRunningTime="2024-02-09 08:55:17.627019652 +0000 UTC m=+2.569864210"
Feb  9 08:55:17.629463 kubelet[2133]: E0209 08:55:17.629436    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:17.629961 kubelet[2133]: E0209 08:55:17.629925    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:17.630724 kubelet[2133]: E0209 08:55:17.630706    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:18.031501 kubelet[2133]: I0209 08:55:18.031233    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-e-7e5a76b0b8" podStartSLOduration=3.031187032 pod.CreationTimestamp="2024-02-09 08:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:17.628043362 +0000 UTC m=+2.570887938" watchObservedRunningTime="2024-02-09 08:55:18.031187032 +0000 UTC m=+2.974031612"
Feb  9 08:55:18.630995 kubelet[2133]: E0209 08:55:18.630956    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:19.546743 kubelet[2133]: E0209 08:55:19.546703    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:19.563185 kubelet[2133]: I0209 08:55:19.563138    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-e-7e5a76b0b8" podStartSLOduration=6.563086104 pod.CreationTimestamp="2024-02-09 08:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:18.031607783 +0000 UTC m=+2.974452363" watchObservedRunningTime="2024-02-09 08:55:19.563086104 +0000 UTC m=+4.505930687"
Feb  9 08:55:19.631858 kubelet[2133]: E0209 08:55:19.631823    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:20.300937 kubelet[2133]: E0209 08:55:20.300895    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:20.633842 kubelet[2133]: E0209 08:55:20.633744    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:22.395627 kubelet[2133]: E0209 08:55:22.393667    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:22.637051 kubelet[2133]: E0209 08:55:22.636998    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:23.643085 kubelet[2133]: E0209 08:55:23.643026    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:25.032009 update_engine[1180]: I0209 08:55:25.031576  1180 update_attempter.cc:509] Updating boot flags...
Feb  9 08:55:28.675612 kubelet[2133]: I0209 08:55:28.675578    2133 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb  9 08:55:28.676950 env[1199]: time="2024-02-09T08:55:28.676890161Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb  9 08:55:28.677639 kubelet[2133]: I0209 08:55:28.677613    2133 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb  9 08:55:28.851319 kubelet[2133]: I0209 08:55:28.851279    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:28.853513 kubelet[2133]: I0209 08:55:28.853471    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949774    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-xtables-lock\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949821    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-cgroup\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949847    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-config-path\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949866    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-net\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949892    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7bc2449-dd33-4f19-8be6-def75967e05b-kube-proxy\") pod \"kube-proxy-29ntt\" (UID: \"b7bc2449-dd33-4f19-8be6-def75967e05b\") " pod="kube-system/kube-proxy-29ntt"
Feb  9 08:55:28.949902 kubelet[2133]: I0209 08:55:28.949911    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7bc2449-dd33-4f19-8be6-def75967e05b-lib-modules\") pod \"kube-proxy-29ntt\" (UID: \"b7bc2449-dd33-4f19-8be6-def75967e05b\") " pod="kube-system/kube-proxy-29ntt"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.949931    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-kernel\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.949949    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7bc2449-dd33-4f19-8be6-def75967e05b-xtables-lock\") pod \"kube-proxy-29ntt\" (UID: \"b7bc2449-dd33-4f19-8be6-def75967e05b\") " pod="kube-system/kube-proxy-29ntt"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.949968    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-run\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.949985    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-etc-cni-netd\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.950004    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-lib-modules\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950251 kubelet[2133]: I0209 08:55:28.950026    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cni-path\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950044    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-hubble-tls\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950063    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvgz\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950080    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-bpf-maps\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950097    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-hostproc\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950117    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6d75155-37fc-484f-9681-1b8003bc5516-clustermesh-secrets\") pod \"cilium-r8g4b\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") " pod="kube-system/cilium-r8g4b"
Feb  9 08:55:28.950461 kubelet[2133]: I0209 08:55:28.950140    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrbv\" (UniqueName: \"kubernetes.io/projected/b7bc2449-dd33-4f19-8be6-def75967e05b-kube-api-access-rqrbv\") pod \"kube-proxy-29ntt\" (UID: \"b7bc2449-dd33-4f19-8be6-def75967e05b\") " pod="kube-system/kube-proxy-29ntt"
Feb  9 08:55:29.076074 kubelet[2133]: E0209 08:55:29.074982    2133 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.076074 kubelet[2133]: E0209 08:55:29.075052    2133 projected.go:198] Error preparing data for projected volume kube-api-access-rqrbv for pod kube-system/kube-proxy-29ntt: configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.076074 kubelet[2133]: E0209 08:55:29.075136    2133 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7bc2449-dd33-4f19-8be6-def75967e05b-kube-api-access-rqrbv podName:b7bc2449-dd33-4f19-8be6-def75967e05b nodeName:}" failed. No retries permitted until 2024-02-09 08:55:29.575108326 +0000 UTC m=+14.517952883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rqrbv" (UniqueName: "kubernetes.io/projected/b7bc2449-dd33-4f19-8be6-def75967e05b-kube-api-access-rqrbv") pod "kube-proxy-29ntt" (UID: "b7bc2449-dd33-4f19-8be6-def75967e05b") : configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.076768 kubelet[2133]: E0209 08:55:29.076746    2133 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.076925 kubelet[2133]: E0209 08:55:29.076897    2133 projected.go:198] Error preparing data for projected volume kube-api-access-fkvgz for pod kube-system/cilium-r8g4b: configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.077041 kubelet[2133]: E0209 08:55:29.077030    2133 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz podName:b6d75155-37fc-484f-9681-1b8003bc5516 nodeName:}" failed. No retries permitted until 2024-02-09 08:55:29.577011431 +0000 UTC m=+14.519855989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fkvgz" (UniqueName: "kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz") pod "cilium-r8g4b" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516") : configmap "kube-root-ca.crt" not found
Feb  9 08:55:29.649535 kubelet[2133]: I0209 08:55:29.649478    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:29.756743 kubelet[2133]: E0209 08:55:29.756697    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:29.757783 kubelet[2133]: E0209 08:55:29.757480    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:29.758060 kubelet[2133]: I0209 08:55:29.757647    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0817e9c-98c1-4901-82a5-dc438f5090ef-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-pv9jg\" (UID: \"e0817e9c-98c1-4901-82a5-dc438f5090ef\") " pod="kube-system/cilium-operator-f59cbd8c6-pv9jg"
Feb  9 08:55:29.758147 kubelet[2133]: I0209 08:55:29.758118    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48cc\" (UniqueName: \"kubernetes.io/projected/e0817e9c-98c1-4901-82a5-dc438f5090ef-kube-api-access-r48cc\") pod \"cilium-operator-f59cbd8c6-pv9jg\" (UID: \"e0817e9c-98c1-4901-82a5-dc438f5090ef\") " pod="kube-system/cilium-operator-f59cbd8c6-pv9jg"
Feb  9 08:55:29.759402 env[1199]: time="2024-02-09T08:55:29.759031830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29ntt,Uid:b7bc2449-dd33-4f19-8be6-def75967e05b,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:29.759798 env[1199]: time="2024-02-09T08:55:29.759398956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8g4b,Uid:b6d75155-37fc-484f-9681-1b8003bc5516,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:29.789211 env[1199]: time="2024-02-09T08:55:29.789095338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:29.789423 env[1199]: time="2024-02-09T08:55:29.789222328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:29.789423 env[1199]: time="2024-02-09T08:55:29.789264700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:29.789584 env[1199]: time="2024-02-09T08:55:29.789529023Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0374944b7e375ea992a43b0118bb547277a845944c90d6372d11da55c62c1e17 pid=2254 runtime=io.containerd.runc.v2
Feb  9 08:55:29.801058 env[1199]: time="2024-02-09T08:55:29.800946928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:29.801246 env[1199]: time="2024-02-09T08:55:29.801077102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:29.801246 env[1199]: time="2024-02-09T08:55:29.801116503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:29.801954 env[1199]: time="2024-02-09T08:55:29.801761676Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f pid=2274 runtime=io.containerd.runc.v2
Feb  9 08:55:29.910454 env[1199]: time="2024-02-09T08:55:29.910257122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8g4b,Uid:b6d75155-37fc-484f-9681-1b8003bc5516,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\""
Feb  9 08:55:29.912584 kubelet[2133]: E0209 08:55:29.911892    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:29.915542 env[1199]: time="2024-02-09T08:55:29.915480391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29ntt,Uid:b7bc2449-dd33-4f19-8be6-def75967e05b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0374944b7e375ea992a43b0118bb547277a845944c90d6372d11da55c62c1e17\""
Feb  9 08:55:29.917626 env[1199]: time="2024-02-09T08:55:29.915542941Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb  9 08:55:29.919459 kubelet[2133]: E0209 08:55:29.919067    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:29.929217 env[1199]: time="2024-02-09T08:55:29.929158656Z" level=info msg="CreateContainer within sandbox \"0374944b7e375ea992a43b0118bb547277a845944c90d6372d11da55c62c1e17\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb  9 08:55:29.951656 env[1199]: time="2024-02-09T08:55:29.951578650Z" level=info msg="CreateContainer within sandbox \"0374944b7e375ea992a43b0118bb547277a845944c90d6372d11da55c62c1e17\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7c6b4a4a88c84b934be73e21622b02569757461480e45e54662c73b5e74b449\""
Feb  9 08:55:29.953360 env[1199]: time="2024-02-09T08:55:29.953315105Z" level=info msg="StartContainer for \"a7c6b4a4a88c84b934be73e21622b02569757461480e45e54662c73b5e74b449\""
Feb  9 08:55:29.955498 kubelet[2133]: E0209 08:55:29.955443    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:29.959883 env[1199]: time="2024-02-09T08:55:29.959826899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-pv9jg,Uid:e0817e9c-98c1-4901-82a5-dc438f5090ef,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:30.008652 env[1199]: time="2024-02-09T08:55:30.004179286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:30.008652 env[1199]: time="2024-02-09T08:55:30.004234435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:30.008652 env[1199]: time="2024-02-09T08:55:30.004261556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:30.008652 env[1199]: time="2024-02-09T08:55:30.004720117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb pid=2364 runtime=io.containerd.runc.v2
Feb  9 08:55:30.071263 env[1199]: time="2024-02-09T08:55:30.070461784Z" level=info msg="StartContainer for \"a7c6b4a4a88c84b934be73e21622b02569757461480e45e54662c73b5e74b449\" returns successfully"
Feb  9 08:55:30.112532 env[1199]: time="2024-02-09T08:55:30.112315772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-pv9jg,Uid:e0817e9c-98c1-4901-82a5-dc438f5090ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\""
Feb  9 08:55:30.113853 kubelet[2133]: E0209 08:55:30.113817    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:30.658220 kubelet[2133]: E0209 08:55:30.657803    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:30.671546 kubelet[2133]: I0209 08:55:30.671495    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-29ntt" podStartSLOduration=2.671429679 pod.CreationTimestamp="2024-02-09 08:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:30.671056368 +0000 UTC m=+15.613900946" watchObservedRunningTime="2024-02-09 08:55:30.671429679 +0000 UTC m=+15.614274257"
Feb  9 08:55:31.681930 kubelet[2133]: E0209 08:55:31.681886    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:34.972698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount72928871.mount: Deactivated successfully.
Feb  9 08:55:38.343091 env[1199]: time="2024-02-09T08:55:38.342998911Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:38.345637 env[1199]: time="2024-02-09T08:55:38.345585159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:38.347801 env[1199]: time="2024-02-09T08:55:38.347755679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:38.348591 env[1199]: time="2024-02-09T08:55:38.348535060Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb  9 08:55:38.350010 env[1199]: time="2024-02-09T08:55:38.349983070Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb  9 08:55:38.354986 env[1199]: time="2024-02-09T08:55:38.354929402Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 08:55:38.372144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94680178.mount: Deactivated successfully.
Feb  9 08:55:38.385171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679752233.mount: Deactivated successfully.
Feb  9 08:55:38.405085 env[1199]: time="2024-02-09T08:55:38.405033430Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\""
Feb  9 08:55:38.407588 env[1199]: time="2024-02-09T08:55:38.407547662Z" level=info msg="StartContainer for \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\""
Feb  9 08:55:38.510706 env[1199]: time="2024-02-09T08:55:38.510338338Z" level=info msg="StartContainer for \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\" returns successfully"
Feb  9 08:55:38.548272 env[1199]: time="2024-02-09T08:55:38.548208905Z" level=info msg="shim disconnected" id=a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e
Feb  9 08:55:38.548272 env[1199]: time="2024-02-09T08:55:38.548257367Z" level=warning msg="cleaning up after shim disconnected" id=a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e namespace=k8s.io
Feb  9 08:55:38.548272 env[1199]: time="2024-02-09T08:55:38.548269418Z" level=info msg="cleaning up dead shim"
Feb  9 08:55:38.559568 env[1199]: time="2024-02-09T08:55:38.559498710Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:55:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2565 runtime=io.containerd.runc.v2\n"
Feb  9 08:55:38.696328 kubelet[2133]: E0209 08:55:38.695867    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:38.699787 env[1199]: time="2024-02-09T08:55:38.699042824Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 08:55:38.747640 env[1199]: time="2024-02-09T08:55:38.747564891Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\""
Feb  9 08:55:38.750131 env[1199]: time="2024-02-09T08:55:38.749999631Z" level=info msg="StartContainer for \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\""
Feb  9 08:55:38.843050 env[1199]: time="2024-02-09T08:55:38.842989893Z" level=info msg="StartContainer for \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\" returns successfully"
Feb  9 08:55:38.845874 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb  9 08:55:38.846139 systemd[1]: Stopped systemd-sysctl.service.
Feb  9 08:55:38.847315 systemd[1]: Stopping systemd-sysctl.service...
Feb  9 08:55:38.850425 systemd[1]: Starting systemd-sysctl.service...
Feb  9 08:55:38.870633 systemd[1]: Finished systemd-sysctl.service.
Feb  9 08:55:38.887806 env[1199]: time="2024-02-09T08:55:38.887746269Z" level=info msg="shim disconnected" id=512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18
Feb  9 08:55:38.887806 env[1199]: time="2024-02-09T08:55:38.887799051Z" level=warning msg="cleaning up after shim disconnected" id=512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18 namespace=k8s.io
Feb  9 08:55:38.887806 env[1199]: time="2024-02-09T08:55:38.887808810Z" level=info msg="cleaning up dead shim"
Feb  9 08:55:38.898754 env[1199]: time="2024-02-09T08:55:38.898708198Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:55:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2632 runtime=io.containerd.runc.v2\n"
Feb  9 08:55:39.368957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e-rootfs.mount: Deactivated successfully.
Feb  9 08:55:39.705210 kubelet[2133]: E0209 08:55:39.704503    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:39.710597 env[1199]: time="2024-02-09T08:55:39.710544476Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 08:55:39.748098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862649721.mount: Deactivated successfully.
Feb  9 08:55:39.757066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981565748.mount: Deactivated successfully.
Feb  9 08:55:39.800183 env[1199]: time="2024-02-09T08:55:39.800113974Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\""
Feb  9 08:55:39.802724 env[1199]: time="2024-02-09T08:55:39.801846699Z" level=info msg="StartContainer for \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\""
Feb  9 08:55:39.882456 env[1199]: time="2024-02-09T08:55:39.882412438Z" level=info msg="StartContainer for \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\" returns successfully"
Feb  9 08:55:39.918860 env[1199]: time="2024-02-09T08:55:39.918795250Z" level=info msg="shim disconnected" id=a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc
Feb  9 08:55:39.918860 env[1199]: time="2024-02-09T08:55:39.918864740Z" level=warning msg="cleaning up after shim disconnected" id=a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc namespace=k8s.io
Feb  9 08:55:39.919132 env[1199]: time="2024-02-09T08:55:39.918879511Z" level=info msg="cleaning up dead shim"
Feb  9 08:55:39.936605 env[1199]: time="2024-02-09T08:55:39.936540085Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:55:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2692 runtime=io.containerd.runc.v2\n"
Feb  9 08:55:40.664387 env[1199]: time="2024-02-09T08:55:40.664291224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:40.667503 env[1199]: time="2024-02-09T08:55:40.667458629Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:40.680605 env[1199]: time="2024-02-09T08:55:40.680523169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb  9 08:55:40.681876 env[1199]: time="2024-02-09T08:55:40.681793220Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb  9 08:55:40.690445 env[1199]: time="2024-02-09T08:55:40.688825375Z" level=info msg="CreateContainer within sandbox \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb  9 08:55:40.702442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198494011.mount: Deactivated successfully.
Feb  9 08:55:40.709873 env[1199]: time="2024-02-09T08:55:40.709765621Z" level=info msg="CreateContainer within sandbox \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\""
Feb  9 08:55:40.711718 env[1199]: time="2024-02-09T08:55:40.711670777Z" level=info msg="StartContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\""
Feb  9 08:55:40.714192 kubelet[2133]: E0209 08:55:40.713853    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:40.727270 env[1199]: time="2024-02-09T08:55:40.727208388Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 08:55:40.781228 env[1199]: time="2024-02-09T08:55:40.779338220Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\""
Feb  9 08:55:40.784861 env[1199]: time="2024-02-09T08:55:40.784744494Z" level=info msg="StartContainer for \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\""
Feb  9 08:55:40.853794 env[1199]: time="2024-02-09T08:55:40.853689953Z" level=info msg="StartContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" returns successfully"
Feb  9 08:55:40.879333 env[1199]: time="2024-02-09T08:55:40.879269781Z" level=info msg="StartContainer for \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\" returns successfully"
Feb  9 08:55:40.924305 env[1199]: time="2024-02-09T08:55:40.924169280Z" level=info msg="shim disconnected" id=63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9
Feb  9 08:55:40.925069 env[1199]: time="2024-02-09T08:55:40.925035514Z" level=warning msg="cleaning up after shim disconnected" id=63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9 namespace=k8s.io
Feb  9 08:55:40.925212 env[1199]: time="2024-02-09T08:55:40.925193297Z" level=info msg="cleaning up dead shim"
Feb  9 08:55:40.940875 env[1199]: time="2024-02-09T08:55:40.940823476Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:55:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2786 runtime=io.containerd.runc.v2\n"
Feb  9 08:55:41.723272 kubelet[2133]: E0209 08:55:41.723243    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:41.725108 kubelet[2133]: E0209 08:55:41.724397    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:41.731129 env[1199]: time="2024-02-09T08:55:41.729331214Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 08:55:41.772539 env[1199]: time="2024-02-09T08:55:41.772447967Z" level=info msg="CreateContainer within sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\""
Feb  9 08:55:41.773407 env[1199]: time="2024-02-09T08:55:41.773346342Z" level=info msg="StartContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\""
Feb  9 08:55:41.954944 env[1199]: time="2024-02-09T08:55:41.954886405Z" level=info msg="StartContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" returns successfully"
Feb  9 08:55:42.123441 kubelet[2133]: I0209 08:55:42.123013    2133 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb  9 08:55:42.152744 kubelet[2133]: I0209 08:55:42.152597    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-pv9jg" podStartSLOduration=-9.223372023702248e+09 pod.CreationTimestamp="2024-02-09 08:55:29 +0000 UTC" firstStartedPulling="2024-02-09 08:55:30.115517746 +0000 UTC m=+15.058362316" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:41.878804114 +0000 UTC m=+26.821648692" watchObservedRunningTime="2024-02-09 08:55:42.152529141 +0000 UTC m=+27.095373720"
Feb  9 08:55:42.153558 kubelet[2133]: I0209 08:55:42.153519    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:42.158968 kubelet[2133]: I0209 08:55:42.158923    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:55:42.264277 kubelet[2133]: I0209 08:55:42.264223    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bde3d946-d9f6-49bc-83ce-b9435fd4de58-config-volume\") pod \"coredns-787d4945fb-z9zp7\" (UID: \"bde3d946-d9f6-49bc-83ce-b9435fd4de58\") " pod="kube-system/coredns-787d4945fb-z9zp7"
Feb  9 08:55:42.264668 kubelet[2133]: I0209 08:55:42.264643    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmn7c\" (UniqueName: \"kubernetes.io/projected/bde3d946-d9f6-49bc-83ce-b9435fd4de58-kube-api-access-fmn7c\") pod \"coredns-787d4945fb-z9zp7\" (UID: \"bde3d946-d9f6-49bc-83ce-b9435fd4de58\") " pod="kube-system/coredns-787d4945fb-z9zp7"
Feb  9 08:55:42.264913 kubelet[2133]: I0209 08:55:42.264897    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dfc0bcd-8c17-4b74-8122-7c0c42f23501-config-volume\") pod \"coredns-787d4945fb-j7hwc\" (UID: \"5dfc0bcd-8c17-4b74-8122-7c0c42f23501\") " pod="kube-system/coredns-787d4945fb-j7hwc"
Feb  9 08:55:42.265056 kubelet[2133]: I0209 08:55:42.265044    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxt4p\" (UniqueName: \"kubernetes.io/projected/5dfc0bcd-8c17-4b74-8122-7c0c42f23501-kube-api-access-sxt4p\") pod \"coredns-787d4945fb-j7hwc\" (UID: \"5dfc0bcd-8c17-4b74-8122-7c0c42f23501\") " pod="kube-system/coredns-787d4945fb-j7hwc"
Feb  9 08:55:42.470106 kubelet[2133]: E0209 08:55:42.470047    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:42.471655 env[1199]: time="2024-02-09T08:55:42.471241929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z9zp7,Uid:bde3d946-d9f6-49bc-83ce-b9435fd4de58,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:42.471853 kubelet[2133]: E0209 08:55:42.471349    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:42.472238 env[1199]: time="2024-02-09T08:55:42.472170716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j7hwc,Uid:5dfc0bcd-8c17-4b74-8122-7c0c42f23501,Namespace:kube-system,Attempt:0,}"
Feb  9 08:55:42.729335 kubelet[2133]: E0209 08:55:42.729211    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:42.730281 kubelet[2133]: E0209 08:55:42.730255    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:43.731746 kubelet[2133]: E0209 08:55:43.731711    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:44.494449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb  9 08:55:44.491852 systemd-networkd[1069]: cilium_host: Link UP
Feb  9 08:55:44.492044 systemd-networkd[1069]: cilium_net: Link UP
Feb  9 08:55:44.492051 systemd-networkd[1069]: cilium_net: Gained carrier
Feb  9 08:55:44.492272 systemd-networkd[1069]: cilium_host: Gained carrier
Feb  9 08:55:44.493664 systemd-networkd[1069]: cilium_host: Gained IPv6LL
Feb  9 08:55:44.650729 systemd-networkd[1069]: cilium_vxlan: Link UP
Feb  9 08:55:44.650739 systemd-networkd[1069]: cilium_vxlan: Gained carrier
Feb  9 08:55:44.736455 kubelet[2133]: E0209 08:55:44.736422    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:44.826919 systemd-networkd[1069]: cilium_net: Gained IPv6LL
Feb  9 08:55:45.051422 kernel: NET: Registered PF_ALG protocol family
Feb  9 08:55:45.912106 systemd-networkd[1069]: lxc_health: Link UP
Feb  9 08:55:45.923969 systemd-networkd[1069]: lxc_health: Gained carrier
Feb  9 08:55:45.924589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 08:55:46.087303 systemd-networkd[1069]: lxc599acdad9ea9: Link UP
Feb  9 08:55:46.094425 kernel: eth0: renamed from tmpe0368
Feb  9 08:55:46.104118 systemd-networkd[1069]: lxc599acdad9ea9: Gained carrier
Feb  9 08:55:46.104574 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc599acdad9ea9: link becomes ready
Feb  9 08:55:46.105258 systemd-networkd[1069]: lxca9d55eacb716: Link UP
Feb  9 08:55:46.122415 kernel: eth0: renamed from tmp3dcd5
Feb  9 08:55:46.136414 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca9d55eacb716: link becomes ready
Feb  9 08:55:46.136788 systemd-networkd[1069]: lxca9d55eacb716: Gained carrier
Feb  9 08:55:46.222504 systemd-networkd[1069]: cilium_vxlan: Gained IPv6LL
Feb  9 08:55:47.226569 systemd-networkd[1069]: lxc_health: Gained IPv6LL
Feb  9 08:55:47.482568 systemd-networkd[1069]: lxc599acdad9ea9: Gained IPv6LL
Feb  9 08:55:47.762253 kubelet[2133]: E0209 08:55:47.760670    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:47.785459 kubelet[2133]: I0209 08:55:47.785412    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r8g4b" podStartSLOduration=-9.223372017069422e+09 pod.CreationTimestamp="2024-02-09 08:55:28 +0000 UTC" firstStartedPulling="2024-02-09 08:55:29.913595369 +0000 UTC m=+14.856439925" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:42.778305158 +0000 UTC m=+27.721149736" watchObservedRunningTime="2024-02-09 08:55:47.785354008 +0000 UTC m=+32.728198681"
Feb  9 08:55:47.802685 systemd-networkd[1069]: lxca9d55eacb716: Gained IPv6LL
Feb  9 08:55:48.743756 kubelet[2133]: E0209 08:55:48.743719    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:49.745659 kubelet[2133]: E0209 08:55:49.745623    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:50.492388 env[1199]: time="2024-02-09T08:55:50.492295517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:50.492813 env[1199]: time="2024-02-09T08:55:50.492414963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:50.492813 env[1199]: time="2024-02-09T08:55:50.492452463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:50.492813 env[1199]: time="2024-02-09T08:55:50.492755303Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e036818430311622fc9e48c111e94a49560b1e6fb3a3a058417dbf3f58ebdacf pid=3333 runtime=io.containerd.runc.v2
Feb  9 08:55:50.511212 env[1199]: time="2024-02-09T08:55:50.511120933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:55:50.511212 env[1199]: time="2024-02-09T08:55:50.511219103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:55:50.511427 env[1199]: time="2024-02-09T08:55:50.511241620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:55:50.511482 env[1199]: time="2024-02-09T08:55:50.511443405Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3dcd50a15b3f9ca61d5b9a6097fa0df615a554bfc2b5f20ad0bb7903bd13cbc1 pid=3350 runtime=io.containerd.runc.v2
Feb  9 08:55:50.578081 systemd[1]: run-containerd-runc-k8s.io-3dcd50a15b3f9ca61d5b9a6097fa0df615a554bfc2b5f20ad0bb7903bd13cbc1-runc.ExXR7C.mount: Deactivated successfully.
Feb  9 08:55:50.643887 env[1199]: time="2024-02-09T08:55:50.643826931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-j7hwc,Uid:5dfc0bcd-8c17-4b74-8122-7c0c42f23501,Namespace:kube-system,Attempt:0,} returns sandbox id \"e036818430311622fc9e48c111e94a49560b1e6fb3a3a058417dbf3f58ebdacf\""
Feb  9 08:55:50.646363 kubelet[2133]: E0209 08:55:50.645354    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:50.656026 env[1199]: time="2024-02-09T08:55:50.655963280Z" level=info msg="CreateContainer within sandbox \"e036818430311622fc9e48c111e94a49560b1e6fb3a3a058417dbf3f58ebdacf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  9 08:55:50.688672 env[1199]: time="2024-02-09T08:55:50.688618007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z9zp7,Uid:bde3d946-d9f6-49bc-83ce-b9435fd4de58,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dcd50a15b3f9ca61d5b9a6097fa0df615a554bfc2b5f20ad0bb7903bd13cbc1\""
Feb  9 08:55:50.689489 env[1199]: time="2024-02-09T08:55:50.689440373Z" level=info msg="CreateContainer within sandbox \"e036818430311622fc9e48c111e94a49560b1e6fb3a3a058417dbf3f58ebdacf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4b83c8f3fef1b00c58f6b68f866f7f86324c0ef1e54d19315a73fef2f73bcda\""
Feb  9 08:55:50.690561 kubelet[2133]: E0209 08:55:50.690042    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:50.694669 env[1199]: time="2024-02-09T08:55:50.694602524Z" level=info msg="StartContainer for \"e4b83c8f3fef1b00c58f6b68f866f7f86324c0ef1e54d19315a73fef2f73bcda\""
Feb  9 08:55:50.695426 env[1199]: time="2024-02-09T08:55:50.695362280Z" level=info msg="CreateContainer within sandbox \"3dcd50a15b3f9ca61d5b9a6097fa0df615a554bfc2b5f20ad0bb7903bd13cbc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb  9 08:55:50.723225 env[1199]: time="2024-02-09T08:55:50.722931378Z" level=info msg="CreateContainer within sandbox \"3dcd50a15b3f9ca61d5b9a6097fa0df615a554bfc2b5f20ad0bb7903bd13cbc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c962a12971a8ff3852671a6d3beb92443137e373a2d55547e7d9f1c44e8e5f17\""
Feb  9 08:55:50.726783 env[1199]: time="2024-02-09T08:55:50.726734516Z" level=info msg="StartContainer for \"c962a12971a8ff3852671a6d3beb92443137e373a2d55547e7d9f1c44e8e5f17\""
Feb  9 08:55:50.825622 env[1199]: time="2024-02-09T08:55:50.824762230Z" level=info msg="StartContainer for \"e4b83c8f3fef1b00c58f6b68f866f7f86324c0ef1e54d19315a73fef2f73bcda\" returns successfully"
Feb  9 08:55:50.849326 env[1199]: time="2024-02-09T08:55:50.849258313Z" level=info msg="StartContainer for \"c962a12971a8ff3852671a6d3beb92443137e373a2d55547e7d9f1c44e8e5f17\" returns successfully"
Feb  9 08:55:51.770221 kubelet[2133]: E0209 08:55:51.770189    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:51.774934 kubelet[2133]: E0209 08:55:51.774904    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:51.800976 kubelet[2133]: I0209 08:55:51.800814    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-z9zp7" podStartSLOduration=22.800773901 pod.CreationTimestamp="2024-02-09 08:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:51.800474879 +0000 UTC m=+36.743319455" watchObservedRunningTime="2024-02-09 08:55:51.800773901 +0000 UTC m=+36.743618478"
Feb  9 08:55:51.801165 kubelet[2133]: I0209 08:55:51.801059    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-j7hwc" podStartSLOduration=22.801034984 pod.CreationTimestamp="2024-02-09 08:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:55:51.788778812 +0000 UTC m=+36.731623382" watchObservedRunningTime="2024-02-09 08:55:51.801034984 +0000 UTC m=+36.743879562"
Feb  9 08:55:52.775855 kubelet[2133]: E0209 08:55:52.775817    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:52.776605 kubelet[2133]: E0209 08:55:52.776575    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:53.777879 kubelet[2133]: E0209 08:55:53.777841    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:55:53.778551 kubelet[2133]: E0209 08:55:53.778534    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:01.848651 systemd[1]: Started sshd@5-164.90.156.194:22-34.176.20.17:33458.service.
Feb  9 08:56:03.625829 sshd[3536]: Invalid user hbliu from 34.176.20.17 port 33458
Feb  9 08:56:03.631107 sshd[3536]: pam_faillock(sshd:auth): User unknown
Feb  9 08:56:03.631993 sshd[3536]: pam_unix(sshd:auth): check pass; user unknown
Feb  9 08:56:03.632066 sshd[3536]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=34.176.20.17
Feb  9 08:56:03.632651 sshd[3536]: pam_faillock(sshd:auth): User unknown
Feb  9 08:56:05.826717 sshd[3536]: Failed password for invalid user hbliu from 34.176.20.17 port 33458 ssh2
Feb  9 08:56:06.368093 sshd[3536]: Received disconnect from 34.176.20.17 port 33458:11: Bye Bye [preauth]
Feb  9 08:56:06.368093 sshd[3536]: Disconnected from invalid user hbliu 34.176.20.17 port 33458 [preauth]
Feb  9 08:56:06.370143 systemd[1]: sshd@5-164.90.156.194:22-34.176.20.17:33458.service: Deactivated successfully.
Feb  9 08:56:14.622337 systemd[1]: Started sshd@6-164.90.156.194:22-125.167.130.131:34752.service.
Feb  9 08:56:17.276268 sshd[3541]: Invalid user keycloak from 125.167.130.131 port 34752
Feb  9 08:56:17.279721 sshd[3541]: pam_faillock(sshd:auth): User unknown
Feb  9 08:56:17.280391 sshd[3541]: pam_unix(sshd:auth): check pass; user unknown
Feb  9 08:56:17.280441 sshd[3541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.167.130.131
Feb  9 08:56:17.281194 sshd[3541]: pam_faillock(sshd:auth): User unknown
Feb  9 08:56:19.731174 sshd[3541]: Failed password for invalid user keycloak from 125.167.130.131 port 34752 ssh2
Feb  9 08:56:21.251904 sshd[3541]: Received disconnect from 125.167.130.131 port 34752:11: Bye Bye [preauth]
Feb  9 08:56:21.251904 sshd[3541]: Disconnected from invalid user keycloak 125.167.130.131 port 34752 [preauth]
Feb  9 08:56:21.253798 systemd[1]: sshd@6-164.90.156.194:22-125.167.130.131:34752.service: Deactivated successfully.
Feb  9 08:56:21.724670 systemd[1]: Started sshd@7-164.90.156.194:22-139.178.89.65:52848.service.
Feb  9 08:56:21.779156 sshd[3547]: Accepted publickey for core from 139.178.89.65 port 52848 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:21.781848 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:21.790561 systemd-logind[1179]: New session 6 of user core.
Feb  9 08:56:21.792423 systemd[1]: Started session-6.scope.
Feb  9 08:56:21.982187 sshd[3547]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:21.986746 systemd[1]: sshd@7-164.90.156.194:22-139.178.89.65:52848.service: Deactivated successfully.
Feb  9 08:56:21.988127 systemd[1]: session-6.scope: Deactivated successfully.
Feb  9 08:56:21.988813 systemd-logind[1179]: Session 6 logged out. Waiting for processes to exit.
Feb  9 08:56:21.989962 systemd-logind[1179]: Removed session 6.
Feb  9 08:56:26.607468 kubelet[2133]: E0209 08:56:26.607411    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:26.987657 systemd[1]: Started sshd@8-164.90.156.194:22-139.178.89.65:52858.service.
Feb  9 08:56:27.040888 sshd[3561]: Accepted publickey for core from 139.178.89.65 port 52858 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:27.042911 sshd[3561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:27.049620 systemd[1]: Started session-7.scope.
Feb  9 08:56:27.050002 systemd-logind[1179]: New session 7 of user core.
Feb  9 08:56:27.206316 sshd[3561]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:27.210657 systemd[1]: sshd@8-164.90.156.194:22-139.178.89.65:52858.service: Deactivated successfully.
Feb  9 08:56:27.212467 systemd[1]: session-7.scope: Deactivated successfully.
Feb  9 08:56:27.213208 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit.
Feb  9 08:56:27.214849 systemd-logind[1179]: Removed session 7.
Feb  9 08:56:32.212327 systemd[1]: Started sshd@9-164.90.156.194:22-139.178.89.65:38778.service.
Feb  9 08:56:32.273767 sshd[3577]: Accepted publickey for core from 139.178.89.65 port 38778 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:32.275840 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:32.283519 systemd[1]: Started session-8.scope.
Feb  9 08:56:32.283828 systemd-logind[1179]: New session 8 of user core.
Feb  9 08:56:32.453106 sshd[3577]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:32.463833 systemd[1]: sshd@9-164.90.156.194:22-139.178.89.65:38778.service: Deactivated successfully.
Feb  9 08:56:32.464819 systemd[1]: session-8.scope: Deactivated successfully.
Feb  9 08:56:32.465495 systemd-logind[1179]: Session 8 logged out. Waiting for processes to exit.
Feb  9 08:56:32.466226 systemd-logind[1179]: Removed session 8.
Feb  9 08:56:33.607813 kubelet[2133]: E0209 08:56:33.607770    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:37.452279 systemd[1]: Started sshd@10-164.90.156.194:22-139.178.89.65:38784.service.
Feb  9 08:56:37.513927 sshd[3591]: Accepted publickey for core from 139.178.89.65 port 38784 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:37.517182 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:37.524048 systemd[1]: Started session-9.scope.
Feb  9 08:56:37.524319 systemd-logind[1179]: New session 9 of user core.
Feb  9 08:56:37.685671 sshd[3591]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:37.689532 systemd[1]: sshd@10-164.90.156.194:22-139.178.89.65:38784.service: Deactivated successfully.
Feb  9 08:56:37.690943 systemd[1]: session-9.scope: Deactivated successfully.
Feb  9 08:56:37.690944 systemd-logind[1179]: Session 9 logged out. Waiting for processes to exit.
Feb  9 08:56:37.692503 systemd-logind[1179]: Removed session 9.
Feb  9 08:56:42.691698 systemd[1]: Started sshd@11-164.90.156.194:22-139.178.89.65:42186.service.
Feb  9 08:56:42.745730 sshd[3605]: Accepted publickey for core from 139.178.89.65 port 42186 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:42.748344 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:42.755046 systemd[1]: Started session-10.scope.
Feb  9 08:56:42.755536 systemd-logind[1179]: New session 10 of user core.
Feb  9 08:56:42.903260 sshd[3605]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:42.908941 systemd[1]: Started sshd@12-164.90.156.194:22-139.178.89.65:42202.service.
Feb  9 08:56:42.911546 systemd[1]: sshd@11-164.90.156.194:22-139.178.89.65:42186.service: Deactivated successfully.
Feb  9 08:56:42.913349 systemd[1]: session-10.scope: Deactivated successfully.
Feb  9 08:56:42.913862 systemd-logind[1179]: Session 10 logged out. Waiting for processes to exit.
Feb  9 08:56:42.915902 systemd-logind[1179]: Removed session 10.
Feb  9 08:56:42.966915 sshd[3617]: Accepted publickey for core from 139.178.89.65 port 42202 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:42.970015 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:42.978178 systemd[1]: Started session-11.scope.
Feb  9 08:56:42.978252 systemd-logind[1179]: New session 11 of user core.
Feb  9 08:56:43.607144 kubelet[2133]: E0209 08:56:43.607102    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:44.192355 sshd[3617]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:44.193739 systemd[1]: Started sshd@13-164.90.156.194:22-139.178.89.65:42214.service.
Feb  9 08:56:44.210940 systemd-logind[1179]: Session 11 logged out. Waiting for processes to exit.
Feb  9 08:56:44.214989 systemd[1]: sshd@12-164.90.156.194:22-139.178.89.65:42202.service: Deactivated successfully.
Feb  9 08:56:44.216359 systemd[1]: session-11.scope: Deactivated successfully.
Feb  9 08:56:44.219239 systemd-logind[1179]: Removed session 11.
Feb  9 08:56:44.281455 sshd[3628]: Accepted publickey for core from 139.178.89.65 port 42214 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:44.284105 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:44.290478 systemd-logind[1179]: New session 12 of user core.
Feb  9 08:56:44.290858 systemd[1]: Started session-12.scope.
Feb  9 08:56:44.467653 sshd[3628]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:44.471950 systemd-logind[1179]: Session 12 logged out. Waiting for processes to exit.
Feb  9 08:56:44.472106 systemd[1]: sshd@13-164.90.156.194:22-139.178.89.65:42214.service: Deactivated successfully.
Feb  9 08:56:44.473673 systemd[1]: session-12.scope: Deactivated successfully.
Feb  9 08:56:44.474597 systemd-logind[1179]: Removed session 12.
Feb  9 08:56:48.607759 kubelet[2133]: E0209 08:56:48.607718    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:49.472746 systemd[1]: Started sshd@14-164.90.156.194:22-139.178.89.65:57194.service.
Feb  9 08:56:49.525884 sshd[3643]: Accepted publickey for core from 139.178.89.65 port 57194 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:49.528614 sshd[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:49.537309 systemd[1]: Started session-13.scope.
Feb  9 08:56:49.537844 systemd-logind[1179]: New session 13 of user core.
Feb  9 08:56:49.701570 sshd[3643]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:49.705592 systemd[1]: sshd@14-164.90.156.194:22-139.178.89.65:57194.service: Deactivated successfully.
Feb  9 08:56:49.706567 systemd[1]: session-13.scope: Deactivated successfully.
Feb  9 08:56:49.706998 systemd-logind[1179]: Session 13 logged out. Waiting for processes to exit.
Feb  9 08:56:49.708290 systemd-logind[1179]: Removed session 13.
Feb  9 08:56:54.707500 systemd[1]: Started sshd@15-164.90.156.194:22-139.178.89.65:57200.service.
Feb  9 08:56:54.760616 sshd[3656]: Accepted publickey for core from 139.178.89.65 port 57200 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:54.763405 sshd[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:54.771080 systemd[1]: Started session-14.scope.
Feb  9 08:56:54.771615 systemd-logind[1179]: New session 14 of user core.
Feb  9 08:56:54.911398 sshd[3656]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:54.916636 systemd[1]: Started sshd@16-164.90.156.194:22-139.178.89.65:57208.service.
Feb  9 08:56:54.917360 systemd[1]: sshd@15-164.90.156.194:22-139.178.89.65:57200.service: Deactivated successfully.
Feb  9 08:56:54.920071 systemd-logind[1179]: Session 14 logged out. Waiting for processes to exit.
Feb  9 08:56:54.922752 systemd[1]: session-14.scope: Deactivated successfully.
Feb  9 08:56:54.924027 systemd-logind[1179]: Removed session 14.
Feb  9 08:56:54.979926 sshd[3667]: Accepted publickey for core from 139.178.89.65 port 57208 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:54.982242 sshd[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:54.988064 systemd-logind[1179]: New session 15 of user core.
Feb  9 08:56:54.989527 systemd[1]: Started session-15.scope.
Feb  9 08:56:55.332290 sshd[3667]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:55.339639 systemd[1]: Started sshd@17-164.90.156.194:22-139.178.89.65:57220.service.
Feb  9 08:56:55.341037 systemd[1]: sshd@16-164.90.156.194:22-139.178.89.65:57208.service: Deactivated successfully.
Feb  9 08:56:55.347268 systemd[1]: session-15.scope: Deactivated successfully.
Feb  9 08:56:55.349921 systemd-logind[1179]: Session 15 logged out. Waiting for processes to exit.
Feb  9 08:56:55.352868 systemd-logind[1179]: Removed session 15.
Feb  9 08:56:55.407311 sshd[3679]: Accepted publickey for core from 139.178.89.65 port 57220 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:55.409635 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:55.416640 systemd[1]: Started session-16.scope.
Feb  9 08:56:55.417217 systemd-logind[1179]: New session 16 of user core.
Feb  9 08:56:55.608242 kubelet[2133]: E0209 08:56:55.608101    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:56.617707 sshd[3679]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:56.621864 systemd[1]: Started sshd@18-164.90.156.194:22-139.178.89.65:57236.service.
Feb  9 08:56:56.626646 systemd[1]: sshd@17-164.90.156.194:22-139.178.89.65:57220.service: Deactivated successfully.
Feb  9 08:56:56.631978 systemd[1]: session-16.scope: Deactivated successfully.
Feb  9 08:56:56.632769 systemd-logind[1179]: Session 16 logged out. Waiting for processes to exit.
Feb  9 08:56:56.634382 systemd-logind[1179]: Removed session 16.
Feb  9 08:56:56.709244 sshd[3698]: Accepted publickey for core from 139.178.89.65 port 57236 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:56.712070 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:56.721202 systemd[1]: Started session-17.scope.
Feb  9 08:56:56.723008 systemd-logind[1179]: New session 17 of user core.
Feb  9 08:56:57.046700 sshd[3698]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:57.052156 systemd[1]: Started sshd@19-164.90.156.194:22-139.178.89.65:57242.service.
Feb  9 08:56:57.062282 systemd[1]: sshd@18-164.90.156.194:22-139.178.89.65:57236.service: Deactivated successfully.
Feb  9 08:56:57.064687 systemd[1]: session-17.scope: Deactivated successfully.
Feb  9 08:56:57.065675 systemd-logind[1179]: Session 17 logged out. Waiting for processes to exit.
Feb  9 08:56:57.071776 systemd-logind[1179]: Removed session 17.
Feb  9 08:56:57.123169 sshd[3755]: Accepted publickey for core from 139.178.89.65 port 57242 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:56:57.125825 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:56:57.131860 systemd[1]: Started session-18.scope.
Feb  9 08:56:57.133425 systemd-logind[1179]: New session 18 of user core.
Feb  9 08:56:57.274720 sshd[3755]: pam_unix(sshd:session): session closed for user core
Feb  9 08:56:57.278821 systemd[1]: sshd@19-164.90.156.194:22-139.178.89.65:57242.service: Deactivated successfully.
Feb  9 08:56:57.280629 systemd[1]: session-18.scope: Deactivated successfully.
Feb  9 08:56:57.281459 systemd-logind[1179]: Session 18 logged out. Waiting for processes to exit.
Feb  9 08:56:57.283236 systemd-logind[1179]: Removed session 18.
Feb  9 08:56:58.607879 kubelet[2133]: E0209 08:56:58.607836    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:56:59.606938 kubelet[2133]: E0209 08:56:59.606894    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:02.280342 systemd[1]: Started sshd@20-164.90.156.194:22-139.178.89.65:34506.service.
Feb  9 08:57:02.333712 sshd[3772]: Accepted publickey for core from 139.178.89.65 port 34506 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:02.335049 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:02.342834 systemd[1]: Started session-19.scope.
Feb  9 08:57:02.344454 systemd-logind[1179]: New session 19 of user core.
Feb  9 08:57:02.489915 sshd[3772]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:02.494866 systemd[1]: sshd@20-164.90.156.194:22-139.178.89.65:34506.service: Deactivated successfully.
Feb  9 08:57:02.496800 systemd-logind[1179]: Session 19 logged out. Waiting for processes to exit.
Feb  9 08:57:02.496932 systemd[1]: session-19.scope: Deactivated successfully.
Feb  9 08:57:02.498605 systemd-logind[1179]: Removed session 19.
Feb  9 08:57:07.494859 systemd[1]: Started sshd@21-164.90.156.194:22-139.178.89.65:34510.service.
Feb  9 08:57:07.550261 sshd[3812]: Accepted publickey for core from 139.178.89.65 port 34510 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:07.552458 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:07.558066 systemd[1]: Started session-20.scope.
Feb  9 08:57:07.558515 systemd-logind[1179]: New session 20 of user core.
Feb  9 08:57:07.711597 sshd[3812]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:07.715094 systemd[1]: sshd@21-164.90.156.194:22-139.178.89.65:34510.service: Deactivated successfully.
Feb  9 08:57:07.716530 systemd[1]: session-20.scope: Deactivated successfully.
Feb  9 08:57:07.717444 systemd-logind[1179]: Session 20 logged out. Waiting for processes to exit.
Feb  9 08:57:07.718406 systemd-logind[1179]: Removed session 20.
Feb  9 08:57:12.723258 systemd[1]: Started sshd@22-164.90.156.194:22-139.178.89.65:55228.service.
Feb  9 08:57:12.778697 sshd[3825]: Accepted publickey for core from 139.178.89.65 port 55228 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:12.781310 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:12.786469 systemd-logind[1179]: New session 21 of user core.
Feb  9 08:57:12.787060 systemd[1]: Started session-21.scope.
Feb  9 08:57:12.924672 sshd[3825]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:12.928735 systemd[1]: sshd@22-164.90.156.194:22-139.178.89.65:55228.service: Deactivated successfully.
Feb  9 08:57:12.930652 systemd-logind[1179]: Session 21 logged out. Waiting for processes to exit.
Feb  9 08:57:12.930690 systemd[1]: session-21.scope: Deactivated successfully.
Feb  9 08:57:12.931681 systemd-logind[1179]: Removed session 21.
Feb  9 08:57:17.607648 kubelet[2133]: E0209 08:57:17.607596    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:17.930610 systemd[1]: Started sshd@23-164.90.156.194:22-139.178.89.65:55232.service.
Feb  9 08:57:17.988399 sshd[3840]: Accepted publickey for core from 139.178.89.65 port 55232 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:17.990746 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:17.999105 systemd-logind[1179]: New session 22 of user core.
Feb  9 08:57:18.000237 systemd[1]: Started session-22.scope.
Feb  9 08:57:18.164001 sshd[3840]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:18.168523 systemd[1]: sshd@23-164.90.156.194:22-139.178.89.65:55232.service: Deactivated successfully.
Feb  9 08:57:18.171040 systemd[1]: session-22.scope: Deactivated successfully.
Feb  9 08:57:18.172086 systemd-logind[1179]: Session 22 logged out. Waiting for processes to exit.
Feb  9 08:57:18.174496 systemd-logind[1179]: Removed session 22.
Feb  9 08:57:23.168580 systemd[1]: Started sshd@24-164.90.156.194:22-139.178.89.65:40650.service.
Feb  9 08:57:23.223355 sshd[3853]: Accepted publickey for core from 139.178.89.65 port 40650 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:23.225591 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:23.230981 systemd[1]: Started session-23.scope.
Feb  9 08:57:23.231228 systemd-logind[1179]: New session 23 of user core.
Feb  9 08:57:23.370598 sshd[3853]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:23.374813 systemd[1]: sshd@24-164.90.156.194:22-139.178.89.65:40650.service: Deactivated successfully.
Feb  9 08:57:23.376476 systemd[1]: session-23.scope: Deactivated successfully.
Feb  9 08:57:23.377226 systemd-logind[1179]: Session 23 logged out. Waiting for processes to exit.
Feb  9 08:57:23.378517 systemd-logind[1179]: Removed session 23.
Feb  9 08:57:28.376494 systemd[1]: Started sshd@25-164.90.156.194:22-139.178.89.65:40738.service.
Feb  9 08:57:28.426658 sshd[3866]: Accepted publickey for core from 139.178.89.65 port 40738 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:28.429516 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:28.434824 systemd-logind[1179]: New session 24 of user core.
Feb  9 08:57:28.435670 systemd[1]: Started session-24.scope.
Feb  9 08:57:28.570050 sshd[3866]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:28.573340 systemd-logind[1179]: Session 24 logged out. Waiting for processes to exit.
Feb  9 08:57:28.573646 systemd[1]: sshd@25-164.90.156.194:22-139.178.89.65:40738.service: Deactivated successfully.
Feb  9 08:57:28.574623 systemd[1]: session-24.scope: Deactivated successfully.
Feb  9 08:57:28.575160 systemd-logind[1179]: Removed session 24.
Feb  9 08:57:33.575790 systemd[1]: Started sshd@26-164.90.156.194:22-139.178.89.65:40754.service.
Feb  9 08:57:33.630746 sshd[3881]: Accepted publickey for core from 139.178.89.65 port 40754 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:33.632708 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:33.639184 systemd[1]: Started session-25.scope.
Feb  9 08:57:33.639702 systemd-logind[1179]: New session 25 of user core.
Feb  9 08:57:33.788015 sshd[3881]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:33.788355 systemd[1]: Started sshd@27-164.90.156.194:22-139.178.89.65:40770.service.
Feb  9 08:57:33.793548 systemd[1]: sshd@26-164.90.156.194:22-139.178.89.65:40754.service: Deactivated successfully.
Feb  9 08:57:33.796034 systemd[1]: session-25.scope: Deactivated successfully.
Feb  9 08:57:33.797063 systemd-logind[1179]: Session 25 logged out. Waiting for processes to exit.
Feb  9 08:57:33.799862 systemd-logind[1179]: Removed session 25.
Feb  9 08:57:33.859129 sshd[3892]: Accepted publickey for core from 139.178.89.65 port 40770 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:33.861511 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:33.869296 systemd[1]: Started session-26.scope.
Feb  9 08:57:33.869966 systemd-logind[1179]: New session 26 of user core.
Feb  9 08:57:35.852775 systemd[1]: run-containerd-runc-k8s.io-c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4-runc.WytbKz.mount: Deactivated successfully.
Feb  9 08:57:35.870696 env[1199]: time="2024-02-09T08:57:35.870646048Z" level=info msg="StopContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" with timeout 30 (s)"
Feb  9 08:57:35.871360 env[1199]: time="2024-02-09T08:57:35.871327427Z" level=info msg="Stop container \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" with signal terminated"
Feb  9 08:57:35.911112 env[1199]: time="2024-02-09T08:57:35.910895424Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb  9 08:57:35.918719 env[1199]: time="2024-02-09T08:57:35.918584748Z" level=info msg="StopContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" with timeout 1 (s)"
Feb  9 08:57:35.918928 env[1199]: time="2024-02-09T08:57:35.918890436Z" level=info msg="Stop container \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" with signal terminated"
Feb  9 08:57:35.932182 systemd-networkd[1069]: lxc_health: Link DOWN
Feb  9 08:57:35.932191 systemd-networkd[1069]: lxc_health: Lost carrier
Feb  9 08:57:35.937012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730-rootfs.mount: Deactivated successfully.
Feb  9 08:57:35.976356 env[1199]: time="2024-02-09T08:57:35.974206575Z" level=info msg="shim disconnected" id=88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730
Feb  9 08:57:35.976356 env[1199]: time="2024-02-09T08:57:35.974251708Z" level=warning msg="cleaning up after shim disconnected" id=88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730 namespace=k8s.io
Feb  9 08:57:35.976356 env[1199]: time="2024-02-09T08:57:35.974260927Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:36.005110 env[1199]: time="2024-02-09T08:57:36.005059309Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:36.009620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4-rootfs.mount: Deactivated successfully.
Feb  9 08:57:36.011718 env[1199]: time="2024-02-09T08:57:36.011670637Z" level=info msg="StopContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" returns successfully"
Feb  9 08:57:36.012725 env[1199]: time="2024-02-09T08:57:36.012463198Z" level=info msg="StopPodSandbox for \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\""
Feb  9 08:57:36.012725 env[1199]: time="2024-02-09T08:57:36.012539192Z" level=info msg="Container to stop \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.021381 env[1199]: time="2024-02-09T08:57:36.021308288Z" level=info msg="shim disconnected" id=c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4
Feb  9 08:57:36.021727 env[1199]: time="2024-02-09T08:57:36.021699298Z" level=warning msg="cleaning up after shim disconnected" id=c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4 namespace=k8s.io
Feb  9 08:57:36.021727 env[1199]: time="2024-02-09T08:57:36.021722794Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:36.038779 env[1199]: time="2024-02-09T08:57:36.038731998Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3983 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:36.042781 env[1199]: time="2024-02-09T08:57:36.042722240Z" level=info msg="StopContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" returns successfully"
Feb  9 08:57:36.043286 env[1199]: time="2024-02-09T08:57:36.043254244Z" level=info msg="StopPodSandbox for \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\""
Feb  9 08:57:36.043402 env[1199]: time="2024-02-09T08:57:36.043324578Z" level=info msg="Container to stop \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.043402 env[1199]: time="2024-02-09T08:57:36.043340220Z" level=info msg="Container to stop \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.043402 env[1199]: time="2024-02-09T08:57:36.043352581Z" level=info msg="Container to stop \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.043402 env[1199]: time="2024-02-09T08:57:36.043366711Z" level=info msg="Container to stop \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.043402 env[1199]: time="2024-02-09T08:57:36.043390220Z" level=info msg="Container to stop \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:36.062866 env[1199]: time="2024-02-09T08:57:36.059575490Z" level=info msg="shim disconnected" id=ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb
Feb  9 08:57:36.062866 env[1199]: time="2024-02-09T08:57:36.059624550Z" level=warning msg="cleaning up after shim disconnected" id=ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb namespace=k8s.io
Feb  9 08:57:36.062866 env[1199]: time="2024-02-09T08:57:36.059634004Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:36.074348 env[1199]: time="2024-02-09T08:57:36.074293357Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4021 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:36.075761 env[1199]: time="2024-02-09T08:57:36.075718890Z" level=info msg="TearDown network for sandbox \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\" successfully"
Feb  9 08:57:36.075761 env[1199]: time="2024-02-09T08:57:36.075755049Z" level=info msg="StopPodSandbox for \"ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb\" returns successfully"
Feb  9 08:57:36.091707 env[1199]: time="2024-02-09T08:57:36.091643577Z" level=info msg="shim disconnected" id=0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f
Feb  9 08:57:36.091707 env[1199]: time="2024-02-09T08:57:36.091694633Z" level=warning msg="cleaning up after shim disconnected" id=0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f namespace=k8s.io
Feb  9 08:57:36.091707 env[1199]: time="2024-02-09T08:57:36.091707419Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:36.102257 env[1199]: time="2024-02-09T08:57:36.102169102Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:36.102684 env[1199]: time="2024-02-09T08:57:36.102619455Z" level=info msg="TearDown network for sandbox \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" successfully"
Feb  9 08:57:36.102684 env[1199]: time="2024-02-09T08:57:36.102659677Z" level=info msg="StopPodSandbox for \"0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f\" returns successfully"
Feb  9 08:57:36.187076 kubelet[2133]: I0209 08:57:36.187017    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48cc\" (UniqueName: \"kubernetes.io/projected/e0817e9c-98c1-4901-82a5-dc438f5090ef-kube-api-access-r48cc\") pod \"e0817e9c-98c1-4901-82a5-dc438f5090ef\" (UID: \"e0817e9c-98c1-4901-82a5-dc438f5090ef\") "
Feb  9 08:57:36.187606 kubelet[2133]: I0209 08:57:36.187462    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0817e9c-98c1-4901-82a5-dc438f5090ef-cilium-config-path\") pod \"e0817e9c-98c1-4901-82a5-dc438f5090ef\" (UID: \"e0817e9c-98c1-4901-82a5-dc438f5090ef\") "
Feb  9 08:57:36.188670 kubelet[2133]: W0209 08:57:36.188554    2133 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e0817e9c-98c1-4901-82a5-dc438f5090ef/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 08:57:36.191403 kubelet[2133]: I0209 08:57:36.190976    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0817e9c-98c1-4901-82a5-dc438f5090ef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e0817e9c-98c1-4901-82a5-dc438f5090ef" (UID: "e0817e9c-98c1-4901-82a5-dc438f5090ef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 08:57:36.193267 kubelet[2133]: I0209 08:57:36.193225    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0817e9c-98c1-4901-82a5-dc438f5090ef-kube-api-access-r48cc" (OuterVolumeSpecName: "kube-api-access-r48cc") pod "e0817e9c-98c1-4901-82a5-dc438f5090ef" (UID: "e0817e9c-98c1-4901-82a5-dc438f5090ef"). InnerVolumeSpecName "kube-api-access-r48cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 08:57:36.288641 kubelet[2133]: I0209 08:57:36.288590    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-xtables-lock\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.288641 kubelet[2133]: I0209 08:57:36.288655    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cni-path\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288686    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-bpf-maps\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288718    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-net\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288767    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-hubble-tls\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288796    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-etc-cni-netd\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288829    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-run\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289053 kubelet[2133]: I0209 08:57:36.288971    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-lib-modules\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289020    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkvgz\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289053    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-cgroup\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289090    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6d75155-37fc-484f-9681-1b8003bc5516-clustermesh-secrets\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289124    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-kernel\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289164    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-config-path\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289283 kubelet[2133]: I0209 08:57:36.289193    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-hostproc\") pod \"b6d75155-37fc-484f-9681-1b8003bc5516\" (UID: \"b6d75155-37fc-484f-9681-1b8003bc5516\") "
Feb  9 08:57:36.289725 kubelet[2133]: I0209 08:57:36.289688    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.289788 kubelet[2133]: I0209 08:57:36.289738    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.289788 kubelet[2133]: I0209 08:57:36.289765    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.289851 kubelet[2133]: I0209 08:57:36.289791    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.289970 kubelet[2133]: I0209 08:57:36.289951    2133 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-r48cc\" (UniqueName: \"kubernetes.io/projected/e0817e9c-98c1-4901-82a5-dc438f5090ef-kube-api-access-r48cc\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.290031 kubelet[2133]: I0209 08:57:36.290012    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.290241 kubelet[2133]: I0209 08:57:36.290163    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0817e9c-98c1-4901-82a5-dc438f5090ef-cilium-config-path\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.290408 kubelet[2133]: I0209 08:57:36.290283    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.290489 kubelet[2133]: I0209 08:57:36.290419    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.292599 kubelet[2133]: I0209 08:57:36.291555    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.292599 kubelet[2133]: I0209 08:57:36.292103    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.292599 kubelet[2133]: W0209 08:57:36.292291    2133 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b6d75155-37fc-484f-9681-1b8003bc5516/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 08:57:36.292599 kubelet[2133]: I0209 08:57:36.292506    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:36.296764 kubelet[2133]: I0209 08:57:36.296709    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 08:57:36.307699 kubelet[2133]: I0209 08:57:36.303400    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 08:57:36.307699 kubelet[2133]: I0209 08:57:36.307026    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d75155-37fc-484f-9681-1b8003bc5516-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 08:57:36.307699 kubelet[2133]: I0209 08:57:36.307647    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz" (OuterVolumeSpecName: "kube-api-access-fkvgz") pod "b6d75155-37fc-484f-9681-1b8003bc5516" (UID: "b6d75155-37fc-484f-9681-1b8003bc5516"). InnerVolumeSpecName "kube-api-access-fkvgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 08:57:36.390913 kubelet[2133]: I0209 08:57:36.390861    2133 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-etc-cni-netd\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391210 kubelet[2133]: I0209 08:57:36.391193    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-run\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391325 kubelet[2133]: I0209 08:57:36.391312    2133 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-hubble-tls\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391472 kubelet[2133]: I0209 08:57:36.391459    2133 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-lib-modules\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391575 kubelet[2133]: I0209 08:57:36.391564    2133 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fkvgz\" (UniqueName: \"kubernetes.io/projected/b6d75155-37fc-484f-9681-1b8003bc5516-kube-api-access-fkvgz\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391661 kubelet[2133]: I0209 08:57:36.391650    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-cgroup\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391750 kubelet[2133]: I0209 08:57:36.391740    2133 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6d75155-37fc-484f-9681-1b8003bc5516-clustermesh-secrets\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391859 kubelet[2133]: I0209 08:57:36.391849    2133 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-kernel\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.391943 kubelet[2133]: I0209 08:57:36.391934    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6d75155-37fc-484f-9681-1b8003bc5516-cilium-config-path\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.392027 kubelet[2133]: I0209 08:57:36.392016    2133 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-hostproc\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.392111 kubelet[2133]: I0209 08:57:36.392101    2133 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-xtables-lock\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.392192 kubelet[2133]: I0209 08:57:36.392183    2133 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-cni-path\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.392283 kubelet[2133]: I0209 08:57:36.392271    2133 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-bpf-maps\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.392389 kubelet[2133]: I0209 08:57:36.392360    2133 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6d75155-37fc-484f-9681-1b8003bc5516-host-proc-sys-net\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:36.842934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb-rootfs.mount: Deactivated successfully.
Feb  9 08:57:36.843122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee0e91dfc3421f8308320eef53824d835a9e29febc486fa95e607e891ae0faeb-shm.mount: Deactivated successfully.
Feb  9 08:57:36.843242 systemd[1]: var-lib-kubelet-pods-e0817e9c\x2d98c1\x2d4901\x2d82a5\x2ddc438f5090ef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr48cc.mount: Deactivated successfully.
Feb  9 08:57:36.843339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f-rootfs.mount: Deactivated successfully.
Feb  9 08:57:36.843440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e0bb8afc896efa295c1e784cfe27621be71255c52ce7c0ef9a0531de1c2f23f-shm.mount: Deactivated successfully.
Feb  9 08:57:36.843532 systemd[1]: var-lib-kubelet-pods-b6d75155\x2d37fc\x2d484f\x2d9681\x2d1b8003bc5516-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfkvgz.mount: Deactivated successfully.
Feb  9 08:57:36.843637 systemd[1]: var-lib-kubelet-pods-b6d75155\x2d37fc\x2d484f\x2d9681\x2d1b8003bc5516-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 08:57:36.843737 systemd[1]: var-lib-kubelet-pods-b6d75155\x2d37fc\x2d484f\x2d9681\x2d1b8003bc5516-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 08:57:37.027916 kubelet[2133]: I0209 08:57:37.027827    2133 scope.go:115] "RemoveContainer" containerID="88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730"
Feb  9 08:57:37.036288 env[1199]: time="2024-02-09T08:57:37.035599298Z" level=info msg="RemoveContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\""
Feb  9 08:57:37.051699 env[1199]: time="2024-02-09T08:57:37.051435613Z" level=info msg="RemoveContainer for \"88f22b4fe5058f7fc90578c08122adf818b79cce7b1aa99545091476469e6730\" returns successfully"
Feb  9 08:57:37.054123 kubelet[2133]: I0209 08:57:37.054054    2133 scope.go:115] "RemoveContainer" containerID="c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4"
Feb  9 08:57:37.059911 env[1199]: time="2024-02-09T08:57:37.058288134Z" level=info msg="RemoveContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\""
Feb  9 08:57:37.077815 env[1199]: time="2024-02-09T08:57:37.077724186Z" level=info msg="RemoveContainer for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" returns successfully"
Feb  9 08:57:37.078961 kubelet[2133]: I0209 08:57:37.078907    2133 scope.go:115] "RemoveContainer" containerID="63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9"
Feb  9 08:57:37.082755 env[1199]: time="2024-02-09T08:57:37.082713205Z" level=info msg="RemoveContainer for \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\""
Feb  9 08:57:37.094079 env[1199]: time="2024-02-09T08:57:37.093948546Z" level=info msg="RemoveContainer for \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\" returns successfully"
Feb  9 08:57:37.095676 kubelet[2133]: I0209 08:57:37.095646    2133 scope.go:115] "RemoveContainer" containerID="a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc"
Feb  9 08:57:37.097742 env[1199]: time="2024-02-09T08:57:37.097702501Z" level=info msg="RemoveContainer for \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\""
Feb  9 08:57:37.105512 env[1199]: time="2024-02-09T08:57:37.105438098Z" level=info msg="RemoveContainer for \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\" returns successfully"
Feb  9 08:57:37.106171 kubelet[2133]: I0209 08:57:37.106143    2133 scope.go:115] "RemoveContainer" containerID="512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18"
Feb  9 08:57:37.108964 env[1199]: time="2024-02-09T08:57:37.108915951Z" level=info msg="RemoveContainer for \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\""
Feb  9 08:57:37.116930 env[1199]: time="2024-02-09T08:57:37.116873195Z" level=info msg="RemoveContainer for \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\" returns successfully"
Feb  9 08:57:37.117607 kubelet[2133]: I0209 08:57:37.117577    2133 scope.go:115] "RemoveContainer" containerID="a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e"
Feb  9 08:57:37.122548 env[1199]: time="2024-02-09T08:57:37.122495846Z" level=info msg="RemoveContainer for \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\""
Feb  9 08:57:37.125786 env[1199]: time="2024-02-09T08:57:37.125727811Z" level=info msg="RemoveContainer for \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\" returns successfully"
Feb  9 08:57:37.126204 kubelet[2133]: I0209 08:57:37.126179    2133 scope.go:115] "RemoveContainer" containerID="c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4"
Feb  9 08:57:37.127005 env[1199]: time="2024-02-09T08:57:37.126903457Z" level=error msg="ContainerStatus for \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\": not found"
Feb  9 08:57:37.127886 kubelet[2133]: E0209 08:57:37.127859    2133 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\": not found" containerID="c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4"
Feb  9 08:57:37.128811 kubelet[2133]: I0209 08:57:37.128771    2133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4} err="failed to get container status \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2267f5e05346f8294f67a0c580c137a766be0b7818a2c8706402e5a69763ba4\": not found"
Feb  9 08:57:37.129109 kubelet[2133]: I0209 08:57:37.129088    2133 scope.go:115] "RemoveContainer" containerID="63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9"
Feb  9 08:57:37.129635 env[1199]: time="2024-02-09T08:57:37.129553116Z" level=error msg="ContainerStatus for \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\": not found"
Feb  9 08:57:37.130016 kubelet[2133]: E0209 08:57:37.129997    2133 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\": not found" containerID="63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9"
Feb  9 08:57:37.130187 kubelet[2133]: I0209 08:57:37.130170    2133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9} err="failed to get container status \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\": rpc error: code = NotFound desc = an error occurred when try to find container \"63a5554f44360ebb7ff6dc35b373e0fe80ad347aa52a8ebc4f8f15a25d8d5fa9\": not found"
Feb  9 08:57:37.130348 kubelet[2133]: I0209 08:57:37.130334    2133 scope.go:115] "RemoveContainer" containerID="a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc"
Feb  9 08:57:37.130774 env[1199]: time="2024-02-09T08:57:37.130699190Z" level=error msg="ContainerStatus for \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\": not found"
Feb  9 08:57:37.131093 kubelet[2133]: E0209 08:57:37.131077    2133 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\": not found" containerID="a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc"
Feb  9 08:57:37.131266 kubelet[2133]: I0209 08:57:37.131252    2133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc} err="failed to get container status \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\": rpc error: code = NotFound desc = an error occurred when try to find container \"a433db4909b62bc4cf0483371667715214b0ba08ec843f48efd033cd39e615bc\": not found"
Feb  9 08:57:37.131404 kubelet[2133]: I0209 08:57:37.131390    2133 scope.go:115] "RemoveContainer" containerID="512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18"
Feb  9 08:57:37.131856 env[1199]: time="2024-02-09T08:57:37.131777921Z" level=error msg="ContainerStatus for \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\": not found"
Feb  9 08:57:37.131988 kubelet[2133]: E0209 08:57:37.131971    2133 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\": not found" containerID="512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18"
Feb  9 08:57:37.132061 kubelet[2133]: I0209 08:57:37.132003    2133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18} err="failed to get container status \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\": rpc error: code = NotFound desc = an error occurred when try to find container \"512848781b08af0c37637142578633a5749c53dc62106d8e924419a856340f18\": not found"
Feb  9 08:57:37.132061 kubelet[2133]: I0209 08:57:37.132015    2133 scope.go:115] "RemoveContainer" containerID="a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e"
Feb  9 08:57:37.132483 env[1199]: time="2024-02-09T08:57:37.132414346Z" level=error msg="ContainerStatus for \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\": not found"
Feb  9 08:57:37.132787 kubelet[2133]: E0209 08:57:37.132769    2133 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\": not found" containerID="a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e"
Feb  9 08:57:37.133082 kubelet[2133]: I0209 08:57:37.133065    2133 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e} err="failed to get container status \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7b126d99abf4c0de6744d4d9a6b6b24d9f8627366e1a21c7bf059d854d9681e\": not found"
Feb  9 08:57:37.611301 kubelet[2133]: I0209 08:57:37.611272    2133 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b6d75155-37fc-484f-9681-1b8003bc5516 path="/var/lib/kubelet/pods/b6d75155-37fc-484f-9681-1b8003bc5516/volumes"
Feb  9 08:57:37.613169 kubelet[2133]: I0209 08:57:37.613139    2133 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e0817e9c-98c1-4901-82a5-dc438f5090ef path="/var/lib/kubelet/pods/e0817e9c-98c1-4901-82a5-dc438f5090ef/volumes"
Feb  9 08:57:37.762021 sshd[3892]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:37.766414 systemd[1]: Started sshd@28-164.90.156.194:22-139.178.89.65:40782.service.
Feb  9 08:57:37.772358 systemd[1]: sshd@27-164.90.156.194:22-139.178.89.65:40770.service: Deactivated successfully.
Feb  9 08:57:37.773770 systemd[1]: session-26.scope: Deactivated successfully.
Feb  9 08:57:37.775460 systemd-logind[1179]: Session 26 logged out. Waiting for processes to exit.
Feb  9 08:57:37.777475 systemd-logind[1179]: Removed session 26.
Feb  9 08:57:37.843430 sshd[4065]: Accepted publickey for core from 139.178.89.65 port 40782 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:37.846014 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:37.854068 systemd[1]: Started session-27.scope.
Feb  9 08:57:37.855096 systemd-logind[1179]: New session 27 of user core.
Feb  9 08:57:38.891869 sshd[4065]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:38.896035 systemd[1]: Started sshd@29-164.90.156.194:22-139.178.89.65:43992.service.
Feb  9 08:57:38.901684 systemd[1]: sshd@28-164.90.156.194:22-139.178.89.65:40782.service: Deactivated successfully.
Feb  9 08:57:38.904661 systemd-logind[1179]: Session 27 logged out. Waiting for processes to exit.
Feb  9 08:57:38.905272 systemd[1]: session-27.scope: Deactivated successfully.
Feb  9 08:57:38.908812 systemd-logind[1179]: Removed session 27.
Feb  9 08:57:38.933579 kubelet[2133]: I0209 08:57:38.933505    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936514    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="mount-cgroup"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936557    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="apply-sysctl-overwrites"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936568    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="mount-bpf-fs"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936585    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e0817e9c-98c1-4901-82a5-dc438f5090ef" containerName="cilium-operator"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936599    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="clean-cilium-state"
Feb  9 08:57:38.936614 kubelet[2133]: E0209 08:57:38.936611    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="cilium-agent"
Feb  9 08:57:38.937380 kubelet[2133]: I0209 08:57:38.936653    2133 memory_manager.go:346] "RemoveStaleState removing state" podUID="b6d75155-37fc-484f-9681-1b8003bc5516" containerName="cilium-agent"
Feb  9 08:57:38.937380 kubelet[2133]: I0209 08:57:38.936665    2133 memory_manager.go:346] "RemoveStaleState removing state" podUID="e0817e9c-98c1-4901-82a5-dc438f5090ef" containerName="cilium-operator"
Feb  9 08:57:38.971230 sshd[4077]: Accepted publickey for core from 139.178.89.65 port 43992 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:38.979613 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:38.991459 systemd-logind[1179]: New session 28 of user core.
Feb  9 08:57:38.992648 systemd[1]: Started session-28.scope.
Feb  9 08:57:39.115624 kubelet[2133]: I0209 08:57:39.115581    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-lib-modules\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115624 kubelet[2133]: I0209 08:57:39.115634    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-xtables-lock\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115660    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-run\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115683    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-ipsec-secrets\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115703    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-bpf-maps\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115730    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cni-path\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115759    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-etc-cni-netd\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.115891 kubelet[2133]: I0209 08:57:39.115792    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-clustermesh-secrets\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116160 kubelet[2133]: I0209 08:57:39.115819    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hubble-tls\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116160 kubelet[2133]: I0209 08:57:39.115840    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-config-path\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116160 kubelet[2133]: I0209 08:57:39.115859    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqwhd\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-kube-api-access-fqwhd\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116160 kubelet[2133]: I0209 08:57:39.115878    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-net\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116160 kubelet[2133]: I0209 08:57:39.115898    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-kernel\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116424 kubelet[2133]: I0209 08:57:39.115920    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hostproc\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.116424 kubelet[2133]: I0209 08:57:39.115941    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-cgroup\") pod \"cilium-bpqtg\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") " pod="kube-system/cilium-bpqtg"
Feb  9 08:57:39.179084 sshd[4077]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:39.183339 systemd[1]: Started sshd@30-164.90.156.194:22-139.178.89.65:43994.service.
Feb  9 08:57:39.197012 systemd[1]: sshd@29-164.90.156.194:22-139.178.89.65:43992.service: Deactivated successfully.
Feb  9 08:57:39.198412 systemd-logind[1179]: Session 28 logged out. Waiting for processes to exit.
Feb  9 08:57:39.198476 systemd[1]: session-28.scope: Deactivated successfully.
Feb  9 08:57:39.199874 systemd-logind[1179]: Removed session 28.
Feb  9 08:57:39.267454 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 43994 ssh2: RSA SHA256:zxCjWE6I1sqRNr8f+A5DoPj4YLVmU7ObDiNpO/GSq00
Feb  9 08:57:39.269671 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb  9 08:57:39.282572 systemd[1]: Started session-29.scope.
Feb  9 08:57:39.285596 systemd-logind[1179]: New session 29 of user core.
Feb  9 08:57:39.553232 kubelet[2133]: E0209 08:57:39.553093    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:39.553886 env[1199]: time="2024-02-09T08:57:39.553838474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bpqtg,Uid:387d9380-66bf-47fa-9603-6f9f0e1b8b6d,Namespace:kube-system,Attempt:0,}"
Feb  9 08:57:39.579982 env[1199]: time="2024-02-09T08:57:39.579867618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:57:39.580253 env[1199]: time="2024-02-09T08:57:39.579931307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:57:39.580443 env[1199]: time="2024-02-09T08:57:39.580396327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:57:39.580930 env[1199]: time="2024-02-09T08:57:39.580835695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59 pid=4114 runtime=io.containerd.runc.v2
Feb  9 08:57:39.609657 kubelet[2133]: E0209 08:57:39.608506    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:39.649200 env[1199]: time="2024-02-09T08:57:39.649150231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bpqtg,Uid:387d9380-66bf-47fa-9603-6f9f0e1b8b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\""
Feb  9 08:57:39.652290 kubelet[2133]: E0209 08:57:39.651860    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:39.658918 env[1199]: time="2024-02-09T08:57:39.658851342Z" level=info msg="CreateContainer within sandbox \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 08:57:39.685116 env[1199]: time="2024-02-09T08:57:39.685040251Z" level=info msg="CreateContainer within sandbox \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\""
Feb  9 08:57:39.687738 env[1199]: time="2024-02-09T08:57:39.685874976Z" level=info msg="StartContainer for \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\""
Feb  9 08:57:39.751672 env[1199]: time="2024-02-09T08:57:39.751618562Z" level=info msg="StartContainer for \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\" returns successfully"
Feb  9 08:57:39.798050 env[1199]: time="2024-02-09T08:57:39.797992883Z" level=info msg="shim disconnected" id=e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379
Feb  9 08:57:39.798462 env[1199]: time="2024-02-09T08:57:39.798433945Z" level=warning msg="cleaning up after shim disconnected" id=e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379 namespace=k8s.io
Feb  9 08:57:39.798612 env[1199]: time="2024-02-09T08:57:39.798594024Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:39.810087 env[1199]: time="2024-02-09T08:57:39.809942329Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4197 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:40.050763 env[1199]: time="2024-02-09T08:57:40.049674748Z" level=info msg="StopPodSandbox for \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\""
Feb  9 08:57:40.050763 env[1199]: time="2024-02-09T08:57:40.049772728Z" level=info msg="Container to stop \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb  9 08:57:40.105053 env[1199]: time="2024-02-09T08:57:40.104805220Z" level=info msg="shim disconnected" id=509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59
Feb  9 08:57:40.105336 env[1199]: time="2024-02-09T08:57:40.105304970Z" level=warning msg="cleaning up after shim disconnected" id=509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59 namespace=k8s.io
Feb  9 08:57:40.105469 env[1199]: time="2024-02-09T08:57:40.105452993Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:40.117034 env[1199]: time="2024-02-09T08:57:40.116971906Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4231 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:40.117626 env[1199]: time="2024-02-09T08:57:40.117589470Z" level=info msg="TearDown network for sandbox \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\" successfully"
Feb  9 08:57:40.117793 env[1199]: time="2024-02-09T08:57:40.117769094Z" level=info msg="StopPodSandbox for \"509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59\" returns successfully"
Feb  9 08:57:40.225142 kubelet[2133]: I0209 08:57:40.225091    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hostproc\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.225727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-509d225845736082977beab48077351bc130680a6a5840d681aae6b0793aaa59-shm.mount: Deactivated successfully.
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225712    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-lib-modules\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225764    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-kernel\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225783    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-net\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225817    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-cgroup\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225854    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-clustermesh-secrets\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226651 kubelet[2133]: I0209 08:57:40.225888    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-xtables-lock\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226881 kubelet[2133]: I0209 08:57:40.225907    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cni-path\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226881 kubelet[2133]: I0209 08:57:40.225930    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-run\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226881 kubelet[2133]: I0209 08:57:40.225964    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-ipsec-secrets\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226881 kubelet[2133]: I0209 08:57:40.225982    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-bpf-maps\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.226881 kubelet[2133]: I0209 08:57:40.226004    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hubble-tls\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.227645 kubelet[2133]: I0209 08:57:40.227077    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqwhd\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-kube-api-access-fqwhd\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.227645 kubelet[2133]: I0209 08:57:40.227133    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-etc-cni-netd\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.227645 kubelet[2133]: I0209 08:57:40.227158    2133 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-config-path\") pod \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\" (UID: \"387d9380-66bf-47fa-9603-6f9f0e1b8b6d\") "
Feb  9 08:57:40.227645 kubelet[2133]: I0209 08:57:40.227221    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hostproc" (OuterVolumeSpecName: "hostproc") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.227645 kubelet[2133]: I0209 08:57:40.227237    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.227881 kubelet[2133]: I0209 08:57:40.227250    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.227881 kubelet[2133]: I0209 08:57:40.227272    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.227881 kubelet[2133]: I0209 08:57:40.227286    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.228209 kubelet[2133]: I0209 08:57:40.228025    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.228524 kubelet[2133]: I0209 08:57:40.228496    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.228812 kubelet[2133]: W0209 08:57:40.228776    2133 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/387d9380-66bf-47fa-9603-6f9f0e1b8b6d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb  9 08:57:40.231481 systemd[1]: var-lib-kubelet-pods-387d9380\x2d66bf\x2d47fa\x2d9603\x2d6f9f0e1b8b6d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb  9 08:57:40.232910 kubelet[2133]: I0209 08:57:40.232878    2133 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-bpf-maps\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.232910 kubelet[2133]: I0209 08:57:40.232914    2133 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-etc-cni-netd\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233064 kubelet[2133]: I0209 08:57:40.232930    2133 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-lib-modules\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233064 kubelet[2133]: I0209 08:57:40.232947    2133 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-kernel\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233064 kubelet[2133]: I0209 08:57:40.232960    2133 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-host-proc-sys-net\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233064 kubelet[2133]: I0209 08:57:40.232972    2133 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hostproc\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233064 kubelet[2133]: I0209 08:57:40.232983    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-cgroup\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.233847 kubelet[2133]: I0209 08:57:40.233823    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.233992 kubelet[2133]: I0209 08:57:40.233978    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cni-path" (OuterVolumeSpecName: "cni-path") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.234087 kubelet[2133]: I0209 08:57:40.234075    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb  9 08:57:40.234467 kubelet[2133]: I0209 08:57:40.234424    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb  9 08:57:40.234540 kubelet[2133]: I0209 08:57:40.234528    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 08:57:40.238953 systemd[1]: var-lib-kubelet-pods-387d9380\x2d66bf\x2d47fa\x2d9603\x2d6f9f0e1b8b6d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb  9 08:57:40.242148 kubelet[2133]: I0209 08:57:40.239945    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb  9 08:57:40.242137 systemd[1]: var-lib-kubelet-pods-387d9380\x2d66bf\x2d47fa\x2d9603\x2d6f9f0e1b8b6d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb  9 08:57:40.243710 kubelet[2133]: I0209 08:57:40.243665    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 08:57:40.247820 systemd[1]: var-lib-kubelet-pods-387d9380\x2d66bf\x2d47fa\x2d9603\x2d6f9f0e1b8b6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfqwhd.mount: Deactivated successfully.
Feb  9 08:57:40.249569 kubelet[2133]: I0209 08:57:40.249516    2133 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-kube-api-access-fqwhd" (OuterVolumeSpecName: "kube-api-access-fqwhd") pod "387d9380-66bf-47fa-9603-6f9f0e1b8b6d" (UID: "387d9380-66bf-47fa-9603-6f9f0e1b8b6d"). InnerVolumeSpecName "kube-api-access-fqwhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb  9 08:57:40.333962 kubelet[2133]: I0209 08:57:40.333910    2133 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-clustermesh-secrets\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.333962 kubelet[2133]: I0209 08:57:40.333964    2133 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-xtables-lock\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.333962 kubelet[2133]: I0209 08:57:40.333985    2133 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cni-path\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.334218 kubelet[2133]: I0209 08:57:40.334003    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-run\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.334218 kubelet[2133]: I0209 08:57:40.334027    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-ipsec-secrets\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.334218 kubelet[2133]: I0209 08:57:40.334048    2133 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-hubble-tls\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.334218 kubelet[2133]: I0209 08:57:40.334066    2133 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fqwhd\" (UniqueName: \"kubernetes.io/projected/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-kube-api-access-fqwhd\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.334218 kubelet[2133]: I0209 08:57:40.334085    2133 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/387d9380-66bf-47fa-9603-6f9f0e1b8b6d-cilium-config-path\") on node \"ci-3510.3.2-e-7e5a76b0b8\" DevicePath \"\""
Feb  9 08:57:40.599808 kubelet[2133]: E0209 08:57:40.599765    2133 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb  9 08:57:41.052681 kubelet[2133]: I0209 08:57:41.052635    2133 scope.go:115] "RemoveContainer" containerID="e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379"
Feb  9 08:57:41.057986 env[1199]: time="2024-02-09T08:57:41.057636726Z" level=info msg="RemoveContainer for \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\""
Feb  9 08:57:41.060997 env[1199]: time="2024-02-09T08:57:41.060846041Z" level=info msg="RemoveContainer for \"e9dfb10fb4856f03bd982f81bd56ad65ae74cf8880b6319669e6fb5955737379\" returns successfully"
Feb  9 08:57:41.097454 kubelet[2133]: I0209 08:57:41.097416    2133 topology_manager.go:210] "Topology Admit Handler"
Feb  9 08:57:41.097742 kubelet[2133]: E0209 08:57:41.097723    2133 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="387d9380-66bf-47fa-9603-6f9f0e1b8b6d" containerName="mount-cgroup"
Feb  9 08:57:41.097842 kubelet[2133]: I0209 08:57:41.097831    2133 memory_manager.go:346] "RemoveStaleState removing state" podUID="387d9380-66bf-47fa-9603-6f9f0e1b8b6d" containerName="mount-cgroup"
Feb  9 08:57:41.239694 kubelet[2133]: I0209 08:57:41.239645    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-cilium-cgroup\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.240312 kubelet[2133]: I0209 08:57:41.240289    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-bpf-maps\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.240527 kubelet[2133]: I0209 08:57:41.240485    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-lib-modules\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.240691 kubelet[2133]: I0209 08:57:41.240668    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-cilium-config-path\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.240825 kubelet[2133]: I0209 08:57:41.240813    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxp8\" (UniqueName: \"kubernetes.io/projected/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-kube-api-access-zdxp8\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241065 kubelet[2133]: I0209 08:57:41.241051    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-hostproc\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241177 kubelet[2133]: I0209 08:57:41.241168    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-etc-cni-netd\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241277 kubelet[2133]: I0209 08:57:41.241268    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-clustermesh-secrets\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241461 kubelet[2133]: I0209 08:57:41.241451    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-cilium-ipsec-secrets\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241565 kubelet[2133]: I0209 08:57:41.241556    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-cilium-run\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241663 kubelet[2133]: I0209 08:57:41.241655    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-cni-path\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241763 kubelet[2133]: I0209 08:57:41.241754    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-xtables-lock\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241857 kubelet[2133]: I0209 08:57:41.241848    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-host-proc-sys-net\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.241992 kubelet[2133]: I0209 08:57:41.241981    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-host-proc-sys-kernel\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.242139 kubelet[2133]: I0209 08:57:41.242125    2133 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7-hubble-tls\") pod \"cilium-6mqdf\" (UID: \"7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7\") " pod="kube-system/cilium-6mqdf"
Feb  9 08:57:41.609399 kubelet[2133]: I0209 08:57:41.609342    2133 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=387d9380-66bf-47fa-9603-6f9f0e1b8b6d path="/var/lib/kubelet/pods/387d9380-66bf-47fa-9603-6f9f0e1b8b6d/volumes"
Feb  9 08:57:41.701452 kubelet[2133]: E0209 08:57:41.701406    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:41.703065 env[1199]: time="2024-02-09T08:57:41.702507857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mqdf,Uid:7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7,Namespace:kube-system,Attempt:0,}"
Feb  9 08:57:41.726570 env[1199]: time="2024-02-09T08:57:41.726442407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb  9 08:57:41.727460 env[1199]: time="2024-02-09T08:57:41.727410722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb  9 08:57:41.727630 env[1199]: time="2024-02-09T08:57:41.727604734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb  9 08:57:41.728062 env[1199]: time="2024-02-09T08:57:41.728026696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba pid=4261 runtime=io.containerd.runc.v2
Feb  9 08:57:41.790874 env[1199]: time="2024-02-09T08:57:41.790812083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mqdf,Uid:7cbf9255-2c3b-4e77-ae48-d2c2ce04e4c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\""
Feb  9 08:57:41.795062 kubelet[2133]: E0209 08:57:41.792083    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:41.801689 env[1199]: time="2024-02-09T08:57:41.800149772Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb  9 08:57:41.859221 env[1199]: time="2024-02-09T08:57:41.859109858Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8123685d37d2aba1050be3350643046584be67d9fe3f35b4627bf019fab9c0ab\""
Feb  9 08:57:41.861290 env[1199]: time="2024-02-09T08:57:41.860717923Z" level=info msg="StartContainer for \"8123685d37d2aba1050be3350643046584be67d9fe3f35b4627bf019fab9c0ab\""
Feb  9 08:57:41.991505 env[1199]: time="2024-02-09T08:57:41.991412130Z" level=info msg="StartContainer for \"8123685d37d2aba1050be3350643046584be67d9fe3f35b4627bf019fab9c0ab\" returns successfully"
Feb  9 08:57:42.026490 env[1199]: time="2024-02-09T08:57:42.026421885Z" level=info msg="shim disconnected" id=8123685d37d2aba1050be3350643046584be67d9fe3f35b4627bf019fab9c0ab
Feb  9 08:57:42.026490 env[1199]: time="2024-02-09T08:57:42.026483790Z" level=warning msg="cleaning up after shim disconnected" id=8123685d37d2aba1050be3350643046584be67d9fe3f35b4627bf019fab9c0ab namespace=k8s.io
Feb  9 08:57:42.026490 env[1199]: time="2024-02-09T08:57:42.026497260Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:42.036586 env[1199]: time="2024-02-09T08:57:42.036541020Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4347 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:42.058425 kubelet[2133]: E0209 08:57:42.057244    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:42.062041 env[1199]: time="2024-02-09T08:57:42.061992829Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb  9 08:57:42.109030 env[1199]: time="2024-02-09T08:57:42.108830473Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"96624d41976f983665e83467a32a71d0c9ff811a47691d33a5c3f4fea75f5f70\""
Feb  9 08:57:42.109797 env[1199]: time="2024-02-09T08:57:42.109753854Z" level=info msg="StartContainer for \"96624d41976f983665e83467a32a71d0c9ff811a47691d33a5c3f4fea75f5f70\""
Feb  9 08:57:42.202194 env[1199]: time="2024-02-09T08:57:42.202132716Z" level=info msg="StartContainer for \"96624d41976f983665e83467a32a71d0c9ff811a47691d33a5c3f4fea75f5f70\" returns successfully"
Feb  9 08:57:42.254627 env[1199]: time="2024-02-09T08:57:42.254577693Z" level=info msg="shim disconnected" id=96624d41976f983665e83467a32a71d0c9ff811a47691d33a5c3f4fea75f5f70
Feb  9 08:57:42.254960 env[1199]: time="2024-02-09T08:57:42.254937917Z" level=warning msg="cleaning up after shim disconnected" id=96624d41976f983665e83467a32a71d0c9ff811a47691d33a5c3f4fea75f5f70 namespace=k8s.io
Feb  9 08:57:42.255048 env[1199]: time="2024-02-09T08:57:42.255034167Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:42.269287 env[1199]: time="2024-02-09T08:57:42.269226830Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4410 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:43.064494 kubelet[2133]: E0209 08:57:43.064467    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:43.071179 env[1199]: time="2024-02-09T08:57:43.071127513Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb  9 08:57:43.088722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888037640.mount: Deactivated successfully.
Feb  9 08:57:43.103450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010133375.mount: Deactivated successfully.
Feb  9 08:57:43.109017 env[1199]: time="2024-02-09T08:57:43.108963538Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"580751a4722d07ef6b8972ccdecc8937516b47d22589cf215becca5cd86e38d4\""
Feb  9 08:57:43.111219 env[1199]: time="2024-02-09T08:57:43.111183919Z" level=info msg="StartContainer for \"580751a4722d07ef6b8972ccdecc8937516b47d22589cf215becca5cd86e38d4\""
Feb  9 08:57:43.203510 env[1199]: time="2024-02-09T08:57:43.203450112Z" level=info msg="StartContainer for \"580751a4722d07ef6b8972ccdecc8937516b47d22589cf215becca5cd86e38d4\" returns successfully"
Feb  9 08:57:43.232208 env[1199]: time="2024-02-09T08:57:43.232149379Z" level=info msg="shim disconnected" id=580751a4722d07ef6b8972ccdecc8937516b47d22589cf215becca5cd86e38d4
Feb  9 08:57:43.232559 env[1199]: time="2024-02-09T08:57:43.232530522Z" level=warning msg="cleaning up after shim disconnected" id=580751a4722d07ef6b8972ccdecc8937516b47d22589cf215becca5cd86e38d4 namespace=k8s.io
Feb  9 08:57:43.232687 env[1199]: time="2024-02-09T08:57:43.232669478Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:43.244389 env[1199]: time="2024-02-09T08:57:43.244294866Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4472 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:44.079437 kubelet[2133]: E0209 08:57:44.078582    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:44.094469 env[1199]: time="2024-02-09T08:57:44.091959872Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb  9 08:57:44.110774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176012589.mount: Deactivated successfully.
Feb  9 08:57:44.119893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006186759.mount: Deactivated successfully.
Feb  9 08:57:44.122515 env[1199]: time="2024-02-09T08:57:44.122444239Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6a9b75f42e128d1c5a355e01c5a8c5951228ea418212a9057defeb0d7d1c646\""
Feb  9 08:57:44.125745 env[1199]: time="2024-02-09T08:57:44.125700484Z" level=info msg="StartContainer for \"f6a9b75f42e128d1c5a355e01c5a8c5951228ea418212a9057defeb0d7d1c646\""
Feb  9 08:57:44.199480 env[1199]: time="2024-02-09T08:57:44.199424119Z" level=info msg="StartContainer for \"f6a9b75f42e128d1c5a355e01c5a8c5951228ea418212a9057defeb0d7d1c646\" returns successfully"
Feb  9 08:57:44.234035 env[1199]: time="2024-02-09T08:57:44.233981255Z" level=info msg="shim disconnected" id=f6a9b75f42e128d1c5a355e01c5a8c5951228ea418212a9057defeb0d7d1c646
Feb  9 08:57:44.234405 env[1199]: time="2024-02-09T08:57:44.234355441Z" level=warning msg="cleaning up after shim disconnected" id=f6a9b75f42e128d1c5a355e01c5a8c5951228ea418212a9057defeb0d7d1c646 namespace=k8s.io
Feb  9 08:57:44.234505 env[1199]: time="2024-02-09T08:57:44.234488195Z" level=info msg="cleaning up dead shim"
Feb  9 08:57:44.247015 env[1199]: time="2024-02-09T08:57:44.246965750Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:57:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4528 runtime=io.containerd.runc.v2\n"
Feb  9 08:57:45.081845 kubelet[2133]: E0209 08:57:45.081804    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:45.090405 env[1199]: time="2024-02-09T08:57:45.089915230Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb  9 08:57:45.112149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744953347.mount: Deactivated successfully.
Feb  9 08:57:45.126552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392856496.mount: Deactivated successfully.
Feb  9 08:57:45.135007 env[1199]: time="2024-02-09T08:57:45.134954110Z" level=info msg="CreateContainer within sandbox \"140bd05f748b42387a8e3515e6ed62d432c05cb4a64b723b46bf72f878466aba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19\""
Feb  9 08:57:45.136221 env[1199]: time="2024-02-09T08:57:45.136186118Z" level=info msg="StartContainer for \"2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19\""
Feb  9 08:57:45.217524 env[1199]: time="2024-02-09T08:57:45.217457958Z" level=info msg="StartContainer for \"2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19\" returns successfully"
Feb  9 08:57:45.860457 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb  9 08:57:46.088523 kubelet[2133]: E0209 08:57:46.088289    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:46.113644 kubelet[2133]: I0209 08:57:46.113496    2133 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6mqdf" podStartSLOduration=5.113454905 pod.CreationTimestamp="2024-02-09 08:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:57:46.113181468 +0000 UTC m=+151.056026047" watchObservedRunningTime="2024-02-09 08:57:46.113454905 +0000 UTC m=+151.056299483"
Feb  9 08:57:47.090758 kubelet[2133]: E0209 08:57:47.090728    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:47.692264 systemd[1]: run-containerd-runc-k8s.io-2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19-runc.vxDAFE.mount: Deactivated successfully.
Feb  9 08:57:48.095335 kubelet[2133]: E0209 08:57:48.095186    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:49.151274 systemd-networkd[1069]: lxc_health: Link UP
Feb  9 08:57:49.161781 systemd-networkd[1069]: lxc_health: Gained carrier
Feb  9 08:57:49.164532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb  9 08:57:49.706488 kubelet[2133]: E0209 08:57:49.705963    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:49.929275 systemd[1]: run-containerd-runc-k8s.io-2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19-runc.CHSqX9.mount: Deactivated successfully.
Feb  9 08:57:50.100574 kubelet[2133]: E0209 08:57:50.100322    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:51.102242 kubelet[2133]: E0209 08:57:51.102207    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb  9 08:57:51.130583 systemd-networkd[1069]: lxc_health: Gained IPv6LL
Feb  9 08:57:52.178385 systemd[1]: run-containerd-runc-k8s.io-2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19-runc.tdZQu8.mount: Deactivated successfully.
Feb  9 08:57:54.383210 systemd[1]: run-containerd-runc-k8s.io-2e9e63f2747b5f204a23902a10c47445e89f90e1420b6cb11c3528c658455d19-runc.RmJ6oV.mount: Deactivated successfully.
Feb  9 08:57:54.526146 sshd[4090]: pam_unix(sshd:session): session closed for user core
Feb  9 08:57:54.529345 systemd[1]: sshd@30-164.90.156.194:22-139.178.89.65:43994.service: Deactivated successfully.
Feb  9 08:57:54.530937 systemd[1]: session-29.scope: Deactivated successfully.
Feb  9 08:57:54.530946 systemd-logind[1179]: Session 29 logged out. Waiting for processes to exit.
Feb  9 08:57:54.532435 systemd-logind[1179]: Removed session 29.
Feb  9 08:57:54.607627 kubelet[2133]: E0209 08:57:54.607592    2133 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"