Jul 2 10:54:49.892133 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 10:54:49.892172 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:54:49.892191 kernel: BIOS-provided physical RAM map:
Jul 2 10:54:49.892200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 10:54:49.892209 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 10:54:49.892218 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 10:54:49.892228 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jul 2 10:54:49.892238 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jul 2 10:54:49.892247 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 2 10:54:49.892256 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 2 10:54:49.892269 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 10:54:49.892278 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 10:54:49.892288 kernel: NX (Execute Disable) protection: active
Jul 2 10:54:49.892297 kernel: SMBIOS 2.8 present.
Jul 2 10:54:49.892308 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jul 2 10:54:49.892318 kernel: Hypervisor detected: KVM
Jul 2 10:54:49.892332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 10:54:49.892342 kernel: kvm-clock: cpu 0, msr 4c192001, primary cpu clock
Jul 2 10:54:49.892352 kernel: kvm-clock: using sched offset of 4771315406 cycles
Jul 2 10:54:49.892362 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 10:54:49.892372 kernel: tsc: Detected 2799.998 MHz processor
Jul 2 10:54:49.892382 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 10:54:49.892393 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 10:54:49.892403 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jul 2 10:54:49.892413 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 10:54:49.892427 kernel: Using GB pages for direct mapping
Jul 2 10:54:49.892437 kernel: ACPI: Early table checksum verification disabled
Jul 2 10:54:49.892447 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jul 2 10:54:49.892457 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892467 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892477 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892487 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jul 2 10:54:49.892497 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892507 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892520 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892530 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:54:49.892540 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jul 2 10:54:49.892550 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jul 2 10:54:49.892560 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jul 2 10:54:49.892570 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jul 2 10:54:49.892585 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jul 2 10:54:49.892599 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jul 2 10:54:49.892610 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jul 2 10:54:49.892621 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 10:54:49.892631 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 10:54:49.892642 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 2 10:54:49.892652 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jul 2 10:54:49.892663 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 2 10:54:49.892677 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jul 2 10:54:49.892688 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 2 10:54:49.892698 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jul 2 10:54:49.892709 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 2 10:54:49.892719 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jul 2 10:54:49.892730 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 2 10:54:49.892740 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jul 2 10:54:49.892751 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 2 10:54:49.892761 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jul 2 10:54:49.892772 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 2 10:54:49.892786 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jul 2 10:54:49.892796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 10:54:49.892807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 10:54:49.892817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jul 2 10:54:49.892828 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jul 2 10:54:49.892839 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jul 2 10:54:49.892850 kernel: Zone ranges:
Jul 2 10:54:49.892860 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 10:54:49.892871 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jul 2 10:54:49.892885 kernel: Normal empty
Jul 2 10:54:49.892896 kernel: Movable zone start for each node
Jul 2 10:54:49.892906 kernel: Early memory node ranges
Jul 2 10:54:49.892917 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 10:54:49.892927 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jul 2 10:54:49.892938 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jul 2 10:54:49.895981 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 10:54:49.896001 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 10:54:49.896013 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jul 2 10:54:49.896030 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 10:54:49.896052 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 10:54:49.896064 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 10:54:49.896075 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 10:54:49.896086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 10:54:49.896097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 10:54:49.896107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 10:54:49.896118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 10:54:49.896129 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 10:54:49.896144 kernel: TSC deadline timer available
Jul 2 10:54:49.896155 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jul 2 10:54:49.896166 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 2 10:54:49.896177 kernel: Booting paravirtualized kernel on KVM
Jul 2 10:54:49.896187 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 10:54:49.896198 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 10:54:49.896209 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 10:54:49.896220 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 10:54:49.896231 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 10:54:49.896245 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Jul 2 10:54:49.896256 kernel: kvm-guest: PV spinlocks enabled
Jul 2 10:54:49.896266 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 10:54:49.896277 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jul 2 10:54:49.896288 kernel: Policy zone: DMA32
Jul 2 10:54:49.896300 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:54:49.896312 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 10:54:49.896322 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 10:54:49.896337 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 10:54:49.896348 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 10:54:49.896359 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 192524K reserved, 0K cma-reserved)
Jul 2 10:54:49.896370 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 10:54:49.896381 kernel: Kernel/User page tables isolation: enabled
Jul 2 10:54:49.896391 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 10:54:49.896402 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 10:54:49.896413 kernel: rcu: Hierarchical RCU implementation.
Jul 2 10:54:49.896424 kernel: rcu: RCU event tracing is enabled.
Jul 2 10:54:49.896439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 10:54:49.896450 kernel: Rude variant of Tasks RCU enabled.
Jul 2 10:54:49.896461 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 10:54:49.896472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 10:54:49.896482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 10:54:49.896493 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jul 2 10:54:49.896504 kernel: random: crng init done
Jul 2 10:54:49.896527 kernel: Console: colour VGA+ 80x25
Jul 2 10:54:49.896538 kernel: printk: console [tty0] enabled
Jul 2 10:54:49.896549 kernel: printk: console [ttyS0] enabled
Jul 2 10:54:49.896561 kernel: ACPI: Core revision 20210730
Jul 2 10:54:49.896572 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 10:54:49.896586 kernel: x2apic enabled
Jul 2 10:54:49.896598 kernel: Switched APIC routing to physical x2apic.
Jul 2 10:54:49.896609 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jul 2 10:54:49.896621 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jul 2 10:54:49.896632 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 10:54:49.896647 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 10:54:49.896659 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 10:54:49.896670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 10:54:49.896681 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 10:54:49.896692 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 10:54:49.896703 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 10:54:49.896714 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 10:54:49.896725 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 10:54:49.896736 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 10:54:49.896747 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 10:54:49.896758 kernel: MMIO Stale Data: Unknown: No mitigations
Jul 2 10:54:49.896773 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 2 10:54:49.896784 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 10:54:49.896795 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 10:54:49.896807 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 10:54:49.896817 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 10:54:49.896829 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 10:54:49.896840 kernel: Freeing SMP alternatives memory: 32K
Jul 2 10:54:49.896851 kernel: pid_max: default: 32768 minimum: 301
Jul 2 10:54:49.896862 kernel: LSM: Security Framework initializing
Jul 2 10:54:49.896872 kernel: SELinux: Initializing.
Jul 2 10:54:49.896884 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 10:54:49.896899 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 10:54:49.896910 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jul 2 10:54:49.896921 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jul 2 10:54:49.896932 kernel: signal: max sigframe size: 1776
Jul 2 10:54:49.896944 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 10:54:49.896979 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 10:54:49.896991 kernel: smp: Bringing up secondary CPUs ...
Jul 2 10:54:49.897002 kernel: x86: Booting SMP configuration:
Jul 2 10:54:49.897013 kernel: .... node #0, CPUs: #1
Jul 2 10:54:49.897030 kernel: kvm-clock: cpu 1, msr 4c192041, secondary cpu clock
Jul 2 10:54:49.897049 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 2 10:54:49.897061 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Jul 2 10:54:49.897073 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 10:54:49.897084 kernel: smpboot: Max logical packages: 16
Jul 2 10:54:49.897095 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jul 2 10:54:49.897106 kernel: devtmpfs: initialized
Jul 2 10:54:49.897117 kernel: x86/mm: Memory block size: 128MB
Jul 2 10:54:49.897128 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 10:54:49.897139 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 10:54:49.897155 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 10:54:49.897166 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 10:54:49.897178 kernel: audit: initializing netlink subsys (disabled)
Jul 2 10:54:49.897189 kernel: audit: type=2000 audit(1719917688.844:1): state=initialized audit_enabled=0 res=1
Jul 2 10:54:49.897200 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 10:54:49.897211 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 10:54:49.897222 kernel: cpuidle: using governor menu
Jul 2 10:54:49.897233 kernel: ACPI: bus type PCI registered
Jul 2 10:54:49.897244 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 10:54:49.897260 kernel: dca service started, version 1.12.1
Jul 2 10:54:49.897271 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 2 10:54:49.897282 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 2 10:54:49.897293 kernel: PCI: Using configuration type 1 for base access
Jul 2 10:54:49.897305 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 10:54:49.897316 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 10:54:49.897327 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 10:54:49.897338 kernel: ACPI: Added _OSI(Module Device)
Jul 2 10:54:49.897353 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 10:54:49.897364 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 10:54:49.897375 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 10:54:49.897387 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 10:54:49.897398 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 10:54:49.897409 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 10:54:49.897420 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 10:54:49.897431 kernel: ACPI: Interpreter enabled
Jul 2 10:54:49.897442 kernel: ACPI: PM: (supports S0 S5)
Jul 2 10:54:49.897453 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 10:54:49.897468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 10:54:49.897480 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 2 10:54:49.897491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 10:54:49.897752 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 10:54:49.897905 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 10:54:49.898132 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 10:54:49.898151 kernel: PCI host bridge to bus 0000:00
Jul 2 10:54:49.898314 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 10:54:49.898446 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 10:54:49.898577 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 10:54:49.898708 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 2 10:54:49.898840 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 2 10:54:49.898986 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jul 2 10:54:49.899135 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 10:54:49.899304 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 2 10:54:49.899460 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jul 2 10:54:49.899610 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jul 2 10:54:49.899754 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jul 2 10:54:49.899900 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jul 2 10:54:49.904111 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 10:54:49.904289 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.904444 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jul 2 10:54:49.904615 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.904775 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jul 2 10:54:49.904957 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.905126 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jul 2 10:54:49.905311 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.905460 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jul 2 10:54:49.905614 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.905760 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jul 2 10:54:49.905912 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.906087 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jul 2 10:54:49.906247 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.906402 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jul 2 10:54:49.906558 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 2 10:54:49.906704 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jul 2 10:54:49.906861 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 10:54:49.907023 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 2 10:54:49.907184 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jul 2 10:54:49.907339 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 2 10:54:49.907483 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jul 2 10:54:49.907636 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 10:54:49.907785 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 10:54:49.907931 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jul 2 10:54:49.912161 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jul 2 10:54:49.912329 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 2 10:54:49.912489 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 2 10:54:49.912648 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 2 10:54:49.912795 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jul 2 10:54:49.912938 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jul 2 10:54:49.913143 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 2 10:54:49.913289 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 2 10:54:49.913463 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jul 2 10:54:49.913614 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jul 2 10:54:49.913761 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 10:54:49.913904 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 10:54:49.914074 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:54:49.914235 kernel: pci_bus 0000:02: extended config space not accessible
Jul 2 10:54:49.914412 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jul 2 10:54:49.914570 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jul 2 10:54:49.914722 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 10:54:49.914872 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 10:54:49.915072 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 2 10:54:49.915228 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jul 2 10:54:49.915374 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 10:54:49.915536 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 10:54:49.915699 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 10:54:49.915881 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 2 10:54:49.916088 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 2 10:54:49.916246 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 10:54:49.916404 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 10:54:49.916559 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 10:54:49.916723 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 10:54:49.916896 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 10:54:49.917090 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 10:54:49.917259 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 10:54:49.917431 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 10:54:49.917594 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 10:54:49.917765 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 10:54:49.917937 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 10:54:49.918118 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 10:54:49.918273 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 10:54:49.918417 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 10:54:49.918561 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 10:54:49.918708 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 10:54:49.918855 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 10:54:49.926053 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 10:54:49.926077 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 10:54:49.926090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 10:54:49.926109 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 10:54:49.926120 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 10:54:49.926132 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 2 10:54:49.926144 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 2 10:54:49.926155 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 2 10:54:49.926167 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 2 10:54:49.926178 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 2 10:54:49.926190 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 2 10:54:49.926201 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 2 10:54:49.926216 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 2 10:54:49.926228 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 2 10:54:49.926239 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 2 10:54:49.926250 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 2 10:54:49.926262 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 2 10:54:49.926273 kernel: iommu: Default domain type: Translated
Jul 2 10:54:49.926285 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 10:54:49.926439 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 2 10:54:49.926587 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 10:54:49.926740 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 2 10:54:49.926758 kernel: vgaarb: loaded
Jul 2 10:54:49.926770 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 10:54:49.926782 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 10:54:49.926793 kernel: PTP clock support registered
Jul 2 10:54:49.926805 kernel: PCI: Using ACPI for IRQ routing
Jul 2 10:54:49.926816 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 10:54:49.926827 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 10:54:49.926844 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jul 2 10:54:49.926855 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 10:54:49.926867 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 10:54:49.926878 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 10:54:49.926890 kernel: pnp: PnP ACPI init
Jul 2 10:54:49.927107 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 2 10:54:49.927127 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 10:54:49.927139 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 10:54:49.927157 kernel: NET: Registered PF_INET protocol family
Jul 2 10:54:49.927169 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 10:54:49.927180 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 10:54:49.927192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 10:54:49.927203 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 10:54:49.927215 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 10:54:49.927226 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 10:54:49.927237 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 10:54:49.927249 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 10:54:49.927264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 10:54:49.927276 kernel: NET: Registered PF_XDP protocol family
Jul 2 10:54:49.927420 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jul 2 10:54:49.927565 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 2 10:54:49.927709 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 2 10:54:49.927852 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 2 10:54:49.928010 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 2 10:54:49.928175 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 2 10:54:49.928320 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 2 10:54:49.928463 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 2 10:54:49.928606 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 2 10:54:49.928749 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 2 10:54:49.928894 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 2 10:54:49.929069 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 2 10:54:49.929216 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 2 10:54:49.929359 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 2 10:54:49.929504 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 2 10:54:49.929647 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 2 10:54:49.929797 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 10:54:49.929963 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 10:54:49.930127 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 10:54:49.930272 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 2 10:54:49.930435 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 10:54:49.930588 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:54:49.930752 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 10:54:49.930898 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 2 10:54:49.931076 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 10:54:49.931222 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 10:54:49.931365 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 10:54:49.931508 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 2 10:54:49.931669 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 10:54:49.931814 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 10:54:49.931976 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 10:54:49.932136 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 2 10:54:49.932283 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 10:54:49.932429 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 10:54:49.932574 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 10:54:49.932733 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 2 10:54:49.932878 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 10:54:49.943643 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 10:54:49.943803 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 10:54:49.943969 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 2 10:54:49.944132 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 10:54:49.944279 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 10:54:49.944425 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 10:54:49.944580 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 2 10:54:49.944724 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 10:54:49.944868 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 10:54:49.945030 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 10:54:49.945190 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 2 10:54:49.945345 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 10:54:49.945493 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 10:54:49.945634 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 10:54:49.945768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 10:54:49.945902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 10:54:49.946055 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 2 10:54:49.946193 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 2 10:54:49.946330 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jul 2 10:54:49.946491 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 2 10:54:49.946634 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jul 2 10:54:49.946775 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:54:49.946926 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 2 10:54:49.947108 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jul 2 10:54:49.947253 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 2 10:54:49.947394
kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 2 10:54:49.947552 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 2 10:54:49.947694 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 2 10:54:49.947834 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 2 10:54:49.948012 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 2 10:54:49.948173 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 2 10:54:49.948313 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 2 10:54:49.948472 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 2 10:54:49.948648 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 2 10:54:49.948789 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 2 10:54:49.948973 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 2 10:54:49.949130 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 2 10:54:49.949270 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 2 10:54:49.949419 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 2 10:54:49.949568 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 2 10:54:49.949709 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 2 10:54:49.949870 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 2 10:54:49.953528 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 2 10:54:49.953678 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 2 10:54:49.953698 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 2 10:54:49.953711 kernel: PCI: CLS 0 bytes, default 64 Jul 2 10:54:49.953723 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 10:54:49.953743 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 2 10:54:49.953755 kernel: RAPL PMU: API 
unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 10:54:49.953768 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jul 2 10:54:49.953780 kernel: Initialise system trusted keyrings Jul 2 10:54:49.953793 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 10:54:49.953805 kernel: Key type asymmetric registered Jul 2 10:54:49.953817 kernel: Asymmetric key parser 'x509' registered Jul 2 10:54:49.953829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 10:54:49.953841 kernel: io scheduler mq-deadline registered Jul 2 10:54:49.953857 kernel: io scheduler kyber registered Jul 2 10:54:49.953869 kernel: io scheduler bfq registered Jul 2 10:54:49.954030 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 2 10:54:49.954191 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 2 10:54:49.954336 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.954483 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 2 10:54:49.954626 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 2 10:54:49.954778 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.954924 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 2 10:54:49.955098 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 2 10:54:49.955244 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.955390 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 2 10:54:49.955534 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 2 10:54:49.955684 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.955830 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 2 10:54:49.955986 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 2 10:54:49.956146 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.956298 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 2 10:54:49.956443 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 2 10:54:49.956594 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.956739 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 2 10:54:49.956882 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 2 10:54:49.957049 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.957198 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 2 10:54:49.957343 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 2 10:54:49.957504 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:54:49.957523 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 10:54:49.957536 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 2 10:54:49.957549 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 2 10:54:49.957561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 10:54:49.957573 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 10:54:49.957586 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 10:54:49.957598 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 10:54:49.957618 kernel: serio: 
i8042 AUX port at 0x60,0x64 irq 12 Jul 2 10:54:49.957762 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 2 10:54:49.957902 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 10:54:49.958073 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T10:54:49 UTC (1719917689) Jul 2 10:54:49.958214 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 10:54:49.958232 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 10:54:49.958245 kernel: intel_pstate: CPU model not supported Jul 2 10:54:49.958262 kernel: NET: Registered PF_INET6 protocol family Jul 2 10:54:49.958275 kernel: Segment Routing with IPv6 Jul 2 10:54:49.958287 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 10:54:49.958300 kernel: NET: Registered PF_PACKET protocol family Jul 2 10:54:49.958312 kernel: Key type dns_resolver registered Jul 2 10:54:49.958324 kernel: IPI shorthand broadcast: enabled Jul 2 10:54:49.958336 kernel: sched_clock: Marking stable (962592071, 214513033)->(1437094187, -259989083) Jul 2 10:54:49.958348 kernel: registered taskstats version 1 Jul 2 10:54:49.958361 kernel: Loading compiled-in X.509 certificates Jul 2 10:54:49.958372 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 10:54:49.958389 kernel: Key type .fscrypt registered Jul 2 10:54:49.958401 kernel: Key type fscrypt-provisioning registered Jul 2 10:54:49.958413 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 10:54:49.958425 kernel: ima: Allocated hash algorithm: sha1 Jul 2 10:54:49.958437 kernel: ima: No architecture policies found Jul 2 10:54:49.958449 kernel: clk: Disabling unused clocks Jul 2 10:54:49.958461 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 10:54:49.958473 kernel: Write protecting the kernel read-only data: 28672k Jul 2 10:54:49.958489 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 10:54:49.958501 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 10:54:49.958513 kernel: Run /init as init process Jul 2 10:54:49.958525 kernel: with arguments: Jul 2 10:54:49.958537 kernel: /init Jul 2 10:54:49.958548 kernel: with environment: Jul 2 10:54:49.958560 kernel: HOME=/ Jul 2 10:54:49.958572 kernel: TERM=linux Jul 2 10:54:49.958583 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 10:54:49.958605 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 10:54:49.958627 systemd[1]: Detected virtualization kvm. Jul 2 10:54:49.958641 systemd[1]: Detected architecture x86-64. Jul 2 10:54:49.958653 systemd[1]: Running in initrd. Jul 2 10:54:49.958666 systemd[1]: No hostname configured, using default hostname. Jul 2 10:54:49.958678 systemd[1]: Hostname set to . Jul 2 10:54:49.958691 systemd[1]: Initializing machine ID from VM UUID. Jul 2 10:54:49.958708 systemd[1]: Queued start job for default target initrd.target. Jul 2 10:54:49.958721 systemd[1]: Started systemd-ask-password-console.path. Jul 2 10:54:49.958733 systemd[1]: Reached target cryptsetup.target. Jul 2 10:54:49.958746 systemd[1]: Reached target paths.target. Jul 2 10:54:49.958758 systemd[1]: Reached target slices.target. 
Jul 2 10:54:49.958775 systemd[1]: Reached target swap.target. Jul 2 10:54:49.958787 systemd[1]: Reached target timers.target. Jul 2 10:54:49.958801 systemd[1]: Listening on iscsid.socket. Jul 2 10:54:49.958817 systemd[1]: Listening on iscsiuio.socket. Jul 2 10:54:49.958830 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 10:54:49.958846 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 10:54:49.958859 systemd[1]: Listening on systemd-journald.socket. Jul 2 10:54:49.958872 systemd[1]: Listening on systemd-networkd.socket. Jul 2 10:54:49.958885 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 10:54:49.958897 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 10:54:49.958910 systemd[1]: Reached target sockets.target. Jul 2 10:54:49.958923 systemd[1]: Starting kmod-static-nodes.service... Jul 2 10:54:49.958940 systemd[1]: Finished network-cleanup.service. Jul 2 10:54:49.965025 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 10:54:49.965052 systemd[1]: Starting systemd-journald.service... Jul 2 10:54:49.965066 systemd[1]: Starting systemd-modules-load.service... Jul 2 10:54:49.965080 systemd[1]: Starting systemd-resolved.service... Jul 2 10:54:49.965093 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 10:54:49.965106 systemd[1]: Finished kmod-static-nodes.service. Jul 2 10:54:49.965119 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 10:54:49.965142 systemd-journald[201]: Journal started Jul 2 10:54:49.965222 systemd-journald[201]: Runtime Journal (/run/log/journal/4b1d47f644aa46839914818b5d049019) is 4.7M, max 38.1M, 33.3M free. Jul 2 10:54:49.892990 systemd-modules-load[202]: Inserted module 'overlay' Jul 2 10:54:49.996734 kernel: Bridge firewalling registered Jul 2 10:54:49.996765 systemd[1]: Started systemd-resolved.service. 
Jul 2 10:54:49.996786 kernel: audit: type=1130 audit(1719917689.981:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:49.996815 kernel: SCSI subsystem initialized Jul 2 10:54:49.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:49.942865 systemd-resolved[203]: Positive Trust Anchors: Jul 2 10:54:50.003615 systemd[1]: Started systemd-journald.service. Jul 2 10:54:50.003641 kernel: audit: type=1130 audit(1719917689.996:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.003667 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 10:54:49.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:49.942885 systemd-resolved[203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:54:50.011397 kernel: device-mapper: uevent: version 1.0.3 Jul 2 10:54:50.011424 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 10:54:49.942929 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:54:50.023331 kernel: audit: type=1130 audit(1719917690.010:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.023358 kernel: audit: type=1130 audit(1719917690.016:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:49.946565 systemd-resolved[203]: Defaulting to hostname 'linux'. Jul 2 10:54:50.035063 kernel: audit: type=1130 audit(1719917690.022:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:54:50.035093 kernel: audit: type=1130 audit(1719917690.028:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:49.967622 systemd-modules-load[202]: Inserted module 'br_netfilter' Jul 2 10:54:50.010464 systemd-modules-load[202]: Inserted module 'dm_multipath' Jul 2 10:54:50.011395 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 10:54:50.016793 systemd[1]: Finished systemd-modules-load.service. Jul 2 10:54:50.024239 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 10:54:50.029813 systemd[1]: Reached target nss-lookup.target. Jul 2 10:54:50.036768 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 10:54:50.038835 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:54:50.043265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 10:54:50.055798 systemd[1]: Finished systemd-sysctl.service. Jul 2 10:54:50.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.062022 kernel: audit: type=1130 audit(1719917690.055:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.058076 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 10:54:50.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.067997 kernel: audit: type=1130 audit(1719917690.061:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.068319 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 10:54:50.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.070136 systemd[1]: Starting dracut-cmdline.service... Jul 2 10:54:50.088826 kernel: audit: type=1130 audit(1719917690.068:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.088851 dracut-cmdline[224]: dracut-dracut-053 Jul 2 10:54:50.088851 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 10:54:50.088851 dracut-cmdline[224]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 10:54:50.166079 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 10:54:50.185967 kernel: iscsi: registered transport (tcp) Jul 2 10:54:50.213815 kernel: iscsi: registered transport (qla4xxx) Jul 2 10:54:50.213875 kernel: QLogic iSCSI HBA Driver Jul 2 10:54:50.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.260025 systemd[1]: Finished dracut-cmdline.service. Jul 2 10:54:50.261834 systemd[1]: Starting dracut-pre-udev.service... Jul 2 10:54:50.319999 kernel: raid6: sse2x4 gen() 14746 MB/s Jul 2 10:54:50.338024 kernel: raid6: sse2x4 xor() 8482 MB/s Jul 2 10:54:50.355999 kernel: raid6: sse2x2 gen() 9925 MB/s Jul 2 10:54:50.374046 kernel: raid6: sse2x2 xor() 8291 MB/s Jul 2 10:54:50.392023 kernel: raid6: sse2x1 gen() 9955 MB/s Jul 2 10:54:50.410632 kernel: raid6: sse2x1 xor() 7579 MB/s Jul 2 10:54:50.410678 kernel: raid6: using algorithm sse2x4 gen() 14746 MB/s Jul 2 10:54:50.410696 kernel: raid6: .... xor() 8482 MB/s, rmw enabled Jul 2 10:54:50.411817 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 10:54:50.427976 kernel: xor: automatically using best checksumming function avx Jul 2 10:54:50.537987 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 10:54:50.549782 systemd[1]: Finished dracut-pre-udev.service. Jul 2 10:54:50.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.550000 audit: BPF prog-id=7 op=LOAD Jul 2 10:54:50.550000 audit: BPF prog-id=8 op=LOAD Jul 2 10:54:50.552424 systemd[1]: Starting systemd-udevd.service... Jul 2 10:54:50.568473 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 2 10:54:50.576079 systemd[1]: Started systemd-udevd.service. 
Jul 2 10:54:50.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.581472 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 10:54:50.597824 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jul 2 10:54:50.636593 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 10:54:50.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.638361 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 10:54:50.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:50.727653 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 10:54:50.808974 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 2 10:54:50.830012 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 10:54:50.856006 kernel: AVX version of gcm_enc/dec engaged. Jul 2 10:54:50.856073 kernel: AES CTR mode by8 optimization enabled Jul 2 10:54:50.856971 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 10:54:50.857002 kernel: GPT:17805311 != 125829119 Jul 2 10:54:50.857018 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 10:54:50.857043 kernel: GPT:17805311 != 125829119 Jul 2 10:54:50.857058 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 2 10:54:50.857074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:54:50.866978 kernel: ACPI: bus type USB registered Jul 2 10:54:50.867043 kernel: usbcore: registered new interface driver usbfs Jul 2 10:54:50.867062 kernel: usbcore: registered new interface driver hub Jul 2 10:54:50.867079 kernel: usbcore: registered new device driver usb Jul 2 10:54:50.870972 kernel: libata version 3.00 loaded. Jul 2 10:54:50.888059 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 10:54:51.029168 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Jul 2 10:54:51.029211 kernel: ahci 0000:00:1f.2: version 3.0 Jul 2 10:54:51.029499 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 2 10:54:51.029526 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 2 10:54:51.029791 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 2 10:54:51.029972 kernel: scsi host0: ahci Jul 2 10:54:51.030184 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 10:54:51.030356 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 2 10:54:51.030520 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 2 10:54:51.030699 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 10:54:51.030867 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 2 10:54:51.031064 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 2 10:54:51.031238 kernel: hub 1-0:1.0: USB hub found Jul 2 10:54:51.031443 kernel: hub 1-0:1.0: 4 ports detected Jul 2 10:54:51.031621 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jul 2 10:54:51.031876 kernel: scsi host1: ahci Jul 2 10:54:51.032105 kernel: hub 2-0:1.0: USB hub found Jul 2 10:54:51.032294 kernel: hub 2-0:1.0: 4 ports detected Jul 2 10:54:51.032472 kernel: scsi host2: ahci Jul 2 10:54:51.032643 kernel: scsi host3: ahci Jul 2 10:54:51.032817 kernel: scsi host4: ahci Jul 2 10:54:51.033030 kernel: scsi host5: ahci Jul 2 10:54:51.033221 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jul 2 10:54:51.033240 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jul 2 10:54:51.033256 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jul 2 10:54:51.033272 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jul 2 10:54:51.033287 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jul 2 10:54:51.033303 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jul 2 10:54:51.029862 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 10:54:51.041794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 10:54:51.050411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 10:54:51.056555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 10:54:51.058489 systemd[1]: Starting disk-uuid.service... Jul 2 10:54:51.068993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:54:51.071117 disk-uuid[528]: Primary Header is updated. Jul 2 10:54:51.071117 disk-uuid[528]: Secondary Entries is updated. Jul 2 10:54:51.071117 disk-uuid[528]: Secondary Header is updated. 
Jul 2 10:54:51.166622 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 2 10:54:51.240007 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.241970 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.248990 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.249049 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.251757 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.253293 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 10:54:51.305985 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 10:54:51.312924 kernel: usbcore: registered new interface driver usbhid Jul 2 10:54:51.312970 kernel: usbhid: USB HID core driver Jul 2 10:54:51.321981 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jul 2 10:54:51.322036 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 2 10:54:52.089988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:54:52.090660 disk-uuid[529]: The operation has completed successfully. Jul 2 10:54:52.150197 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 10:54:52.150327 systemd[1]: Finished disk-uuid.service. Jul 2 10:54:52.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.152147 systemd[1]: Starting verity-setup.service... Jul 2 10:54:52.173040 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 2 10:54:52.223065 systemd[1]: Found device dev-mapper-usr.device. 
Jul 2 10:54:52.225990 systemd[1]: Finished verity-setup.service. Jul 2 10:54:52.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.227542 systemd[1]: Mounting sysusr-usr.mount... Jul 2 10:54:52.316988 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 10:54:52.317379 systemd[1]: Mounted sysusr-usr.mount. Jul 2 10:54:52.318213 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 10:54:52.319158 systemd[1]: Starting ignition-setup.service... Jul 2 10:54:52.323198 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 10:54:52.338045 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 10:54:52.338091 kernel: BTRFS info (device vda6): using free space tree Jul 2 10:54:52.338108 kernel: BTRFS info (device vda6): has skinny extents Jul 2 10:54:52.351676 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 10:54:52.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.358988 systemd[1]: Finished ignition-setup.service. Jul 2 10:54:52.360560 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 10:54:52.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.461000 audit: BPF prog-id=9 op=LOAD Jul 2 10:54:52.460805 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 10:54:52.463639 systemd[1]: Starting systemd-networkd.service... 
Jul 2 10:54:52.511386 systemd-networkd[710]: lo: Link UP Jul 2 10:54:52.511403 systemd-networkd[710]: lo: Gained carrier Jul 2 10:54:52.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.514305 systemd-networkd[710]: Enumeration completed Jul 2 10:54:52.514874 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 10:54:52.515624 systemd[1]: Started systemd-networkd.service. Jul 2 10:54:52.516675 systemd[1]: Reached target network.target. Jul 2 10:54:52.516863 systemd-networkd[710]: eth0: Link UP Jul 2 10:54:52.516869 systemd-networkd[710]: eth0: Gained carrier Jul 2 10:54:52.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.519432 systemd[1]: Starting iscsiuio.service... Jul 2 10:54:52.539829 systemd[1]: Started iscsiuio.service. Jul 2 10:54:52.543246 systemd[1]: Starting iscsid.service... Jul 2 10:54:52.548364 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 10:54:52.548364 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 10:54:52.548364 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 10:54:52.548364 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jul 2 10:54:52.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.557295 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 10:54:52.557295 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 10:54:52.551078 systemd[1]: Started iscsid.service. Jul 2 10:54:52.555012 systemd[1]: Starting dracut-initqueue.service... Jul 2 10:54:52.558257 systemd-networkd[710]: eth0: DHCPv4 address 10.230.70.110/30, gateway 10.230.70.109 acquired from 10.230.70.109 Jul 2 10:54:52.567364 ignition[629]: Ignition 2.14.0 Jul 2 10:54:52.567389 ignition[629]: Stage: fetch-offline Jul 2 10:54:52.567504 ignition[629]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:52.567542 ignition[629]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:52.569244 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:52.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.570858 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 10:54:52.569392 ignition[629]: parsed url from cmdline: "" Jul 2 10:54:52.573475 systemd[1]: Starting ignition-fetch.service... 
Jul 2 10:54:52.569399 ignition[629]: no config URL provided Jul 2 10:54:52.569409 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 10:54:52.569424 ignition[629]: no config at "/usr/lib/ignition/user.ign" Jul 2 10:54:52.569432 ignition[629]: failed to fetch config: resource requires networking Jul 2 10:54:52.569745 ignition[629]: Ignition finished successfully Jul 2 10:54:52.584464 systemd[1]: Finished dracut-initqueue.service. Jul 2 10:54:52.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.585311 systemd[1]: Reached target remote-fs-pre.target. Jul 2 10:54:52.586339 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 10:54:52.587525 systemd[1]: Reached target remote-fs.target. Jul 2 10:54:52.590320 systemd[1]: Starting dracut-pre-mount.service... Jul 2 10:54:52.596576 ignition[721]: Ignition 2.14.0 Jul 2 10:54:52.596589 ignition[721]: Stage: fetch Jul 2 10:54:52.596791 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:52.596823 ignition[721]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:52.601575 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:52.601724 ignition[721]: parsed url from cmdline: "" Jul 2 10:54:52.601732 ignition[721]: no config URL provided Jul 2 10:54:52.601741 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 10:54:52.601757 ignition[721]: no config at "/usr/lib/ignition/user.ign" Jul 2 10:54:52.605165 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 10:54:52.605208 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jul 2 10:54:52.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.606128 systemd[1]: Finished dracut-pre-mount.service. Jul 2 10:54:52.608451 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 10:54:52.628093 ignition[721]: GET result: OK Jul 2 10:54:52.628743 ignition[721]: parsing config with SHA512: 3f078dfb2493a7cbee645ae42a7ea51c885ac5d1c1c1c80d2da476ca2676ba22aa893667198472d23322409b2316e0d61325ea09ca824ce7759f1df16c5bb3e4 Jul 2 10:54:52.636540 unknown[721]: fetched base config from "system" Jul 2 10:54:52.636563 unknown[721]: fetched base config from "system" Jul 2 10:54:52.637085 ignition[721]: fetch: fetch complete Jul 2 10:54:52.636572 unknown[721]: fetched user config from "openstack" Jul 2 10:54:52.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.637094 ignition[721]: fetch: fetch passed Jul 2 10:54:52.638546 systemd[1]: Finished ignition-fetch.service. Jul 2 10:54:52.637146 ignition[721]: Ignition finished successfully Jul 2 10:54:52.640397 systemd[1]: Starting ignition-kargs.service... 
Jul 2 10:54:52.651880 ignition[735]: Ignition 2.14.0 Jul 2 10:54:52.651893 ignition[735]: Stage: kargs Jul 2 10:54:52.652084 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:52.652118 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:52.653300 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:52.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.654806 ignition[735]: kargs: kargs passed Jul 2 10:54:52.655861 systemd[1]: Finished ignition-kargs.service. Jul 2 10:54:52.654865 ignition[735]: Ignition finished successfully Jul 2 10:54:52.657841 systemd[1]: Starting ignition-disks.service... Jul 2 10:54:52.667628 ignition[740]: Ignition 2.14.0 Jul 2 10:54:52.667645 ignition[740]: Stage: disks Jul 2 10:54:52.667798 ignition[740]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:52.667831 ignition[740]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:52.669067 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:52.670545 ignition[740]: disks: disks passed Jul 2 10:54:52.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.671411 systemd[1]: Finished ignition-disks.service. Jul 2 10:54:52.670607 ignition[740]: Ignition finished successfully Jul 2 10:54:52.672158 systemd[1]: Reached target initrd-root-device.target. 
Jul 2 10:54:52.673264 systemd[1]: Reached target local-fs-pre.target. Jul 2 10:54:52.674411 systemd[1]: Reached target local-fs.target. Jul 2 10:54:52.675633 systemd[1]: Reached target sysinit.target. Jul 2 10:54:52.676910 systemd[1]: Reached target basic.target. Jul 2 10:54:52.679414 systemd[1]: Starting systemd-fsck-root.service... Jul 2 10:54:52.698630 systemd-fsck[747]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 10:54:52.702349 systemd[1]: Finished systemd-fsck-root.service. Jul 2 10:54:52.703938 systemd[1]: Mounting sysroot.mount... Jul 2 10:54:52.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.714974 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 10:54:52.715204 systemd[1]: Mounted sysroot.mount. Jul 2 10:54:52.715890 systemd[1]: Reached target initrd-root-fs.target. Jul 2 10:54:52.718225 systemd[1]: Mounting sysroot-usr.mount... Jul 2 10:54:52.719323 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 10:54:52.720132 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 2 10:54:52.723063 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 10:54:52.723108 systemd[1]: Reached target ignition-diskful.target. Jul 2 10:54:52.726561 systemd[1]: Mounted sysroot-usr.mount. Jul 2 10:54:52.729583 systemd[1]: Starting initrd-setup-root.service... 
Jul 2 10:54:52.736118 initrd-setup-root[758]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 10:54:52.754838 initrd-setup-root[766]: cut: /sysroot/etc/group: No such file or directory Jul 2 10:54:52.762729 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 10:54:52.771642 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 10:54:52.836045 systemd[1]: Finished initrd-setup-root.service. Jul 2 10:54:52.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.837940 systemd[1]: Starting ignition-mount.service... Jul 2 10:54:52.843844 systemd[1]: Starting sysroot-boot.service... Jul 2 10:54:52.853103 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 10:54:52.864876 ignition[803]: INFO : Ignition 2.14.0 Jul 2 10:54:52.864876 ignition[803]: INFO : Stage: mount Jul 2 10:54:52.866432 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:52.866432 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:52.866432 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:52.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.869972 ignition[803]: INFO : mount: mount passed Jul 2 10:54:52.869972 ignition[803]: INFO : Ignition finished successfully Jul 2 10:54:52.867884 systemd[1]: Finished ignition-mount.service. 
Jul 2 10:54:52.878345 coreos-metadata[753]: Jul 02 10:54:52.878 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 10:54:52.881883 systemd[1]: Finished sysroot-boot.service. Jul 2 10:54:52.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.940776 coreos-metadata[753]: Jul 02 10:54:52.940 INFO Fetch successful Jul 2 10:54:52.941983 coreos-metadata[753]: Jul 02 10:54:52.941 INFO wrote hostname srv-f8jck.gb1.brightbox.com to /sysroot/etc/hostname Jul 2 10:54:52.945862 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 2 10:54:52.946036 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 10:54:52.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:52.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:53.247169 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 10:54:53.259993 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (811) Jul 2 10:54:53.264104 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 10:54:53.264152 kernel: BTRFS info (device vda6): using free space tree Jul 2 10:54:53.264171 kernel: BTRFS info (device vda6): has skinny extents Jul 2 10:54:53.270637 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 10:54:53.272374 systemd[1]: Starting ignition-files.service... 
Jul 2 10:54:53.292289 ignition[831]: INFO : Ignition 2.14.0 Jul 2 10:54:53.293316 ignition[831]: INFO : Stage: files Jul 2 10:54:53.294144 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:53.295091 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:53.297617 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:53.299931 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Jul 2 10:54:53.300820 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 10:54:53.300820 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 10:54:53.305605 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 10:54:53.306823 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 10:54:53.309215 unknown[831]: wrote ssh authorized keys file for user: core Jul 2 10:54:53.310128 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 10:54:53.313352 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 10:54:53.314517 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 10:54:54.081395 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 10:54:54.269789 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 10:54:54.271261 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 10:54:54.271261 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 10:54:54.536540 systemd-networkd[710]: eth0: Gained IPv6LL Jul 2 10:54:54.881663 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 10:54:55.230822 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 10:54:55.230822 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 10:54:55.236769 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 10:54:55.236769 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 10:54:55.236769 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 10:54:55.236769 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 10:54:55.241114 ignition[831]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 10:54:55.241114 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 10:54:55.717165 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 10:54:56.043155 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:179:919b:24:19ff:fee6:466e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:919b:24:19ff:fee6:466e/64 assigned by NDisc. Jul 2 10:54:56.043169 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jul 2 10:54:58.448425 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 10:54:58.450668 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 10:54:58.450668 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 10:54:58.450668 ignition[831]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 2 10:54:58.450668 ignition[831]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 10:54:58.454476 ignition[831]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 10:54:58.461006 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 10:54:58.462019 ignition[831]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 10:54:58.462019 ignition[831]: INFO : files: files passed Jul 2 10:54:58.462019 ignition[831]: INFO : Ignition finished successfully Jul 2 10:54:58.464374 systemd[1]: Finished ignition-files.service. 
Jul 2 10:54:58.473814 kernel: kauditd_printk_skb: 28 callbacks suppressed Jul 2 10:54:58.473872 kernel: audit: type=1130 audit(1719917698.465:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.467701 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 10:54:58.474441 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 10:54:58.475564 systemd[1]: Starting ignition-quench.service... Jul 2 10:54:58.483754 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 10:54:58.483892 systemd[1]: Finished ignition-quench.service. Jul 2 10:54:58.485250 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 10:54:58.495760 kernel: audit: type=1130 audit(1719917698.485:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.495804 kernel: audit: type=1131 audit(1719917698.485:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:54:58.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.486406 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 10:54:58.501791 kernel: audit: type=1130 audit(1719917698.495:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.496651 systemd[1]: Reached target ignition-complete.target. Jul 2 10:54:58.503461 systemd[1]: Starting initrd-parse-etc.service... Jul 2 10:54:58.520403 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 10:54:58.520550 systemd[1]: Finished initrd-parse-etc.service. Jul 2 10:54:58.531327 kernel: audit: type=1130 audit(1719917698.520:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.531361 kernel: audit: type=1131 audit(1719917698.520:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 10:54:58.522092 systemd[1]: Reached target initrd-fs.target. Jul 2 10:54:58.531924 systemd[1]: Reached target initrd.target. Jul 2 10:54:58.533205 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 10:54:58.534274 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 10:54:58.549390 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 10:54:58.568515 kernel: audit: type=1130 audit(1719917698.549:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.551049 systemd[1]: Starting initrd-cleanup.service... Jul 2 10:54:58.576067 systemd[1]: Stopped target nss-lookup.target. Jul 2 10:54:58.576809 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 10:54:58.578133 systemd[1]: Stopped target timers.target. Jul 2 10:54:58.579296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 10:54:58.585428 kernel: audit: type=1131 audit(1719917698.579:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.579510 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 10:54:58.580659 systemd[1]: Stopped target initrd.target. Jul 2 10:54:58.586272 systemd[1]: Stopped target basic.target. Jul 2 10:54:58.587389 systemd[1]: Stopped target ignition-complete.target. 
Jul 2 10:54:58.588623 systemd[1]: Stopped target ignition-diskful.target. Jul 2 10:54:58.589764 systemd[1]: Stopped target initrd-root-device.target. Jul 2 10:54:58.591130 systemd[1]: Stopped target remote-fs.target. Jul 2 10:54:58.592321 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 10:54:58.593492 systemd[1]: Stopped target sysinit.target. Jul 2 10:54:58.594711 systemd[1]: Stopped target local-fs.target. Jul 2 10:54:58.595778 systemd[1]: Stopped target local-fs-pre.target. Jul 2 10:54:58.596929 systemd[1]: Stopped target swap.target. Jul 2 10:54:58.605998 kernel: audit: type=1131 audit(1719917698.598:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.597967 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 10:54:58.598186 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 10:54:58.599314 systemd[1]: Stopped target cryptsetup.target. Jul 2 10:54:58.613422 kernel: audit: type=1131 audit(1719917698.607:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.606932 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 10:54:58.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:54:58.607309 systemd[1]: Stopped dracut-initqueue.service. Jul 2 10:54:58.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.608783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 10:54:58.609068 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 10:54:58.614355 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 10:54:58.614593 systemd[1]: Stopped ignition-files.service. Jul 2 10:54:58.629893 iscsid[715]: iscsid shutting down. Jul 2 10:54:58.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.617350 systemd[1]: Stopping ignition-mount.service... Jul 2 10:54:58.634062 ignition[869]: INFO : Ignition 2.14.0 Jul 2 10:54:58.634062 ignition[869]: INFO : Stage: umount Jul 2 10:54:58.634062 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:54:58.634062 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:54:58.618461 systemd[1]: Stopping iscsid.service... Jul 2 10:54:58.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:54:58.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.628597 systemd[1]: Stopping sysroot-boot.service... Jul 2 10:54:58.649524 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:54:58.649524 ignition[869]: INFO : umount: umount passed Jul 2 10:54:58.649524 ignition[869]: INFO : Ignition finished successfully Jul 2 10:54:58.629328 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 10:54:58.629669 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 10:54:58.630983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 10:54:58.631254 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 10:54:58.643312 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 10:54:58.643455 systemd[1]: Stopped iscsid.service. Jul 2 10:54:58.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.644558 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 2 10:54:58.644675 systemd[1]: Stopped ignition-mount.service. Jul 2 10:54:58.645744 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 10:54:58.645878 systemd[1]: Stopped ignition-disks.service. Jul 2 10:54:58.646533 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 10:54:58.646605 systemd[1]: Stopped ignition-kargs.service. Jul 2 10:54:58.647236 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 10:54:58.647290 systemd[1]: Stopped ignition-fetch.service. Jul 2 10:54:58.647892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 10:54:58.655373 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 10:54:58.656666 systemd[1]: Stopped target paths.target. Jul 2 10:54:58.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.658033 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 10:54:58.661148 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 10:54:58.662086 systemd[1]: Stopped target slices.target. Jul 2 10:54:58.663406 systemd[1]: Stopped target sockets.target. Jul 2 10:54:58.664606 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 10:54:58.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.664660 systemd[1]: Closed iscsid.socket. Jul 2 10:54:58.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 10:54:58.665829 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 10:54:58.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.665892 systemd[1]: Stopped ignition-setup.service. Jul 2 10:54:58.667169 systemd[1]: Stopping iscsiuio.service... Jul 2 10:54:58.672346 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 10:54:58.673082 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 10:54:58.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.673204 systemd[1]: Stopped iscsiuio.service. Jul 2 10:54:58.674318 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 10:54:58.674440 systemd[1]: Finished initrd-cleanup.service. Jul 2 10:54:58.675481 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 10:54:58.675594 systemd[1]: Stopped sysroot-boot.service. Jul 2 10:54:58.677302 systemd[1]: Stopped target network.target. Jul 2 10:54:58.678290 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 10:54:58.678341 systemd[1]: Closed iscsiuio.socket. Jul 2 10:54:58.679356 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 10:54:58.679411 systemd[1]: Stopped initrd-setup-root.service. Jul 2 10:54:58.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.680836 systemd[1]: Stopping systemd-networkd.service... Jul 2 10:54:58.682800 systemd[1]: Stopping systemd-resolved.service... 
Jul 2 10:54:58.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.686010 systemd-networkd[710]: eth0: DHCPv6 lease lost Jul 2 10:54:58.692000 audit: BPF prog-id=9 op=UNLOAD Jul 2 10:54:58.692000 audit: BPF prog-id=6 op=UNLOAD Jul 2 10:54:58.688713 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 10:54:58.689058 systemd[1]: Stopped systemd-networkd.service. Jul 2 10:54:58.690712 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 10:54:58.690855 systemd[1]: Stopped systemd-resolved.service. Jul 2 10:54:58.692878 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 10:54:58.693065 systemd[1]: Closed systemd-networkd.socket. Jul 2 10:54:58.695029 systemd[1]: Stopping network-cleanup.service... Jul 2 10:54:58.698809 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 10:54:58.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.698927 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 10:54:58.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.700302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 10:54:58.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.700387 systemd[1]: Stopped systemd-sysctl.service. Jul 2 10:54:58.701869 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 2 10:54:58.701971 systemd[1]: Stopped systemd-modules-load.service. Jul 2 10:54:58.703130 systemd[1]: Stopping systemd-udevd.service... Jul 2 10:54:58.705581 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 10:54:58.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.708601 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 10:54:58.708816 systemd[1]: Stopped systemd-udevd.service. Jul 2 10:54:58.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.710301 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 10:54:58.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.710434 systemd[1]: Stopped network-cleanup.service. Jul 2 10:54:58.711911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 10:54:58.711998 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 10:54:58.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.712601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 10:54:58.712646 systemd[1]: Closed systemd-udevd-kernel.socket. 
Jul 2 10:54:58.713234 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 10:54:58.713289 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 10:54:58.728487 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 10:54:58.728578 systemd[1]: Stopped dracut-cmdline.service. Jul 2 10:54:58.730477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 10:54:58.730535 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 10:54:58.737804 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 10:54:58.753814 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 10:54:58.753962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 10:54:58.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.755845 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 10:54:58.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.755920 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 10:54:58.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.756671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 10:54:58.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:54:58.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:58.756732 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 10:54:58.759245 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 10:54:58.759915 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 10:54:58.760070 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 10:54:58.766458 systemd[1]: Reached target initrd-switch-root.target. Jul 2 10:54:58.768676 systemd[1]: Starting initrd-switch-root.service... Jul 2 10:54:58.789385 systemd[1]: Switching root. Jul 2 10:54:58.812572 systemd-journald[201]: Journal stopped Jul 2 10:55:02.640559 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jul 2 10:55:02.640686 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 10:55:02.640716 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 10:55:02.640736 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 10:55:02.640754 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 10:55:02.640782 kernel: SELinux: policy capability open_perms=1 Jul 2 10:55:02.640807 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 10:55:02.640851 kernel: SELinux: policy capability always_check_network=0 Jul 2 10:55:02.640887 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 10:55:02.640907 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 10:55:02.640929 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 10:55:02.640967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 10:55:02.640988 systemd[1]: Successfully loaded SELinux policy in 57.831ms. 
Jul 2 10:55:02.641023 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.862ms. Jul 2 10:55:02.641046 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 10:55:02.641066 systemd[1]: Detected virtualization kvm. Jul 2 10:55:02.641096 systemd[1]: Detected architecture x86-64. Jul 2 10:55:02.641117 systemd[1]: Detected first boot. Jul 2 10:55:02.641142 systemd[1]: Hostname set to . Jul 2 10:55:02.641163 systemd[1]: Initializing machine ID from VM UUID. Jul 2 10:55:02.641187 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 10:55:02.641220 systemd[1]: Populated /etc with preset unit settings. Jul 2 10:55:02.641240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:55:02.641271 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:55:02.641310 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:55:02.641338 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 10:55:02.641358 systemd[1]: Stopped initrd-switch-root.service. Jul 2 10:55:02.641378 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 10:55:02.641403 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 10:55:02.641424 systemd[1]: Created slice system-addon\x2drun.slice. 
Jul 2 10:55:02.641443 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 10:55:02.641472 systemd[1]: Created slice system-getty.slice. Jul 2 10:55:02.641517 systemd[1]: Created slice system-modprobe.slice. Jul 2 10:55:02.641536 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 10:55:02.641555 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 10:55:02.641590 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 10:55:02.641614 systemd[1]: Created slice user.slice. Jul 2 10:55:02.641631 systemd[1]: Started systemd-ask-password-console.path. Jul 2 10:55:02.641649 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 10:55:02.641671 systemd[1]: Set up automount boot.automount. Jul 2 10:55:02.641717 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 10:55:02.641741 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 10:55:02.641760 systemd[1]: Stopped target initrd-fs.target. Jul 2 10:55:02.641797 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 10:55:02.641817 systemd[1]: Reached target integritysetup.target. Jul 2 10:55:02.641844 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 10:55:02.641876 systemd[1]: Reached target remote-fs.target. Jul 2 10:55:02.641897 systemd[1]: Reached target slices.target. Jul 2 10:55:02.641917 systemd[1]: Reached target swap.target. Jul 2 10:55:02.641936 systemd[1]: Reached target torcx.target. Jul 2 10:55:02.647597 systemd[1]: Reached target veritysetup.target. Jul 2 10:55:02.647627 systemd[1]: Listening on systemd-coredump.socket. Jul 2 10:55:02.647648 systemd[1]: Listening on systemd-initctl.socket. Jul 2 10:55:02.647667 systemd[1]: Listening on systemd-networkd.socket. Jul 2 10:55:02.647687 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 10:55:02.647722 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 10:55:02.647750 systemd[1]: Listening on systemd-userdbd.socket. 
Jul 2 10:55:02.647770 systemd[1]: Mounting dev-hugepages.mount... Jul 2 10:55:02.647790 systemd[1]: Mounting dev-mqueue.mount... Jul 2 10:55:02.647808 systemd[1]: Mounting media.mount... Jul 2 10:55:02.647843 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:55:02.647866 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 10:55:02.647891 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 10:55:02.647911 systemd[1]: Mounting tmp.mount... Jul 2 10:55:02.647966 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 10:55:02.649987 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:55:02.650023 systemd[1]: Starting kmod-static-nodes.service... Jul 2 10:55:02.650046 systemd[1]: Starting modprobe@configfs.service... Jul 2 10:55:02.650066 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:55:02.650085 systemd[1]: Starting modprobe@drm.service... Jul 2 10:55:02.650104 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:55:02.650122 systemd[1]: Starting modprobe@fuse.service... Jul 2 10:55:02.650151 systemd[1]: Starting modprobe@loop.service... Jul 2 10:55:02.650184 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 10:55:02.650206 kernel: fuse: init (API version 7.34) Jul 2 10:55:02.650225 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 10:55:02.650244 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 10:55:02.650263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 10:55:02.650282 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 10:55:02.650301 systemd[1]: Stopped systemd-journald.service. Jul 2 10:55:02.650320 kernel: loop: module loaded Jul 2 10:55:02.650339 systemd[1]: Starting systemd-journald.service... Jul 2 10:55:02.650357 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 10:55:02.650389 systemd[1]: Starting systemd-network-generator.service... Jul 2 10:55:02.650409 systemd[1]: Starting systemd-remount-fs.service... Jul 2 10:55:02.650428 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 10:55:02.650448 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 10:55:02.650467 systemd[1]: Stopped verity-setup.service. Jul 2 10:55:02.650486 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:55:02.650505 systemd[1]: Mounted dev-hugepages.mount. Jul 2 10:55:02.650525 systemd[1]: Mounted dev-mqueue.mount. Jul 2 10:55:02.650544 systemd[1]: Mounted media.mount. Jul 2 10:55:02.650574 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 10:55:02.650595 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 10:55:02.650614 systemd[1]: Mounted tmp.mount. Jul 2 10:55:02.650633 systemd[1]: Finished kmod-static-nodes.service. Jul 2 10:55:02.650657 systemd-journald[988]: Journal started Jul 2 10:55:02.650736 systemd-journald[988]: Runtime Journal (/run/log/journal/4b1d47f644aa46839914818b5d049019) is 4.7M, max 38.1M, 33.3M free. 
Jul 2 10:54:58.997000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 10:54:59.083000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 10:54:59.083000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 10:54:59.083000 audit: BPF prog-id=10 op=LOAD Jul 2 10:54:59.083000 audit: BPF prog-id=10 op=UNLOAD Jul 2 10:54:59.083000 audit: BPF prog-id=11 op=LOAD Jul 2 10:54:59.083000 audit: BPF prog-id=11 op=UNLOAD Jul 2 10:54:59.211000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 10:54:59.211000 audit[902]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000d0de0 a2=c0000d90c0 a3=32 items=0 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:54:59.211000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 10:54:59.213000 audit[902]: AVC avc: denied { associate } for pid=902 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 10:54:59.213000 audit[902]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=885 pid=902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:54:59.213000 audit: CWD cwd="/" Jul 2 10:54:59.213000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:54:59.213000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:54:59.213000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 10:55:02.394000 audit: BPF prog-id=12 op=LOAD Jul 2 10:55:02.394000 audit: BPF prog-id=3 op=UNLOAD Jul 2 10:55:02.395000 audit: BPF prog-id=13 op=LOAD Jul 2 10:55:02.395000 audit: BPF prog-id=14 op=LOAD Jul 2 10:55:02.395000 audit: BPF prog-id=4 op=UNLOAD Jul 2 10:55:02.395000 audit: BPF prog-id=5 op=UNLOAD Jul 2 10:55:02.397000 audit: BPF prog-id=15 op=LOAD Jul 2 10:55:02.397000 audit: BPF prog-id=12 op=UNLOAD Jul 2 10:55:02.397000 audit: BPF prog-id=16 op=LOAD Jul 2 10:55:02.397000 audit: BPF prog-id=17 op=LOAD Jul 2 10:55:02.397000 audit: BPF prog-id=13 op=UNLOAD Jul 2 10:55:02.397000 audit: BPF prog-id=14 op=UNLOAD Jul 2 10:55:02.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:02.653015 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 10:55:02.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.407000 audit: BPF prog-id=15 op=UNLOAD Jul 2 10:55:02.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:02.591000 audit: BPF prog-id=18 op=LOAD Jul 2 10:55:02.591000 audit: BPF prog-id=19 op=LOAD Jul 2 10:55:02.592000 audit: BPF prog-id=20 op=LOAD Jul 2 10:55:02.593000 audit: BPF prog-id=16 op=UNLOAD Jul 2 10:55:02.593000 audit: BPF prog-id=17 op=UNLOAD Jul 2 10:55:02.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.637000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 10:55:02.637000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdb05815b0 a2=4000 a3=7ffdb058164c items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:55:02.637000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 10:55:02.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:59.208426 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:55:02.392637 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 10:54:59.209051 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 10:55:02.392667 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 10:55:02.655040 systemd[1]: Finished modprobe@configfs.service. Jul 2 10:55:02.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:59.209088 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 10:55:02.399341 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 2 10:54:59.209138 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 10:54:59.209155 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 10:54:59.209204 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 10:54:59.209225 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 10:54:59.209547 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 10:54:59.209617 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 10:54:59.209643 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 10:54:59.211350 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 10:54:59.211405 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 10:54:59.211436 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" 
level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 10:54:59.211461 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 10:54:59.211491 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 10:55:02.657974 systemd[1]: Started systemd-journald.service. Jul 2 10:55:02.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:54:59.211514 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:54:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 10:55:01.842654 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:55:01.843357 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:55:01.843609 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:55:01.844003 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:55:01.844113 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 10:55:01.844243 /usr/lib/systemd/system-generators/torcx-generator[902]: time="2024-07-02T10:55:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 10:55:02.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:02.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.661202 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 10:55:02.662246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:55:02.662454 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:55:02.663437 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 10:55:02.663655 systemd[1]: Finished modprobe@drm.service. Jul 2 10:55:02.664612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:55:02.664795 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:55:02.665869 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 10:55:02.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:02.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.667163 systemd[1]: Finished modprobe@fuse.service. Jul 2 10:55:02.668196 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:55:02.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.668643 systemd[1]: Finished modprobe@loop.service. Jul 2 10:55:02.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.669865 systemd[1]: Finished systemd-modules-load.service. Jul 2 10:55:02.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.671211 systemd[1]: Finished systemd-network-generator.service. Jul 2 10:55:02.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.672304 systemd[1]: Finished systemd-remount-fs.service. Jul 2 10:55:02.673762 systemd[1]: Reached target network-pre.target. 
Jul 2 10:55:02.676513 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 10:55:02.683044 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 10:55:02.686080 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 10:55:02.688179 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 10:55:02.691066 systemd[1]: Starting systemd-journal-flush.service... Jul 2 10:55:02.691892 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:55:02.693557 systemd[1]: Starting systemd-random-seed.service... Jul 2 10:55:02.694281 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:55:02.699139 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:55:02.702232 systemd[1]: Starting systemd-sysusers.service... Jul 2 10:55:02.708811 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 10:55:02.710442 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 10:55:02.716055 systemd-journald[988]: Time spent on flushing to /var/log/journal/4b1d47f644aa46839914818b5d049019 is 63.095ms for 1298 entries. Jul 2 10:55:02.716055 systemd-journald[988]: System Journal (/var/log/journal/4b1d47f644aa46839914818b5d049019) is 8.0M, max 584.8M, 576.8M free. Jul 2 10:55:02.789354 systemd-journald[988]: Received client request to flush runtime journal. Jul 2 10:55:02.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:02.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.725710 systemd[1]: Finished systemd-random-seed.service. Jul 2 10:55:02.726615 systemd[1]: Reached target first-boot-complete.target. Jul 2 10:55:02.759317 systemd[1]: Finished systemd-sysusers.service. Jul 2 10:55:02.760431 systemd[1]: Finished systemd-sysctl.service. Jul 2 10:55:02.763237 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 10:55:02.790863 systemd[1]: Finished systemd-journal-flush.service. Jul 2 10:55:02.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.803421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 10:55:02.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.842365 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 10:55:02.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:02.844717 systemd[1]: Starting systemd-udev-settle.service... Jul 2 10:55:02.855078 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 10:55:03.342427 systemd[1]: Finished systemd-hwdb-update.service. 
Jul 2 10:55:03.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.343000 audit: BPF prog-id=21 op=LOAD Jul 2 10:55:03.343000 audit: BPF prog-id=22 op=LOAD Jul 2 10:55:03.343000 audit: BPF prog-id=7 op=UNLOAD Jul 2 10:55:03.343000 audit: BPF prog-id=8 op=UNLOAD Jul 2 10:55:03.345142 systemd[1]: Starting systemd-udevd.service... Jul 2 10:55:03.368635 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Jul 2 10:55:03.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.399000 audit: BPF prog-id=23 op=LOAD Jul 2 10:55:03.397231 systemd[1]: Started systemd-udevd.service. Jul 2 10:55:03.400291 systemd[1]: Starting systemd-networkd.service... Jul 2 10:55:03.407000 audit: BPF prog-id=24 op=LOAD Jul 2 10:55:03.408000 audit: BPF prog-id=25 op=LOAD Jul 2 10:55:03.408000 audit: BPF prog-id=26 op=LOAD Jul 2 10:55:03.409941 systemd[1]: Starting systemd-userdbd.service... Jul 2 10:55:03.453643 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 10:55:03.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.472165 systemd[1]: Started systemd-userdbd.service. Jul 2 10:55:03.474421 kernel: kauditd_printk_skb: 112 callbacks suppressed Jul 2 10:55:03.474477 kernel: audit: type=1130 audit(1719917703.471:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:03.569846 systemd-networkd[1021]: lo: Link UP Jul 2 10:55:03.570228 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 2 10:55:03.569863 systemd-networkd[1021]: lo: Gained carrier Jul 2 10:55:03.571163 systemd-networkd[1021]: Enumeration completed Jul 2 10:55:03.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.571302 systemd[1]: Started systemd-networkd.service. Jul 2 10:55:03.578108 kernel: audit: type=1130 audit(1719917703.570:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.590975 kernel: ACPI: button: Power Button [PWRF] Jul 2 10:55:03.608975 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 10:55:03.642819 systemd-networkd[1021]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 10:55:03.644006 systemd-networkd[1021]: eth0: Link UP Jul 2 10:55:03.644018 systemd-networkd[1021]: eth0: Gained carrier Jul 2 10:55:03.658115 systemd-networkd[1021]: eth0: DHCPv4 address 10.230.70.110/30, gateway 10.230.70.109 acquired from 10.230.70.109 Jul 2 10:55:03.640000 audit[1022]: AVC avc: denied { confidentiality } for pid=1022 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 10:55:03.673974 kernel: audit: type=1400 audit(1719917703.640:154): avc: denied { confidentiality } for pid=1022 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 10:55:03.681031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 2 10:55:03.640000 audit[1022]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ecbc6a2760 a1=3207c a2=7f5d21037bc5 a3=5 items=108 ppid=1015 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:55:03.702972 kernel: audit: type=1300 audit(1719917703.640:154): arch=c000003e syscall=175 success=yes exit=0 a0=55ecbc6a2760 a1=3207c a2=7f5d21037bc5 a3=5 items=108 ppid=1015 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:55:03.640000 audit: CWD cwd="/" Jul 2 10:55:03.640000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.712380 kernel: audit: type=1307 audit(1719917703.640:154): cwd="/" Jul 2 10:55:03.712464 kernel: audit: type=1302 audit(1719917703.640:154): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=1 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=2 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.725715 kernel: audit: type=1302 audit(1719917703.640:154): item=1 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 10:55:03.725778 kernel: audit: type=1302 audit(1719917703.640:154): item=2 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=3 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.739665 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jul 2 10:55:03.739721 kernel: audit: type=1302 audit(1719917703.640:154): item=3 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=4 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=5 name=(null) inode=15585 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=6 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=7 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=8 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=9 name=(null) inode=15587 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=10 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=11 name=(null) inode=15588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=12 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=13 name=(null) inode=15589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=14 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=15 name=(null) inode=15590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=16 name=(null) inode=15586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=17 name=(null) inode=15591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=18 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=19 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=20 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.742996 kernel: audit: type=1302 audit(1719917703.640:154): item=4 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=21 name=(null) inode=15593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=22 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=23 name=(null) inode=15594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=24 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=25 name=(null) inode=15595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=26 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=27 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=28 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=29 name=(null) inode=15597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=30 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=31 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=32 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=33 name=(null) inode=15599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=34 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=35 name=(null) inode=15600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=36 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=37 name=(null) inode=15601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=38 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=39 name=(null) inode=15602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=40 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=41 name=(null) inode=15603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=42 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=43 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=44 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 10:55:03.640000 audit: PATH item=45 name=(null) inode=15605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=46 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=47 name=(null) inode=15606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=48 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=49 name=(null) inode=15607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=50 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=51 name=(null) inode=15608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=52 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=53 name=(null) inode=15609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=54 
name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=55 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=56 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=57 name=(null) inode=15611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=58 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=59 name=(null) inode=15612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=60 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=61 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=62 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=63 name=(null) inode=15614 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=64 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=65 name=(null) inode=15615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=66 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=67 name=(null) inode=15616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=68 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=69 name=(null) inode=15617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=70 name=(null) inode=15613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=71 name=(null) inode=15618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=72 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=73 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=74 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=75 name=(null) inode=15620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=76 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=77 name=(null) inode=15621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=78 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=79 name=(null) inode=15622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=80 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=81 name=(null) inode=15623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=82 name=(null) inode=15619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=83 name=(null) inode=15624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=84 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=85 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=86 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=87 name=(null) inode=15626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=88 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=89 name=(null) inode=15627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=90 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 10:55:03.640000 audit: PATH item=91 name=(null) inode=15628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=92 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=93 name=(null) inode=15629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=94 name=(null) inode=15625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=95 name=(null) inode=15630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=96 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=97 name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=98 name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=99 name=(null) inode=15632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=100 
name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=101 name=(null) inode=15633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=102 name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=103 name=(null) inode=15634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=104 name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=105 name=(null) inode=15635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=106 name=(null) inode=15631 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PATH item=107 name=(null) inode=15636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:55:03.640000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 10:55:03.754977 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 2 10:55:03.757967 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 2 10:55:03.758201 kernel: i2c i2c-0: Memory type 0x07 not supported yet, 
not instantiating SPD Jul 2 10:55:03.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.877583 systemd[1]: Finished systemd-udev-settle.service. Jul 2 10:55:03.880053 systemd[1]: Starting lvm2-activation-early.service... Jul 2 10:55:03.904457 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:55:03.938728 systemd[1]: Finished lvm2-activation-early.service. Jul 2 10:55:03.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.939607 systemd[1]: Reached target cryptsetup.target. Jul 2 10:55:03.941927 systemd[1]: Starting lvm2-activation.service... Jul 2 10:55:03.947240 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:55:03.970123 systemd[1]: Finished lvm2-activation.service. Jul 2 10:55:03.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:03.970971 systemd[1]: Reached target local-fs-pre.target. Jul 2 10:55:03.971591 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 10:55:03.971645 systemd[1]: Reached target local-fs.target. Jul 2 10:55:03.972250 systemd[1]: Reached target machines.target. Jul 2 10:55:03.974623 systemd[1]: Starting ldconfig.service... Jul 2 10:55:03.975823 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 10:55:03.975878 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:55:03.977693 systemd[1]: Starting systemd-boot-update.service... Jul 2 10:55:03.980400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 10:55:03.988197 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 10:55:03.990661 systemd[1]: Starting systemd-sysext.service... Jul 2 10:55:03.991835 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Jul 2 10:55:03.993674 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 10:55:04.000117 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 10:55:03.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.024004 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 10:55:04.141093 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 10:55:04.141420 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 10:55:04.155715 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 10:55:04.156466 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 10:55:04.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:04.159005 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 10:55:04.182978 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 10:55:04.202995 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 10:55:04.219766 (sd-sysext)[1059]: Using extensions 'kubernetes'. Jul 2 10:55:04.222404 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Jul 2 10:55:04.222404 systemd-fsck[1055]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 10:55:04.223732 (sd-sysext)[1059]: Merged extensions into '/usr'. Jul 2 10:55:04.224738 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 10:55:04.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.227333 systemd[1]: Mounting boot.mount... Jul 2 10:55:04.252763 systemd[1]: Mounted boot.mount. Jul 2 10:55:04.263007 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:55:04.271115 systemd[1]: Mounting usr-share-oem.mount... Jul 2 10:55:04.273002 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.275925 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:55:04.279223 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:55:04.284447 systemd[1]: Starting modprobe@loop.service... Jul 2 10:55:04.285365 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.285696 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 10:55:04.285887 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:55:04.291143 systemd[1]: Finished systemd-boot-update.service. Jul 2 10:55:04.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.292134 systemd[1]: Mounted usr-share-oem.mount. Jul 2 10:55:04.293208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:55:04.293384 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:55:04.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.294971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:55:04.295354 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:55:04.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.296972 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:55:04.297144 systemd[1]: Finished modprobe@loop.service. 
Jul 2 10:55:04.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.298465 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:55:04.298613 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.299835 systemd[1]: Finished systemd-sysext.service. Jul 2 10:55:04.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.304282 systemd[1]: Starting ensure-sysext.service... Jul 2 10:55:04.306312 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 10:55:04.313398 systemd[1]: Reloading. Jul 2 10:55:04.339654 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 10:55:04.342579 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 10:55:04.347366 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 10:55:04.433221 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2024-07-02T10:55:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:55:04.435783 /usr/lib/systemd/system-generators/torcx-generator[1086]: time="2024-07-02T10:55:04Z" level=info msg="torcx already run" Jul 2 10:55:04.523891 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 10:55:04.589524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:55:04.589553 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:55:04.615161 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 10:55:04.685000 audit: BPF prog-id=27 op=LOAD Jul 2 10:55:04.685000 audit: BPF prog-id=18 op=UNLOAD Jul 2 10:55:04.685000 audit: BPF prog-id=28 op=LOAD Jul 2 10:55:04.685000 audit: BPF prog-id=29 op=LOAD Jul 2 10:55:04.685000 audit: BPF prog-id=19 op=UNLOAD Jul 2 10:55:04.685000 audit: BPF prog-id=20 op=UNLOAD Jul 2 10:55:04.688000 audit: BPF prog-id=30 op=LOAD Jul 2 10:55:04.688000 audit: BPF prog-id=23 op=UNLOAD Jul 2 10:55:04.690000 audit: BPF prog-id=31 op=LOAD Jul 2 10:55:04.690000 audit: BPF prog-id=32 op=LOAD Jul 2 10:55:04.690000 audit: BPF prog-id=21 op=UNLOAD Jul 2 10:55:04.690000 audit: BPF prog-id=22 op=UNLOAD Jul 2 10:55:04.691000 audit: BPF prog-id=33 op=LOAD Jul 2 10:55:04.691000 audit: BPF prog-id=24 op=UNLOAD Jul 2 10:55:04.691000 audit: BPF prog-id=34 op=LOAD Jul 2 10:55:04.691000 audit: BPF prog-id=35 op=LOAD Jul 2 10:55:04.691000 audit: BPF prog-id=25 op=UNLOAD Jul 2 10:55:04.691000 audit: BPF prog-id=26 op=UNLOAD Jul 2 10:55:04.698087 systemd[1]: Finished ldconfig.service. Jul 2 10:55:04.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.704376 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 10:55:04.710013 systemd[1]: Starting audit-rules.service... Jul 2 10:55:04.712518 systemd[1]: Starting clean-ca-certificates.service... Jul 2 10:55:04.721000 audit: BPF prog-id=36 op=LOAD Jul 2 10:55:04.720211 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 10:55:04.725000 audit: BPF prog-id=37 op=LOAD Jul 2 10:55:04.724302 systemd[1]: Starting systemd-resolved.service... 
Jul 2 10:55:04.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.730104 systemd[1]: Starting systemd-timesyncd.service... Jul 2 10:55:04.732412 systemd[1]: Starting systemd-update-utmp.service... Jul 2 10:55:04.734532 systemd[1]: Finished clean-ca-certificates.service. Jul 2 10:55:04.743674 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.746684 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:55:04.749372 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:55:04.753864 systemd[1]: Starting modprobe@loop.service... Jul 2 10:55:04.754577 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.754826 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:55:04.755051 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:55:04.756987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:55:04.757188 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:55:04.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:04.762141 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:55:04.764236 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.766764 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:55:04.767987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.768192 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:55:04.768381 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:55:04.769443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:55:04.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.771034 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:55:04.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:55:04.777616 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:55:04.777809 systemd[1]: Finished modprobe@loop.service. Jul 2 10:55:04.779138 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.780709 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:55:04.796000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.800830 systemd[1]: Starting modprobe@drm.service... Jul 2 10:55:04.801779 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.802122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:55:04.805189 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 10:55:04.806086 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:55:04.807620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:55:04.807820 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:55:04.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.809230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 10:55:04.809719 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:55:04.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.811936 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 10:55:04.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.814177 systemd[1]: Finished modprobe@drm.service. Jul 2 10:55:04.818606 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 10:55:04.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.824181 systemd[1]: Finished ensure-sysext.service. Jul 2 10:55:04.825168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 10:55:04.825324 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.827482 systemd[1]: Starting systemd-update-done.service... Jul 2 10:55:04.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.831204 systemd[1]: Finished systemd-update-utmp.service. Jul 2 10:55:04.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:55:04.836849 systemd[1]: Finished systemd-update-done.service. Jul 2 10:55:04.837000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 10:55:04.837000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd58c2e490 a2=420 a3=0 items=0 ppid=1134 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:55:04.837000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 10:55:04.839206 augenrules[1162]: No rules Jul 2 10:55:04.839759 systemd[1]: Finished audit-rules.service. Jul 2 10:55:04.876786 systemd[1]: Started systemd-timesyncd.service. Jul 2 10:55:04.877642 systemd[1]: Reached target time-set.target. Jul 2 10:55:04.890537 systemd-resolved[1140]: Positive Trust Anchors: Jul 2 10:55:04.891079 systemd-resolved[1140]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:55:04.891222 systemd-resolved[1140]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:55:04.900191 systemd-resolved[1140]: Using system hostname 'srv-f8jck.gb1.brightbox.com'. Jul 2 10:55:04.902801 systemd[1]: Started systemd-resolved.service. Jul 2 10:55:04.903578 systemd[1]: Reached target network.target. Jul 2 10:55:04.904171 systemd[1]: Reached target nss-lookup.target. Jul 2 10:55:04.904744 systemd[1]: Reached target sysinit.target. Jul 2 10:55:04.905418 systemd[1]: Started motdgen.path. Jul 2 10:55:04.906030 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 10:55:04.906935 systemd[1]: Started logrotate.timer. Jul 2 10:55:04.907645 systemd[1]: Started mdadm.timer. Jul 2 10:55:04.908248 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 10:55:04.908922 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 10:55:04.908994 systemd[1]: Reached target paths.target. Jul 2 10:55:04.909551 systemd[1]: Reached target timers.target. Jul 2 10:55:04.910658 systemd[1]: Listening on dbus.socket. Jul 2 10:55:04.912870 systemd[1]: Starting docker.socket... Jul 2 10:55:04.916822 systemd[1]: Listening on sshd.socket. Jul 2 10:55:04.917562 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 10:55:04.918159 systemd[1]: Listening on docker.socket. Jul 2 10:55:04.918886 systemd[1]: Reached target sockets.target. Jul 2 10:55:04.919482 systemd[1]: Reached target basic.target. Jul 2 10:55:04.920207 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.920256 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:55:04.921671 systemd[1]: Starting containerd.service... Jul 2 10:55:04.923672 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 10:55:04.926335 systemd[1]: Starting dbus.service... Jul 2 10:55:04.928402 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 10:55:04.932131 systemd[1]: Starting extend-filesystems.service... Jul 2 10:55:04.933677 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 10:55:04.938474 systemd[1]: Starting motdgen.service... Jul 2 10:55:04.942046 systemd[1]: Starting prepare-helm.service... Jul 2 10:55:04.947402 jq[1173]: false Jul 2 10:55:04.946446 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 10:55:04.948844 systemd[1]: Starting sshd-keygen.service... Jul 2 10:55:04.956403 systemd[1]: Starting systemd-logind.service... Jul 2 10:55:04.957118 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:55:04.957259 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 10:55:04.957934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 10:55:04.960714 systemd[1]: Starting update-engine.service... 
Jul 2 10:55:04.964765 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 10:55:04.971632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 10:55:04.971912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 10:55:04.989929 tar[1190]: linux-amd64/helm Jul 2 10:55:04.992568 jq[1187]: true Jul 2 10:55:05.010770 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 10:55:05.011044 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 10:55:05.029586 dbus-daemon[1172]: [system] SELinux support is enabled Jul 2 10:55:05.030271 systemd[1]: Started dbus.service. Jul 2 10:55:05.033391 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 10:55:05.033449 systemd[1]: Reached target system-config.target. Jul 2 10:55:05.034209 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 10:55:05.034243 systemd[1]: Reached target user-config.target. 
Jul 2 10:55:05.036271 dbus-daemon[1172]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1021 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 10:55:05.038659 extend-filesystems[1174]: Found loop1 Jul 2 10:55:05.044080 jq[1194]: true Jul 2 10:55:05.049103 extend-filesystems[1174]: Found vda Jul 2 10:55:05.050609 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 10:55:05.050806 extend-filesystems[1174]: Found vda1 Jul 2 10:55:05.051845 extend-filesystems[1174]: Found vda2 Jul 2 10:55:05.052582 extend-filesystems[1174]: Found vda3 Jul 2 10:55:05.055764 extend-filesystems[1174]: Found usr Jul 2 10:55:05.056616 extend-filesystems[1174]: Found vda4 Jul 2 10:55:05.058242 extend-filesystems[1174]: Found vda6 Jul 2 10:55:05.058242 extend-filesystems[1174]: Found vda7 Jul 2 10:55:05.058242 extend-filesystems[1174]: Found vda9 Jul 2 10:55:05.058242 extend-filesystems[1174]: Checking size of /dev/vda9 Jul 2 10:55:05.066318 systemd[1]: Starting systemd-hostnamed.service... Jul 2 10:55:05.084446 systemd[1]: Created slice system-sshd.slice. Jul 2 10:55:05.087323 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 10:55:05.087871 systemd[1]: Finished motdgen.service. Jul 2 10:55:05.100979 update_engine[1184]: I0702 10:55:05.100116 1184 main.cc:92] Flatcar Update Engine starting Jul 2 10:55:05.106159 systemd[1]: Started update-engine.service. Jul 2 10:55:05.106483 update_engine[1184]: I0702 10:55:05.106205 1184 update_check_scheduler.cc:74] Next update check in 6m16s Jul 2 10:55:05.109381 systemd[1]: Started locksmithd.service. 
Jul 2 10:55:05.114423 extend-filesystems[1174]: Resized partition /dev/vda9
Jul 2 10:55:05.126992 extend-filesystems[1227]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 10:55:05.139985 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jul 2 10:55:05.141543 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 10:55:05.141588 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 10:55:05.167189 env[1191]: time="2024-07-02T10:55:05.167086765Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 10:55:05.209194 systemd-logind[1183]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 2 10:55:05.209235 systemd-logind[1183]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 10:55:05.209519 systemd-logind[1183]: New seat seat0.
Jul 2 10:55:05.210373 bash[1229]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 10:55:05.210834 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 10:55:05.212743 systemd[1]: Started systemd-logind.service.
Jul 2 10:55:05.250236 env[1191]: time="2024-07-02T10:55:05.250180351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 10:55:05.250436 env[1191]: time="2024-07-02T10:55:05.250404438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.253572 env[1191]: time="2024-07-02T10:55:05.253520145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 10:55:05.253572 env[1191]: time="2024-07-02T10:55:05.253562588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.254501 env[1191]: time="2024-07-02T10:55:05.254354586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 10:55:05.254501 env[1191]: time="2024-07-02T10:55:05.254403870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.254501 env[1191]: time="2024-07-02T10:55:05.254425091Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 10:55:05.254501 env[1191]: time="2024-07-02T10:55:05.254440893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.254697 env[1191]: time="2024-07-02T10:55:05.254573933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.255051 env[1191]: time="2024-07-02T10:55:05.255020348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 10:55:05.255252 env[1191]: time="2024-07-02T10:55:05.255200180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 10:55:05.255252 env[1191]: time="2024-07-02T10:55:05.255234231Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 10:55:05.255389 env[1191]: time="2024-07-02T10:55:05.255321340Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 10:55:05.255389 env[1191]: time="2024-07-02T10:55:05.255341658Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 10:55:05.269407 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 10:55:05.269593 systemd[1]: Started systemd-hostnamed.service.
Jul 2 10:55:05.271095 dbus-daemon[1172]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1210 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 10:55:05.274778 systemd[1]: Starting polkit.service...
Jul 2 10:55:05.288602 env[1191]: time="2024-07-02T10:55:05.288524160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 10:55:05.288748 env[1191]: time="2024-07-02T10:55:05.288602723Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 10:55:05.288748 env[1191]: time="2024-07-02T10:55:05.288670364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 10:55:05.288857 env[1191]: time="2024-07-02T10:55:05.288775514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.288960 env[1191]: time="2024-07-02T10:55:05.288907562Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289030 env[1191]: time="2024-07-02T10:55:05.288940556Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289030 env[1191]: time="2024-07-02T10:55:05.288988127Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289030 env[1191]: time="2024-07-02T10:55:05.289012085Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289182 env[1191]: time="2024-07-02T10:55:05.289048641Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289182 env[1191]: time="2024-07-02T10:55:05.289072310Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289182 env[1191]: time="2024-07-02T10:55:05.289092963Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.289182 env[1191]: time="2024-07-02T10:55:05.289136674Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 10:55:05.289405 env[1191]: time="2024-07-02T10:55:05.289349291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 10:55:05.289608 env[1191]: time="2024-07-02T10:55:05.289560225Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 10:55:05.290091 env[1191]: time="2024-07-02T10:55:05.290054184Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 10:55:05.290154 env[1191]: time="2024-07-02T10:55:05.290125134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290201 env[1191]: time="2024-07-02T10:55:05.290151161Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 10:55:05.290378 env[1191]: time="2024-07-02T10:55:05.290326046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290378 env[1191]: time="2024-07-02T10:55:05.290373037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290518 env[1191]: time="2024-07-02T10:55:05.290398390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290518 env[1191]: time="2024-07-02T10:55:05.290442486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290518 env[1191]: time="2024-07-02T10:55:05.290464584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290518 env[1191]: time="2024-07-02T10:55:05.290482368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290679 env[1191]: time="2024-07-02T10:55:05.290516997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290679 env[1191]: time="2024-07-02T10:55:05.290537991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290679 env[1191]: time="2024-07-02T10:55:05.290559474Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 10:55:05.290922 env[1191]: time="2024-07-02T10:55:05.290866829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.290922 env[1191]: time="2024-07-02T10:55:05.290892315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.291097 env[1191]: time="2024-07-02T10:55:05.290930185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.291097 env[1191]: time="2024-07-02T10:55:05.290996519Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 10:55:05.291097 env[1191]: time="2024-07-02T10:55:05.291024003Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 10:55:05.291097 env[1191]: time="2024-07-02T10:55:05.291041504Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 10:55:05.291271 env[1191]: time="2024-07-02T10:55:05.291102635Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 10:55:05.291271 env[1191]: time="2024-07-02T10:55:05.291180530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 10:55:05.291611 env[1191]: time="2024-07-02T10:55:05.291511042Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.291636078Z" level=info msg="Connect containerd service"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.291722233Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293422590Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293592745Z" level=info msg="Start subscribing containerd event"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293676950Z" level=info msg="Start recovering state"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293803897Z" level=info msg="Start event monitor"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293835475Z" level=info msg="Start snapshots syncer"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293856113Z" level=info msg="Start cni network conf syncer for default"
Jul 2 10:55:05.295076 env[1191]: time="2024-07-02T10:55:05.293871868Z" level=info msg="Start streaming server"
Jul 2 10:55:05.296313 polkitd[1234]: Started polkitd version 121
Jul 2 10:55:05.296832 env[1191]: time="2024-07-02T10:55:05.296781140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 10:55:05.297604 env[1191]: time="2024-07-02T10:55:05.296996626Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 10:55:05.308969 systemd[1]: Started containerd.service.
Jul 2 10:55:05.311115 env[1191]: time="2024-07-02T10:55:05.310447716Z" level=info msg="containerd successfully booted in 0.154324s"
Jul 2 10:55:05.312792 polkitd[1234]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 10:55:05.312890 polkitd[1234]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 10:55:05.316990 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jul 2 10:55:05.320743 polkitd[1234]: Finished loading, compiling and executing 2 rules
Jul 2 10:55:05.322449 systemd[1]: Started polkit.service.
Jul 2 10:55:05.322279 dbus-daemon[1172]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 10:55:05.323408 polkitd[1234]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 10:55:05.337495 extend-filesystems[1227]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 2 10:55:05.337495 extend-filesystems[1227]: old_desc_blocks = 1, new_desc_blocks = 8
Jul 2 10:55:05.337495 extend-filesystems[1227]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jul 2 10:55:05.342309 extend-filesystems[1174]: Resized filesystem in /dev/vda9
Jul 2 10:55:05.338042 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 10:55:05.338297 systemd[1]: Finished extend-filesystems.service.
Jul 2 10:55:05.350612 systemd-hostnamed[1210]: Hostname set to (static)
Jul 2 10:55:06.098031 systemd-timesyncd[1141]: Contacted time server 92.53.243.22:123 (0.flatcar.pool.ntp.org).
Jul 2 10:55:06.098112 systemd-timesyncd[1141]: Initial clock synchronization to Tue 2024-07-02 10:55:06.097799 UTC.
Jul 2 10:55:06.098478 systemd-resolved[1140]: Clock change detected. Flushing caches.
Jul 2 10:55:06.223259 systemd-networkd[1021]: eth0: Gained IPv6LL
Jul 2 10:55:06.226908 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 10:55:06.227996 systemd[1]: Reached target network-online.target.
Jul 2 10:55:06.231644 systemd[1]: Starting kubelet.service...
Jul 2 10:55:06.483758 tar[1190]: linux-amd64/LICENSE
Jul 2 10:55:06.484135 tar[1190]: linux-amd64/README.md
Jul 2 10:55:06.490682 systemd[1]: Finished prepare-helm.service.
Jul 2 10:55:06.722465 locksmithd[1218]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 10:55:07.175553 systemd[1]: Started kubelet.service.
Jul 2 10:55:07.341964 systemd-networkd[1021]: eth0: Ignoring DHCPv6 address 2a02:1348:179:919b:24:19ff:fee6:466e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:919b:24:19ff:fee6:466e/64 assigned by NDisc.
Jul 2 10:55:07.341978 systemd-networkd[1021]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jul 2 10:55:07.931289 kubelet[1254]: E0702 10:55:07.931138    1254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:55:07.934300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:55:07.934573 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:55:07.935137 systemd[1]: kubelet.service: Consumed 1.060s CPU time.
Jul 2 10:55:07.995147 sshd_keygen[1193]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 10:55:08.021292 systemd[1]: Finished sshd-keygen.service.
Jul 2 10:55:08.024639 systemd[1]: Starting issuegen.service...
Jul 2 10:55:08.027148 systemd[1]: Started sshd@0-10.230.70.110:22-147.75.109.163:56144.service.
Jul 2 10:55:08.034632 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 10:55:08.034892 systemd[1]: Finished issuegen.service.
Jul 2 10:55:08.037547 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 10:55:08.048717 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 10:55:08.051467 systemd[1]: Started getty@tty1.service.
Jul 2 10:55:08.054272 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 10:55:08.056351 systemd[1]: Reached target getty.target.
Jul 2 10:55:08.907893 sshd[1269]: Accepted publickey for core from 147.75.109.163 port 56144 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:08.910231 sshd[1269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:08.934930 systemd[1]: Created slice user-500.slice.
Jul 2 10:55:08.938101 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 10:55:08.945820 systemd-logind[1183]: New session 1 of user core.
Jul 2 10:55:08.954338 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 10:55:08.958314 systemd[1]: Starting user@500.service...
Jul 2 10:55:08.963683 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:09.061974 systemd[1278]: Queued start job for default target default.target.
Jul 2 10:55:09.063941 systemd[1278]: Reached target paths.target.
Jul 2 10:55:09.064103 systemd[1278]: Reached target sockets.target.
Jul 2 10:55:09.064280 systemd[1278]: Reached target timers.target.
Jul 2 10:55:09.064452 systemd[1278]: Reached target basic.target.
Jul 2 10:55:09.064737 systemd[1]: Started user@500.service.
Jul 2 10:55:09.068108 systemd[1278]: Reached target default.target.
Jul 2 10:55:09.068319 systemd[1]: Started session-1.scope.
Jul 2 10:55:09.068777 systemd[1278]: Startup finished in 96ms.
Jul 2 10:55:09.688117 systemd[1]: Started sshd@1-10.230.70.110:22-147.75.109.163:56152.service.
Jul 2 10:55:10.607930 sshd[1287]: Accepted publickey for core from 147.75.109.163 port 56152 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:10.610414 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:10.616931 systemd-logind[1183]: New session 2 of user core.
Jul 2 10:55:10.617671 systemd[1]: Started session-2.scope.
Jul 2 10:55:11.216951 sshd[1287]: pam_unix(sshd:session): session closed for user core
Jul 2 10:55:11.221822 systemd-logind[1183]: Session 2 logged out. Waiting for processes to exit.
Jul 2 10:55:11.222381 systemd[1]: sshd@1-10.230.70.110:22-147.75.109.163:56152.service: Deactivated successfully.
Jul 2 10:55:11.223301 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 10:55:11.224342 systemd-logind[1183]: Removed session 2.
Jul 2 10:55:11.359287 systemd[1]: Started sshd@2-10.230.70.110:22-147.75.109.163:56164.service.
Jul 2 10:55:12.224142 sshd[1293]: Accepted publickey for core from 147.75.109.163 port 56164 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:12.227128 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:12.233551 systemd-logind[1183]: New session 3 of user core.
Jul 2 10:55:12.234282 systemd[1]: Started session-3.scope.
Jul 2 10:55:12.743353 coreos-metadata[1171]: Jul 02 10:55:12.743 WARN failed to locate config-drive, using the metadata service API instead
Jul 2 10:55:12.798742 coreos-metadata[1171]: Jul 02 10:55:12.798 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 2 10:55:12.833548 sshd[1293]: pam_unix(sshd:session): session closed for user core
Jul 2 10:55:12.837102 systemd[1]: sshd@2-10.230.70.110:22-147.75.109.163:56164.service: Deactivated successfully.
Jul 2 10:55:12.838193 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 10:55:12.838976 systemd-logind[1183]: Session 3 logged out. Waiting for processes to exit.
Jul 2 10:55:12.840007 systemd-logind[1183]: Removed session 3.
Jul 2 10:55:12.857514 coreos-metadata[1171]: Jul 02 10:55:12.857 INFO Fetch successful
Jul 2 10:55:12.857962 coreos-metadata[1171]: Jul 02 10:55:12.857 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 10:55:12.908018 coreos-metadata[1171]: Jul 02 10:55:12.907 INFO Fetch successful
Jul 2 10:55:12.910508 unknown[1171]: wrote ssh authorized keys file for user: core
Jul 2 10:55:12.923517 update-ssh-keys[1300]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 10:55:12.924596 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 10:55:12.925804 systemd[1]: Reached target multi-user.target.
Jul 2 10:55:12.928578 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 10:55:12.937967 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 10:55:12.938182 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 10:55:12.944454 systemd[1]: Startup finished in 1.102s (kernel) + 9.273s (initrd) + 13.335s (userspace) = 23.711s.
Jul 2 10:55:18.186125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 10:55:18.186487 systemd[1]: Stopped kubelet.service.
Jul 2 10:55:18.186555 systemd[1]: kubelet.service: Consumed 1.060s CPU time.
Jul 2 10:55:18.188765 systemd[1]: Starting kubelet.service...
Jul 2 10:55:18.316650 systemd[1]: Started kubelet.service.
Jul 2 10:55:18.437074 kubelet[1306]: E0702 10:55:18.436811    1306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:55:18.442270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:55:18.442532 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:55:22.980829 systemd[1]: Started sshd@3-10.230.70.110:22-147.75.109.163:34656.service.
Jul 2 10:55:23.853907 sshd[1314]: Accepted publickey for core from 147.75.109.163 port 34656 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:23.855795 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:23.863525 systemd[1]: Started session-4.scope.
Jul 2 10:55:23.864080 systemd-logind[1183]: New session 4 of user core.
Jul 2 10:55:24.463153 sshd[1314]: pam_unix(sshd:session): session closed for user core
Jul 2 10:55:24.466924 systemd[1]: sshd@3-10.230.70.110:22-147.75.109.163:34656.service: Deactivated successfully.
Jul 2 10:55:24.467830 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 10:55:24.468629 systemd-logind[1183]: Session 4 logged out. Waiting for processes to exit.
Jul 2 10:55:24.469968 systemd-logind[1183]: Removed session 4.
Jul 2 10:55:24.606807 systemd[1]: Started sshd@4-10.230.70.110:22-147.75.109.163:34664.service.
Jul 2 10:55:25.474715 sshd[1320]: Accepted publickey for core from 147.75.109.163 port 34664 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:25.476510 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:25.483531 systemd[1]: Started session-5.scope.
Jul 2 10:55:25.484253 systemd-logind[1183]: New session 5 of user core.
Jul 2 10:55:26.076872 sshd[1320]: pam_unix(sshd:session): session closed for user core
Jul 2 10:55:26.080280 systemd[1]: sshd@4-10.230.70.110:22-147.75.109.163:34664.service: Deactivated successfully.
Jul 2 10:55:26.081196 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 10:55:26.081866 systemd-logind[1183]: Session 5 logged out. Waiting for processes to exit.
Jul 2 10:55:26.082864 systemd-logind[1183]: Removed session 5.
Jul 2 10:55:26.217692 systemd[1]: Started sshd@5-10.230.70.110:22-147.75.109.163:34680.service.
Jul 2 10:55:27.077213 sshd[1326]: Accepted publickey for core from 147.75.109.163 port 34680 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:27.079591 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:27.086295 systemd[1]: Started session-6.scope.
Jul 2 10:55:27.087205 systemd-logind[1183]: New session 6 of user core.
Jul 2 10:55:27.679720 sshd[1326]: pam_unix(sshd:session): session closed for user core
Jul 2 10:55:27.682993 systemd[1]: sshd@5-10.230.70.110:22-147.75.109.163:34680.service: Deactivated successfully.
Jul 2 10:55:27.683874 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 10:55:27.684667 systemd-logind[1183]: Session 6 logged out. Waiting for processes to exit.
Jul 2 10:55:27.685881 systemd-logind[1183]: Removed session 6.
Jul 2 10:55:27.824828 systemd[1]: Started sshd@6-10.230.70.110:22-147.75.109.163:34688.service.
Jul 2 10:55:28.545468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 10:55:28.545898 systemd[1]: Stopped kubelet.service.
Jul 2 10:55:28.548599 systemd[1]: Starting kubelet.service...
Jul 2 10:55:28.658010 systemd[1]: Started kubelet.service.
Jul 2 10:55:28.695032 sshd[1332]: Accepted publickey for core from 147.75.109.163 port 34688 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:55:28.697673 sshd[1332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:55:28.707711 systemd[1]: Started session-7.scope.
Jul 2 10:55:28.710090 systemd-logind[1183]: New session 7 of user core.
Jul 2 10:55:28.738019 kubelet[1338]: E0702 10:55:28.737941    1338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:55:28.740659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:55:28.740930 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:55:29.176333 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 10:55:29.176706 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 10:55:29.214195 systemd[1]: Starting docker.service...
Jul 2 10:55:29.274443 env[1355]: time="2024-07-02T10:55:29.274351171Z" level=info msg="Starting up"
Jul 2 10:55:29.278746 env[1355]: time="2024-07-02T10:55:29.278708361Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 10:55:29.278942 env[1355]: time="2024-07-02T10:55:29.278912361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 10:55:29.279070 env[1355]: time="2024-07-02T10:55:29.279038953Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 10:55:29.279217 env[1355]: time="2024-07-02T10:55:29.279189057Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 10:55:29.282934 env[1355]: time="2024-07-02T10:55:29.282894812Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 10:55:29.282934 env[1355]: time="2024-07-02T10:55:29.282924276Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 10:55:29.283091 env[1355]: time="2024-07-02T10:55:29.282943693Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 10:55:29.283091 env[1355]: time="2024-07-02T10:55:29.282957903Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 10:55:29.292801 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1666833630-merged.mount: Deactivated successfully.
Jul 2 10:55:29.332542 env[1355]: time="2024-07-02T10:55:29.332482245Z" level=info msg="Loading containers: start."
Jul 2 10:55:29.486907 kernel: Initializing XFRM netlink socket
Jul 2 10:55:29.542560 env[1355]: time="2024-07-02T10:55:29.542490486Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 10:55:29.639128 systemd-networkd[1021]: docker0: Link UP
Jul 2 10:55:29.653632 env[1355]: time="2024-07-02T10:55:29.653570504Z" level=info msg="Loading containers: done."
Jul 2 10:55:29.674383 env[1355]: time="2024-07-02T10:55:29.674333288Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 10:55:29.674953 env[1355]: time="2024-07-02T10:55:29.674923949Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 10:55:29.675239 env[1355]: time="2024-07-02T10:55:29.675212109Z" level=info msg="Daemon has completed initialization"
Jul 2 10:55:29.693692 systemd[1]: Started docker.service.
Jul 2 10:55:29.703869 env[1355]: time="2024-07-02T10:55:29.703764030Z" level=info msg="API listen on /run/docker.sock"
Jul 2 10:55:31.160977 env[1191]: time="2024-07-02T10:55:31.160838323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\""
Jul 2 10:55:31.973499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578211137.mount: Deactivated successfully.
Jul 2 10:55:35.325335 env[1191]: time="2024-07-02T10:55:35.325156845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:35.329072 env[1191]: time="2024-07-02T10:55:35.329037932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:35.331341 env[1191]: time="2024-07-02T10:55:35.331246071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:35.333162 env[1191]: time="2024-07-02T10:55:35.333121558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\""
Jul 2 10:55:35.335334 env[1191]: time="2024-07-02T10:55:35.335300138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:35.347804 env[1191]: time="2024-07-02T10:55:35.347757995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 10:55:37.385512 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 10:55:38.294250 env[1191]: time="2024-07-02T10:55:38.294079847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:38.296390 env[1191]: time="2024-07-02T10:55:38.296341422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:38.298868 env[1191]: time="2024-07-02T10:55:38.298810252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:38.301231 env[1191]: time="2024-07-02T10:55:38.301184396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:38.302629 env[1191]: time="2024-07-02T10:55:38.302546276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\""
Jul 2 10:55:38.320327 env[1191]: time="2024-07-02T10:55:38.320285213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 10:55:38.932903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 10:55:38.933380 systemd[1]: Stopped kubelet.service.
Jul 2 10:55:38.941535 systemd[1]: Starting kubelet.service...
Jul 2 10:55:39.098910 systemd[1]: Started kubelet.service.
Jul 2 10:55:39.180358 kubelet[1505]: E0702 10:55:39.180272 1505 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:55:39.182744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:55:39.183019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 10:55:40.076746 env[1191]: time="2024-07-02T10:55:40.076608703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:40.079140 env[1191]: time="2024-07-02T10:55:40.079100132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:40.096710 env[1191]: time="2024-07-02T10:55:40.096673372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:40.098562 env[1191]: time="2024-07-02T10:55:40.098519998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:40.099786 env[1191]: time="2024-07-02T10:55:40.099713013Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 10:55:40.112768 env[1191]: time="2024-07-02T10:55:40.112717790Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 10:55:42.090083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556283664.mount: Deactivated successfully. Jul 2 10:55:42.950564 env[1191]: time="2024-07-02T10:55:42.950494628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:42.952687 env[1191]: time="2024-07-02T10:55:42.952637774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:42.954403 env[1191]: time="2024-07-02T10:55:42.954364332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:42.956091 env[1191]: time="2024-07-02T10:55:42.956053681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:42.956860 env[1191]: time="2024-07-02T10:55:42.956807803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 10:55:42.975878 env[1191]: time="2024-07-02T10:55:42.975763543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 10:55:43.611084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209691226.mount: Deactivated successfully. 
Jul 2 10:55:44.994143 env[1191]: time="2024-07-02T10:55:44.993981830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:44.996724 env[1191]: time="2024-07-02T10:55:44.996685087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:44.999113 env[1191]: time="2024-07-02T10:55:44.999076921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.001654 env[1191]: time="2024-07-02T10:55:45.001613977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.003112 env[1191]: time="2024-07-02T10:55:45.003042194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 10:55:45.016743 env[1191]: time="2024-07-02T10:55:45.016679229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 10:55:45.630330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552287949.mount: Deactivated successfully. 
Jul 2 10:55:45.637246 env[1191]: time="2024-07-02T10:55:45.637173784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.639585 env[1191]: time="2024-07-02T10:55:45.639551101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.655680 env[1191]: time="2024-07-02T10:55:45.655623650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.658254 env[1191]: time="2024-07-02T10:55:45.658215974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:45.659228 env[1191]: time="2024-07-02T10:55:45.659165419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 10:55:45.675376 env[1191]: time="2024-07-02T10:55:45.675326478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 10:55:46.377539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661367626.mount: Deactivated successfully. Jul 2 10:55:49.356829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 10:55:49.357325 systemd[1]: Stopped kubelet.service. Jul 2 10:55:49.361067 systemd[1]: Starting kubelet.service... 
Jul 2 10:55:49.673658 env[1191]: time="2024-07-02T10:55:49.673404519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:49.679094 env[1191]: time="2024-07-02T10:55:49.679043685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:49.682095 env[1191]: time="2024-07-02T10:55:49.682063200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:49.685870 env[1191]: time="2024-07-02T10:55:49.685825855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:55:49.688576 env[1191]: time="2024-07-02T10:55:49.687325207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 10:55:50.393241 systemd[1]: Started kubelet.service. Jul 2 10:55:50.491528 kubelet[1543]: E0702 10:55:50.491447 1543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:55:50.493481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:55:50.493713 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 10:55:51.246785 update_engine[1184]: I0702 10:55:51.246104 1184 update_attempter.cc:509] Updating boot flags... Jul 2 10:55:54.338012 systemd[1]: Stopped kubelet.service. Jul 2 10:55:54.344233 systemd[1]: Starting kubelet.service... Jul 2 10:55:54.374550 systemd[1]: Reloading. Jul 2 10:55:54.503440 /usr/lib/systemd/system-generators/torcx-generator[1642]: time="2024-07-02T10:55:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:55:54.504272 /usr/lib/systemd/system-generators/torcx-generator[1642]: time="2024-07-02T10:55:54Z" level=info msg="torcx already run" Jul 2 10:55:54.647500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:55:54.647855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:55:54.674725 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:55:54.806103 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 10:55:54.806547 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 10:55:54.807051 systemd[1]: Stopped kubelet.service. Jul 2 10:55:54.809741 systemd[1]: Starting kubelet.service... Jul 2 10:55:54.923214 systemd[1]: Started kubelet.service. Jul 2 10:55:55.054647 kubelet[1694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:55:55.054647 kubelet[1694]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 10:55:55.054647 kubelet[1694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:55:55.057203 kubelet[1694]: I0702 10:55:55.054744 1694 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 10:55:55.705637 kubelet[1694]: I0702 10:55:55.705537 1694 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 10:55:55.705637 kubelet[1694]: I0702 10:55:55.705605 1694 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 10:55:55.706019 kubelet[1694]: I0702 10:55:55.706005 1694 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 10:55:55.737164 kubelet[1694]: E0702 10:55:55.737123 1694 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.738418 kubelet[1694]: I0702 10:55:55.738351 1694 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 10:55:55.757286 kubelet[1694]: I0702 10:55:55.757254 1694 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 10:55:55.758070 kubelet[1694]: I0702 10:55:55.758047 1694 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 10:55:55.758474 kubelet[1694]: I0702 10:55:55.758445 1694 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 10:55:55.758794 kubelet[1694]: I0702 10:55:55.758769 1694 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 10:55:55.758934 kubelet[1694]: I0702 10:55:55.758912 1694 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 10:55:55.759236 kubelet[1694]: I0702 
10:55:55.759213 1694 state_mem.go:36] "Initialized new in-memory state store" Jul 2 10:55:55.759550 kubelet[1694]: I0702 10:55:55.759528 1694 kubelet.go:396] "Attempting to sync node with API server" Jul 2 10:55:55.759739 kubelet[1694]: I0702 10:55:55.759715 1694 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 10:55:55.759941 kubelet[1694]: I0702 10:55:55.759917 1694 kubelet.go:312] "Adding apiserver pod source" Jul 2 10:55:55.760106 kubelet[1694]: I0702 10:55:55.760083 1694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 10:55:55.761091 kubelet[1694]: W0702 10:55:55.760991 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f8jck.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.761191 kubelet[1694]: E0702 10:55:55.761093 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f8jck.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.762100 kubelet[1694]: W0702 10:55:55.761994 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.762258 kubelet[1694]: E0702 10:55:55.762235 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.762523 kubelet[1694]: I0702 10:55:55.762498 1694 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 10:55:55.770636 kubelet[1694]: I0702 10:55:55.770605 1694 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 10:55:55.773513 kubelet[1694]: W0702 10:55:55.773485 1694 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 10:55:55.775205 kubelet[1694]: I0702 10:55:55.775173 1694 server.go:1256] "Started kubelet" Jul 2 10:55:55.784489 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 10:55:55.784920 kubelet[1694]: I0702 10:55:55.784893 1694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 10:55:55.791238 kubelet[1694]: E0702 10:55:55.791182 1694 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.70.110:6443/api/v1/namespaces/default/events\": dial tcp 10.230.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-f8jck.gb1.brightbox.com.17de60161bdd0d19 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-f8jck.gb1.brightbox.com,UID:srv-f8jck.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-f8jck.gb1.brightbox.com,},FirstTimestamp:2024-07-02 10:55:55.775126809 +0000 UTC m=+0.845650880,LastTimestamp:2024-07-02 10:55:55.775126809 +0000 UTC m=+0.845650880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-f8jck.gb1.brightbox.com,}" Jul 2 10:55:55.793435 kubelet[1694]: E0702 10:55:55.793405 1694 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 10:55:55.793698 kubelet[1694]: I0702 10:55:55.793670 1694 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 10:55:55.795051 kubelet[1694]: I0702 10:55:55.795023 1694 server.go:461] "Adding debug handlers to kubelet server" Jul 2 10:55:55.796704 kubelet[1694]: I0702 10:55:55.796675 1694 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 10:55:55.799233 kubelet[1694]: I0702 10:55:55.799205 1694 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 10:55:55.799499 kubelet[1694]: I0702 10:55:55.799475 1694 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 10:55:55.800063 kubelet[1694]: I0702 10:55:55.800032 1694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 10:55:55.800468 kubelet[1694]: W0702 10:55:55.800424 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.800624 kubelet[1694]: E0702 10:55:55.800600 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.800868 kubelet[1694]: E0702 10:55:55.800828 1694 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f8jck.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.70.110:6443: connect: connection refused" interval="200ms" Jul 2 10:55:55.801011 kubelet[1694]: I0702 10:55:55.800888 1694 server.go:233] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 10:55:55.807496 kubelet[1694]: I0702 10:55:55.807461 1694 factory.go:221] Registration of the containerd container factory successfully Jul 2 10:55:55.807721 kubelet[1694]: I0702 10:55:55.807699 1694 factory.go:221] Registration of the systemd container factory successfully Jul 2 10:55:55.808010 kubelet[1694]: I0702 10:55:55.807972 1694 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 10:55:55.823078 kubelet[1694]: I0702 10:55:55.823037 1694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 10:55:55.824521 kubelet[1694]: I0702 10:55:55.824497 1694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 10:55:55.824709 kubelet[1694]: I0702 10:55:55.824684 1694 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 10:55:55.824883 kubelet[1694]: I0702 10:55:55.824849 1694 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 10:55:55.825101 kubelet[1694]: E0702 10:55:55.825078 1694 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 10:55:55.832415 kubelet[1694]: W0702 10:55:55.832344 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.832652 kubelet[1694]: E0702 10:55:55.832629 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
10.230.70.110:6443: connect: connection refused Jul 2 10:55:55.844378 kubelet[1694]: I0702 10:55:55.844303 1694 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 10:55:55.844616 kubelet[1694]: I0702 10:55:55.844594 1694 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 10:55:55.844779 kubelet[1694]: I0702 10:55:55.844758 1694 state_mem.go:36] "Initialized new in-memory state store" Jul 2 10:55:55.846395 kubelet[1694]: I0702 10:55:55.846372 1694 policy_none.go:49] "None policy: Start" Jul 2 10:55:55.847494 kubelet[1694]: I0702 10:55:55.847458 1694 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 10:55:55.847618 kubelet[1694]: I0702 10:55:55.847510 1694 state_mem.go:35] "Initializing new in-memory state store" Jul 2 10:55:55.858324 systemd[1]: Created slice kubepods.slice. Jul 2 10:55:55.866307 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 10:55:55.870583 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 10:55:55.876310 kubelet[1694]: I0702 10:55:55.876279 1694 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 10:55:55.882180 kubelet[1694]: I0702 10:55:55.882153 1694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 10:55:55.882363 kubelet[1694]: E0702 10:55:55.882337 1694 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-f8jck.gb1.brightbox.com\" not found" Jul 2 10:55:55.900581 kubelet[1694]: I0702 10:55:55.900528 1694 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:55.901020 kubelet[1694]: E0702 10:55:55.900998 1694 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.70.110:6443/api/v1/nodes\": dial tcp 10.230.70.110:6443: connect: connection refused" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:55.926384 kubelet[1694]: I0702 10:55:55.926329 
1694 topology_manager.go:215] "Topology Admit Handler" podUID="edd58fb4b6226eea1737eec0ed8bac21" podNamespace="kube-system" podName="kube-scheduler-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:55.929269 kubelet[1694]: I0702 10:55:55.929245 1694 topology_manager.go:215] "Topology Admit Handler" podUID="76746f02abc9d2683446438e1f8ddeae" podNamespace="kube-system" podName="kube-apiserver-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:55.931934 kubelet[1694]: I0702 10:55:55.931904 1694 topology_manager.go:215] "Topology Admit Handler" podUID="779908cd839ac405ad30c0fc2e3dd5fd" podNamespace="kube-system" podName="kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:55.939134 systemd[1]: Created slice kubepods-burstable-podedd58fb4b6226eea1737eec0ed8bac21.slice. Jul 2 10:55:55.960605 systemd[1]: Created slice kubepods-burstable-pod76746f02abc9d2683446438e1f8ddeae.slice. Jul 2 10:55:55.971523 systemd[1]: Created slice kubepods-burstable-pod779908cd839ac405ad30c0fc2e3dd5fd.slice. Jul 2 10:55:56.002421 kubelet[1694]: E0702 10:55:56.002374 1694 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f8jck.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.70.110:6443: connect: connection refused" interval="400ms" Jul 2 10:55:56.101810 kubelet[1694]: I0702 10:55:56.101626 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edd58fb4b6226eea1737eec0ed8bac21-kubeconfig\") pod \"kube-scheduler-srv-f8jck.gb1.brightbox.com\" (UID: \"edd58fb4b6226eea1737eec0ed8bac21\") " pod="kube-system/kube-scheduler-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102636 kubelet[1694]: I0702 10:55:56.102606 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-ca-certs\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102733 kubelet[1694]: I0702 10:55:56.102701 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-k8s-certs\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102811 kubelet[1694]: I0702 10:55:56.102747 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-flexvolume-dir\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102811 kubelet[1694]: I0702 10:55:56.102792 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102945 kubelet[1694]: I0702 10:55:56.102825 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-ca-certs\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102945 
kubelet[1694]: I0702 10:55:56.102887 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-k8s-certs\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.102945 kubelet[1694]: I0702 10:55:56.102921 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-kubeconfig\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.103121 kubelet[1694]: I0702 10:55:56.102953 1694 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.104585 kubelet[1694]: I0702 10:55:56.104519 1694 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.105203 kubelet[1694]: E0702 10:55:56.105178 1694 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.70.110:6443/api/v1/nodes\": dial tcp 10.230.70.110:6443: connect: connection refused" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.261366 env[1191]: time="2024-07-02T10:55:56.260609982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f8jck.gb1.brightbox.com,Uid:edd58fb4b6226eea1737eec0ed8bac21,Namespace:kube-system,Attempt:0,}" Jul 
2 10:55:56.270085 env[1191]: time="2024-07-02T10:55:56.270033237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f8jck.gb1.brightbox.com,Uid:76746f02abc9d2683446438e1f8ddeae,Namespace:kube-system,Attempt:0,}" Jul 2 10:55:56.277327 env[1191]: time="2024-07-02T10:55:56.277277203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f8jck.gb1.brightbox.com,Uid:779908cd839ac405ad30c0fc2e3dd5fd,Namespace:kube-system,Attempt:0,}" Jul 2 10:55:56.404593 kubelet[1694]: E0702 10:55:56.404524 1694 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f8jck.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.70.110:6443: connect: connection refused" interval="800ms" Jul 2 10:55:56.509898 kubelet[1694]: I0702 10:55:56.509765 1694 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.510936 kubelet[1694]: E0702 10:55:56.510903 1694 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.70.110:6443/api/v1/nodes\": dial tcp 10.230.70.110:6443: connect: connection refused" node="srv-f8jck.gb1.brightbox.com" Jul 2 10:55:56.891318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594465282.mount: Deactivated successfully. 
Jul 2 10:55:56.899783 env[1191]: time="2024-07-02T10:55:56.899724901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.902191 env[1191]: time="2024-07-02T10:55:56.902155252Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.903953 env[1191]: time="2024-07-02T10:55:56.903919068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.905049 env[1191]: time="2024-07-02T10:55:56.905016829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.907051 env[1191]: time="2024-07-02T10:55:56.907015147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.909662 env[1191]: time="2024-07-02T10:55:56.909621789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.923055 env[1191]: time="2024-07-02T10:55:56.921819306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.942951 env[1191]: time="2024-07-02T10:55:56.942877062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.944393 env[1191]: time="2024-07-02T10:55:56.944315506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.945811 env[1191]: time="2024-07-02T10:55:56.945766144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.949016 env[1191]: time="2024-07-02T10:55:56.948970655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.954211 env[1191]: time="2024-07-02T10:55:56.954159900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:55:56.960507 env[1191]: time="2024-07-02T10:55:56.960034192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:55:56.960507 env[1191]: time="2024-07-02T10:55:56.960113921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:55:56.960507 env[1191]: time="2024-07-02T10:55:56.960131559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:55:56.961072 env[1191]: time="2024-07-02T10:55:56.961014171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/778c481ecf9048eddca9dd6a163336e2c2ae717e0941a62877598427b70264be pid=1735 runtime=io.containerd.runc.v2
Jul 2 10:55:57.005973 env[1191]: time="2024-07-02T10:55:57.004924671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:55:57.005973 env[1191]: time="2024-07-02T10:55:57.005098993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:55:57.005973 env[1191]: time="2024-07-02T10:55:57.005172374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:55:57.005973 env[1191]: time="2024-07-02T10:55:57.005453239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf53e6cd13f1d419651e23d2bf2f6b4d2f3726137f5f49c3921f8d3a5a36ed8c pid=1763 runtime=io.containerd.runc.v2
Jul 2 10:55:57.006469 kubelet[1694]: W0702 10:55:57.006123 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f8jck.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.006469 kubelet[1694]: E0702 10:55:57.006252 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f8jck.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.014258 systemd[1]: Started cri-containerd-778c481ecf9048eddca9dd6a163336e2c2ae717e0941a62877598427b70264be.scope.
Jul 2 10:55:57.038438 env[1191]: time="2024-07-02T10:55:57.038263123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:55:57.039160 env[1191]: time="2024-07-02T10:55:57.039105378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:55:57.039347 env[1191]: time="2024-07-02T10:55:57.039297646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:55:57.040123 env[1191]: time="2024-07-02T10:55:57.040035885Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5a8e00a97e9da64d3c96037ba9335dfe814445312a42fadaf08be18ba641de8 pid=1769 runtime=io.containerd.runc.v2
Jul 2 10:55:57.060082 systemd[1]: Started cri-containerd-cf53e6cd13f1d419651e23d2bf2f6b4d2f3726137f5f49c3921f8d3a5a36ed8c.scope.
Jul 2 10:55:57.104281 systemd[1]: Started cri-containerd-d5a8e00a97e9da64d3c96037ba9335dfe814445312a42fadaf08be18ba641de8.scope.
Jul 2 10:55:57.195695 env[1191]: time="2024-07-02T10:55:57.194872969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f8jck.gb1.brightbox.com,Uid:edd58fb4b6226eea1737eec0ed8bac21,Namespace:kube-system,Attempt:0,} returns sandbox id \"778c481ecf9048eddca9dd6a163336e2c2ae717e0941a62877598427b70264be\""
Jul 2 10:55:57.203255 env[1191]: time="2024-07-02T10:55:57.202619703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f8jck.gb1.brightbox.com,Uid:76746f02abc9d2683446438e1f8ddeae,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf53e6cd13f1d419651e23d2bf2f6b4d2f3726137f5f49c3921f8d3a5a36ed8c\""
Jul 2 10:55:57.205414 kubelet[1694]: E0702 10:55:57.205381 1694 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f8jck.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.70.110:6443: connect: connection refused" interval="1.6s"
Jul 2 10:55:57.207037 env[1191]: time="2024-07-02T10:55:57.206982900Z" level=info msg="CreateContainer within sandbox \"778c481ecf9048eddca9dd6a163336e2c2ae717e0941a62877598427b70264be\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 10:55:57.209635 env[1191]: time="2024-07-02T10:55:57.209583378Z" level=info msg="CreateContainer within sandbox \"cf53e6cd13f1d419651e23d2bf2f6b4d2f3726137f5f49c3921f8d3a5a36ed8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 10:55:57.232109 env[1191]: time="2024-07-02T10:55:57.232015306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f8jck.gb1.brightbox.com,Uid:779908cd839ac405ad30c0fc2e3dd5fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5a8e00a97e9da64d3c96037ba9335dfe814445312a42fadaf08be18ba641de8\""
Jul 2 10:55:57.236284 env[1191]: time="2024-07-02T10:55:57.236240858Z" level=info msg="CreateContainer within sandbox \"d5a8e00a97e9da64d3c96037ba9335dfe814445312a42fadaf08be18ba641de8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 10:55:57.238406 env[1191]: time="2024-07-02T10:55:57.238322703Z" level=info msg="CreateContainer within sandbox \"cf53e6cd13f1d419651e23d2bf2f6b4d2f3726137f5f49c3921f8d3a5a36ed8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb2c2ca44282f5bbb0222931efa1e76d403a8e888f788a6b1b80a5c1fc5a4534\""
Jul 2 10:55:57.239264 env[1191]: time="2024-07-02T10:55:57.239229229Z" level=info msg="StartContainer for \"fb2c2ca44282f5bbb0222931efa1e76d403a8e888f788a6b1b80a5c1fc5a4534\""
Jul 2 10:55:57.244020 env[1191]: time="2024-07-02T10:55:57.243966204Z" level=info msg="CreateContainer within sandbox \"778c481ecf9048eddca9dd6a163336e2c2ae717e0941a62877598427b70264be\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"079750a1c6c88b7e6740a246fe638b04e9f3f68cfec3fd5dc7fc573615ffe6d6\""
Jul 2 10:55:57.244625 env[1191]: time="2024-07-02T10:55:57.244547323Z" level=info msg="StartContainer for \"079750a1c6c88b7e6740a246fe638b04e9f3f68cfec3fd5dc7fc573615ffe6d6\""
Jul 2 10:55:57.255129 env[1191]: time="2024-07-02T10:55:57.255073457Z" level=info msg="CreateContainer within sandbox \"d5a8e00a97e9da64d3c96037ba9335dfe814445312a42fadaf08be18ba641de8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2dba358919de7fa5823ee434092f00a57f0c4d8667410e566a75f0123958c8fe\""
Jul 2 10:55:57.256349 env[1191]: time="2024-07-02T10:55:57.256303516Z" level=info msg="StartContainer for \"2dba358919de7fa5823ee434092f00a57f0c4d8667410e566a75f0123958c8fe\""
Jul 2 10:55:57.283513 systemd[1]: Started cri-containerd-079750a1c6c88b7e6740a246fe638b04e9f3f68cfec3fd5dc7fc573615ffe6d6.scope.
Jul 2 10:55:57.284909 systemd[1]: Started cri-containerd-fb2c2ca44282f5bbb0222931efa1e76d403a8e888f788a6b1b80a5c1fc5a4534.scope.
Jul 2 10:55:57.299121 systemd[1]: Started cri-containerd-2dba358919de7fa5823ee434092f00a57f0c4d8667410e566a75f0123958c8fe.scope.
Jul 2 10:55:57.306190 kubelet[1694]: W0702 10:55:57.306131 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.306190 kubelet[1694]: E0702 10:55:57.306184 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.312471 kubelet[1694]: W0702 10:55:57.312398 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.312471 kubelet[1694]: E0702 10:55:57.312476 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.314148 kubelet[1694]: I0702 10:55:57.314120 1694 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:55:57.314453 kubelet[1694]: E0702 10:55:57.314430 1694 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.70.110:6443/api/v1/nodes\": dial tcp 10.230.70.110:6443: connect: connection refused" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:55:57.328364 kubelet[1694]: W0702 10:55:57.328289 1694 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.328364 kubelet[1694]: E0702 10:55:57.328357 1694 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:57.397895 env[1191]: time="2024-07-02T10:55:57.397818493Z" level=info msg="StartContainer for \"2dba358919de7fa5823ee434092f00a57f0c4d8667410e566a75f0123958c8fe\" returns successfully"
Jul 2 10:55:57.416961 env[1191]: time="2024-07-02T10:55:57.416784482Z" level=info msg="StartContainer for \"fb2c2ca44282f5bbb0222931efa1e76d403a8e888f788a6b1b80a5c1fc5a4534\" returns successfully"
Jul 2 10:55:57.426016 env[1191]: time="2024-07-02T10:55:57.425970753Z" level=info msg="StartContainer for \"079750a1c6c88b7e6740a246fe638b04e9f3f68cfec3fd5dc7fc573615ffe6d6\" returns successfully"
Jul 2 10:55:57.902709 kubelet[1694]: E0702 10:55:57.902599 1694 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.70.110:6443: connect: connection refused
Jul 2 10:55:58.917529 kubelet[1694]: I0702 10:55:58.917407 1694 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:55:59.918073 kubelet[1694]: E0702 10:55:59.917992 1694 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-f8jck.gb1.brightbox.com\" not found" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:55:59.978612 kubelet[1694]: I0702 10:55:59.978565 1694 kubelet_node_status.go:76] "Successfully registered node" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:55:59.991067 kubelet[1694]: E0702 10:55:59.991022 1694 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-f8jck.gb1.brightbox.com\" not found"
Jul 2 10:56:00.092319 kubelet[1694]: E0702 10:56:00.092217 1694 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-f8jck.gb1.brightbox.com\" not found"
Jul 2 10:56:00.192709 kubelet[1694]: E0702 10:56:00.192649 1694 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-f8jck.gb1.brightbox.com\" not found"
Jul 2 10:56:00.763628 kubelet[1694]: I0702 10:56:00.763562 1694 apiserver.go:52] "Watching apiserver"
Jul 2 10:56:00.799835 kubelet[1694]: I0702 10:56:00.799777 1694 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 10:56:00.910303 kubelet[1694]: W0702 10:56:00.910255 1694 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:56:02.668085 systemd[1]: Reloading.
Jul 2 10:56:02.799373 /usr/lib/systemd/system-generators/torcx-generator[1985]: time="2024-07-02T10:56:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 10:56:02.799430 /usr/lib/systemd/system-generators/torcx-generator[1985]: time="2024-07-02T10:56:02Z" level=info msg="torcx already run"
Jul 2 10:56:02.914478 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 10:56:02.914957 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 10:56:02.943983 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 10:56:03.124858 systemd[1]: Stopping kubelet.service...
Jul 2 10:56:03.136564 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 10:56:03.136946 systemd[1]: Stopped kubelet.service.
Jul 2 10:56:03.137039 systemd[1]: kubelet.service: Consumed 1.256s CPU time.
Jul 2 10:56:03.139969 systemd[1]: Starting kubelet.service...
Jul 2 10:56:04.290498 systemd[1]: Started kubelet.service.
Jul 2 10:56:04.411577 kubelet[2036]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:56:04.411577 kubelet[2036]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 10:56:04.411577 kubelet[2036]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:56:04.411577 kubelet[2036]: I0702 10:56:04.409669 2036 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 10:56:04.418527 sudo[2048]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 2 10:56:04.420418 kubelet[2036]: I0702 10:56:04.418532 2036 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 10:56:04.420418 kubelet[2036]: I0702 10:56:04.418560 2036 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 10:56:04.420418 kubelet[2036]: I0702 10:56:04.418808 2036 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 10:56:04.418970 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Jul 2 10:56:04.421351 kubelet[2036]: I0702 10:56:04.421274 2036 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 2 10:56:04.426198 kubelet[2036]: I0702 10:56:04.426089 2036 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 10:56:04.443293 kubelet[2036]: I0702 10:56:04.442393 2036 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 10:56:04.443293 kubelet[2036]: I0702 10:56:04.442920 2036 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 10:56:04.443293 kubelet[2036]: I0702 10:56:04.443160 2036 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 10:56:04.443293 kubelet[2036]: I0702 10:56:04.443213 2036 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 10:56:04.443293 kubelet[2036]: I0702 10:56:04.443230 2036 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 10:56:04.443934 kubelet[2036]: I0702 10:56:04.443313 2036 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:56:04.443934 kubelet[2036]: I0702 10:56:04.443504 2036 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 10:56:04.448594 kubelet[2036]: I0702 10:56:04.444612 2036 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 10:56:04.448594 kubelet[2036]: I0702 10:56:04.444681 2036 kubelet.go:312] "Adding apiserver pod source"
Jul 2 10:56:04.449901 kubelet[2036]: I0702 10:56:04.444702 2036 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 10:56:04.461752 kubelet[2036]: I0702 10:56:04.457440 2036 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 10:56:04.461752 kubelet[2036]: I0702 10:56:04.457733 2036 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 10:56:04.461752 kubelet[2036]: I0702 10:56:04.459708 2036 server.go:1256] "Started kubelet"
Jul 2 10:56:04.467895 kubelet[2036]: I0702 10:56:04.463924 2036 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 10:56:04.475876 kubelet[2036]: I0702 10:56:04.475832 2036 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 10:56:04.477949 kubelet[2036]: I0702 10:56:04.477924 2036 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 10:56:04.482387 kubelet[2036]: I0702 10:56:04.482359 2036 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 10:56:04.484210 kubelet[2036]: I0702 10:56:04.484171 2036 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 10:56:04.491923 kubelet[2036]: I0702 10:56:04.491280 2036 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 10:56:04.493409 kubelet[2036]: I0702 10:56:04.493382 2036 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 10:56:04.494228 kubelet[2036]: I0702 10:56:04.493727 2036 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 10:56:04.511213 kubelet[2036]: I0702 10:56:04.511176 2036 factory.go:221] Registration of the systemd container factory successfully
Jul 2 10:56:04.511553 kubelet[2036]: I0702 10:56:04.511522 2036 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 10:56:04.513868 kubelet[2036]: I0702 10:56:04.512648 2036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 10:56:04.516866 kubelet[2036]: I0702 10:56:04.514450 2036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 10:56:04.516866 kubelet[2036]: I0702 10:56:04.514499 2036 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 10:56:04.516866 kubelet[2036]: I0702 10:56:04.514527 2036 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 10:56:04.516866 kubelet[2036]: E0702 10:56:04.514635 2036 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 10:56:04.527197 kubelet[2036]: E0702 10:56:04.527163 2036 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 10:56:04.528826 kubelet[2036]: I0702 10:56:04.528648 2036 factory.go:221] Registration of the containerd container factory successfully
Jul 2 10:56:04.607828 kubelet[2036]: I0702 10:56:04.607581 2036 kubelet_node_status.go:73] "Attempting to register node" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.615212 kubelet[2036]: E0702 10:56:04.615174 2036 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 10:56:04.642713 kubelet[2036]: I0702 10:56:04.642672 2036 kubelet_node_status.go:112] "Node was previously registered" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.645647 kubelet[2036]: I0702 10:56:04.645623 2036 kubelet_node_status.go:76] "Successfully registered node" node="srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.661285 kubelet[2036]: I0702 10:56:04.661246 2036 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 10:56:04.661285 kubelet[2036]: I0702 10:56:04.661276 2036 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 10:56:04.661480 kubelet[2036]: I0702 10:56:04.661308 2036 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:56:04.661569 kubelet[2036]: I0702 10:56:04.661547 2036 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 2 10:56:04.661660 kubelet[2036]: I0702 10:56:04.661590 2036 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 2 10:56:04.661660 kubelet[2036]: I0702 10:56:04.661613 2036 policy_none.go:49] "None policy: Start"
Jul 2 10:56:04.667420 kubelet[2036]: I0702 10:56:04.667386 2036 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 10:56:04.667544 kubelet[2036]: I0702 10:56:04.667439 2036 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 10:56:04.667661 kubelet[2036]: I0702 10:56:04.667635 2036 state_mem.go:75] "Updated machine memory state"
Jul 2 10:56:04.688765 kubelet[2036]: I0702 10:56:04.688728 2036 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 10:56:04.690982 kubelet[2036]: I0702 10:56:04.690958 2036 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 10:56:04.815664 kubelet[2036]: I0702 10:56:04.815610 2036 topology_manager.go:215] "Topology Admit Handler" podUID="779908cd839ac405ad30c0fc2e3dd5fd" podNamespace="kube-system" podName="kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.816090 kubelet[2036]: I0702 10:56:04.816064 2036 topology_manager.go:215] "Topology Admit Handler" podUID="edd58fb4b6226eea1737eec0ed8bac21" podNamespace="kube-system" podName="kube-scheduler-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.816284 kubelet[2036]: I0702 10:56:04.816258 2036 topology_manager.go:215] "Topology Admit Handler" podUID="76746f02abc9d2683446438e1f8ddeae" podNamespace="kube-system" podName="kube-apiserver-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.826124 kubelet[2036]: W0702 10:56:04.823550 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:56:04.826640 kubelet[2036]: W0702 10:56:04.826336 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:56:04.826640 kubelet[2036]: E0702 10:56:04.826435 2036 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.826804 kubelet[2036]: W0702 10:56:04.826699 2036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 2 10:56:04.895657 kubelet[2036]: I0702 10:56:04.895479 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-ca-certs\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.895657 kubelet[2036]: I0702 10:56:04.895545 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-k8s-certs\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.895657 kubelet[2036]: I0702 10:56:04.895600 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-kubeconfig\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.895657 kubelet[2036]: I0702 10:56:04.895658 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/edd58fb4b6226eea1737eec0ed8bac21-kubeconfig\") pod \"kube-scheduler-srv-f8jck.gb1.brightbox.com\" (UID: \"edd58fb4b6226eea1737eec0ed8bac21\") " pod="kube-system/kube-scheduler-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.896086 kubelet[2036]: I0702 10:56:04.895696 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76746f02abc9d2683446438e1f8ddeae-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f8jck.gb1.brightbox.com\" (UID: \"76746f02abc9d2683446438e1f8ddeae\") " pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.896086 kubelet[2036]: I0702 10:56:04.895726 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-ca-certs\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.896086 kubelet[2036]: I0702 10:56:04.895755 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-flexvolume-dir\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.896086 kubelet[2036]: I0702 10:56:04.895785 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-k8s-certs\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:04.896086 kubelet[2036]: I0702 10:56:04.895831 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/779908cd839ac405ad30c0fc2e3dd5fd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f8jck.gb1.brightbox.com\" (UID: \"779908cd839ac405ad30c0fc2e3dd5fd\") " pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com"
Jul 2 10:56:05.262139 sudo[2048]: pam_unix(sudo:session): session closed for user root
Jul 2 10:56:05.450519 kubelet[2036]: I0702 10:56:05.450464 2036 apiserver.go:52] "Watching apiserver"
Jul 2 10:56:05.494104 kubelet[2036]: I0702 10:56:05.494045 2036 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jul 2 10:56:05.585333 kubelet[2036]: I0702 10:56:05.585189 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-f8jck.gb1.brightbox.com" podStartSLOduration=1.585115508 podStartE2EDuration="1.585115508s" podCreationTimestamp="2024-07-02 10:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:05.574816567 +0000 UTC m=+1.260226548" watchObservedRunningTime="2024-07-02 10:56:05.585115508 +0000 UTC m=+1.270525487"
Jul 2 10:56:05.597894 kubelet[2036]: I0702 10:56:05.597835 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-f8jck.gb1.brightbox.com" podStartSLOduration=5.597761537 podStartE2EDuration="5.597761537s" podCreationTimestamp="2024-07-02 10:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:05.58625951 +0000 UTC m=+1.271669511" watchObservedRunningTime="2024-07-02 10:56:05.597761537 +0000 UTC m=+1.283171538"
Jul 2 10:56:05.610700 kubelet[2036]: I0702 10:56:05.610658 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-f8jck.gb1.brightbox.com" podStartSLOduration=1.610607275 podStartE2EDuration="1.610607275s" podCreationTimestamp="2024-07-02 10:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:05.599047747 +0000 UTC m=+1.284457735" watchObservedRunningTime="2024-07-02 10:56:05.610607275 +0000 UTC m=+1.296017263"
Jul 2 10:56:07.424546 sudo[1345]: pam_unix(sudo:session): session closed for user root
Jul 2 10:56:07.568294 sshd[1332]: pam_unix(sshd:session): session closed for user core
Jul 2 10:56:07.574272 systemd[1]: sshd@6-10.230.70.110:22-147.75.109.163:34688.service: Deactivated successfully.
Jul 2 10:56:07.575556 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 10:56:07.575769 systemd[1]: session-7.scope: Consumed 6.931s CPU time.
Jul 2 10:56:07.576737 systemd-logind[1183]: Session 7 logged out. Waiting for processes to exit.
Jul 2 10:56:07.578331 systemd-logind[1183]: Removed session 7.
Jul 2 10:56:16.917086 kubelet[2036]: I0702 10:56:16.917045 2036 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 2 10:56:16.918929 env[1191]: time="2024-07-02T10:56:16.918828262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 2 10:56:16.919654 kubelet[2036]: I0702 10:56:16.919620 2036 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 2 10:56:17.915271 kubelet[2036]: I0702 10:56:17.915208 2036 topology_manager.go:215] "Topology Admit Handler" podUID="aed19e31-e57a-4e31-87af-36e08561b90f" podNamespace="kube-system" podName="kube-proxy-xgssr"
Jul 2 10:56:17.926649 systemd[1]: Created slice kubepods-besteffort-podaed19e31_e57a_4e31_87af_36e08561b90f.slice.
Jul 2 10:56:17.957151 kubelet[2036]: I0702 10:56:17.957074 2036 topology_manager.go:215] "Topology Admit Handler" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" podNamespace="kube-system" podName="cilium-h65vw"
Jul 2 10:56:17.965403 systemd[1]: Created slice kubepods-burstable-podf1359ec9_e740_4152_97d6_5e1b98b2bf55.slice.
Jul 2 10:56:18.090416 kubelet[2036]: I0702 10:56:18.090351 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-kernel\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.090416 kubelet[2036]: I0702 10:56:18.090430 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aed19e31-e57a-4e31-87af-36e08561b90f-kube-proxy\") pod \"kube-proxy-xgssr\" (UID: \"aed19e31-e57a-4e31-87af-36e08561b90f\") " pod="kube-system/kube-proxy-xgssr" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090466 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hostproc\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090497 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aed19e31-e57a-4e31-87af-36e08561b90f-xtables-lock\") pod \"kube-proxy-xgssr\" (UID: \"aed19e31-e57a-4e31-87af-36e08561b90f\") " pod="kube-system/kube-proxy-xgssr" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090526 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-lib-modules\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090554 2036 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29gfb\" (UniqueName: \"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-kube-api-access-29gfb\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090582 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cni-path\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.090813 kubelet[2036]: I0702 10:56:18.090609 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-etc-cni-netd\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091419 kubelet[2036]: I0702 10:56:18.090636 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-net\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091419 kubelet[2036]: I0702 10:56:18.090674 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-cgroup\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091419 kubelet[2036]: I0702 10:56:18.090706 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f1359ec9-e740-4152-97d6-5e1b98b2bf55-clustermesh-secrets\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091419 kubelet[2036]: I0702 10:56:18.090737 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-config-path\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091419 kubelet[2036]: I0702 10:56:18.090765 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aed19e31-e57a-4e31-87af-36e08561b90f-lib-modules\") pod \"kube-proxy-xgssr\" (UID: \"aed19e31-e57a-4e31-87af-36e08561b90f\") " pod="kube-system/kube-proxy-xgssr" Jul 2 10:56:18.091918 kubelet[2036]: I0702 10:56:18.090794 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-xtables-lock\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091918 kubelet[2036]: I0702 10:56:18.090824 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-bpf-maps\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091918 kubelet[2036]: I0702 10:56:18.090878 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-558kq\" (UniqueName: \"kubernetes.io/projected/aed19e31-e57a-4e31-87af-36e08561b90f-kube-api-access-558kq\") pod \"kube-proxy-xgssr\" (UID: 
\"aed19e31-e57a-4e31-87af-36e08561b90f\") " pod="kube-system/kube-proxy-xgssr" Jul 2 10:56:18.091918 kubelet[2036]: I0702 10:56:18.090910 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-run\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.091918 kubelet[2036]: I0702 10:56:18.090953 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hubble-tls\") pod \"cilium-h65vw\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " pod="kube-system/cilium-h65vw" Jul 2 10:56:18.122982 kubelet[2036]: I0702 10:56:18.122910 2036 topology_manager.go:215] "Topology Admit Handler" podUID="1ae1bfeb-74a8-4215-bd74-3f6923abe07c" podNamespace="kube-system" podName="cilium-operator-5cc964979-4q8vg" Jul 2 10:56:18.129891 systemd[1]: Created slice kubepods-besteffort-pod1ae1bfeb_74a8_4215_bd74_3f6923abe07c.slice. Jul 2 10:56:18.242550 env[1191]: time="2024-07-02T10:56:18.242455180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgssr,Uid:aed19e31-e57a-4e31-87af-36e08561b90f,Namespace:kube-system,Attempt:0,}" Jul 2 10:56:18.271824 env[1191]: time="2024-07-02T10:56:18.271762558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h65vw,Uid:f1359ec9-e740-4152-97d6-5e1b98b2bf55,Namespace:kube-system,Attempt:0,}" Jul 2 10:56:18.273982 env[1191]: time="2024-07-02T10:56:18.273796491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:56:18.274093 env[1191]: time="2024-07-02T10:56:18.274018457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:56:18.274165 env[1191]: time="2024-07-02T10:56:18.274098086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:56:18.274613 env[1191]: time="2024-07-02T10:56:18.274550871Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/961b40f2c2514dec219df94fa2fad168c845afc75bf6ee9d30566382570f2f56 pid=2122 runtime=io.containerd.runc.v2 Jul 2 10:56:18.294042 kubelet[2036]: I0702 10:56:18.293986 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-cilium-config-path\") pod \"cilium-operator-5cc964979-4q8vg\" (UID: \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\") " pod="kube-system/cilium-operator-5cc964979-4q8vg" Jul 2 10:56:18.294244 kubelet[2036]: I0702 10:56:18.294101 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxz6q\" (UniqueName: \"kubernetes.io/projected/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-kube-api-access-kxz6q\") pod \"cilium-operator-5cc964979-4q8vg\" (UID: \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\") " pod="kube-system/cilium-operator-5cc964979-4q8vg" Jul 2 10:56:18.307812 env[1191]: time="2024-07-02T10:56:18.307685615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:56:18.307812 env[1191]: time="2024-07-02T10:56:18.307758559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:56:18.307812 env[1191]: time="2024-07-02T10:56:18.307776020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:56:18.308373 env[1191]: time="2024-07-02T10:56:18.308307213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6 pid=2146 runtime=io.containerd.runc.v2 Jul 2 10:56:18.316357 systemd[1]: Started cri-containerd-961b40f2c2514dec219df94fa2fad168c845afc75bf6ee9d30566382570f2f56.scope. Jul 2 10:56:18.340927 systemd[1]: Started cri-containerd-22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6.scope. Jul 2 10:56:18.383032 env[1191]: time="2024-07-02T10:56:18.382971569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xgssr,Uid:aed19e31-e57a-4e31-87af-36e08561b90f,Namespace:kube-system,Attempt:0,} returns sandbox id \"961b40f2c2514dec219df94fa2fad168c845afc75bf6ee9d30566382570f2f56\"" Jul 2 10:56:18.391080 env[1191]: time="2024-07-02T10:56:18.390514648Z" level=info msg="CreateContainer within sandbox \"961b40f2c2514dec219df94fa2fad168c845afc75bf6ee9d30566382570f2f56\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 10:56:18.426242 env[1191]: time="2024-07-02T10:56:18.426184230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h65vw,Uid:f1359ec9-e740-4152-97d6-5e1b98b2bf55,Namespace:kube-system,Attempt:0,} returns sandbox id \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\"" Jul 2 10:56:18.430417 env[1191]: time="2024-07-02T10:56:18.430368574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 10:56:18.434672 env[1191]: time="2024-07-02T10:56:18.434585438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4q8vg,Uid:1ae1bfeb-74a8-4215-bd74-3f6923abe07c,Namespace:kube-system,Attempt:0,}" Jul 2 10:56:18.437038 env[1191]: time="2024-07-02T10:56:18.436997763Z" level=info 
msg="CreateContainer within sandbox \"961b40f2c2514dec219df94fa2fad168c845afc75bf6ee9d30566382570f2f56\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"214575187870c7bdb3041d6f58f00fb2747c86ae10e1b9ca1ce486ef17763455\"" Jul 2 10:56:18.439247 env[1191]: time="2024-07-02T10:56:18.438470280Z" level=info msg="StartContainer for \"214575187870c7bdb3041d6f58f00fb2747c86ae10e1b9ca1ce486ef17763455\"" Jul 2 10:56:18.464780 systemd[1]: Started cri-containerd-214575187870c7bdb3041d6f58f00fb2747c86ae10e1b9ca1ce486ef17763455.scope. Jul 2 10:56:18.470945 env[1191]: time="2024-07-02T10:56:18.470809800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:56:18.471155 env[1191]: time="2024-07-02T10:56:18.470970022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:56:18.471155 env[1191]: time="2024-07-02T10:56:18.471017973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:56:18.471452 env[1191]: time="2024-07-02T10:56:18.471400951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819 pid=2220 runtime=io.containerd.runc.v2 Jul 2 10:56:18.488831 systemd[1]: Started cri-containerd-dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819.scope. 
Jul 2 10:56:18.556836 env[1191]: time="2024-07-02T10:56:18.556764184Z" level=info msg="StartContainer for \"214575187870c7bdb3041d6f58f00fb2747c86ae10e1b9ca1ce486ef17763455\" returns successfully" Jul 2 10:56:18.588312 kubelet[2036]: I0702 10:56:18.588251 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xgssr" podStartSLOduration=1.5881673969999999 podStartE2EDuration="1.588167397s" podCreationTimestamp="2024-07-02 10:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:18.587881923 +0000 UTC m=+14.273291925" watchObservedRunningTime="2024-07-02 10:56:18.588167397 +0000 UTC m=+14.273577377" Jul 2 10:56:18.595738 env[1191]: time="2024-07-02T10:56:18.595683882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-4q8vg,Uid:1ae1bfeb-74a8-4215-bd74-3f6923abe07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\"" Jul 2 10:56:26.172154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183179120.mount: Deactivated successfully. 
Jul 2 10:56:30.632767 env[1191]: time="2024-07-02T10:56:30.632568179Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:30.636114 env[1191]: time="2024-07-02T10:56:30.636064658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:30.639494 env[1191]: time="2024-07-02T10:56:30.639419253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:30.639823 env[1191]: time="2024-07-02T10:56:30.639770906Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 10:56:30.643555 env[1191]: time="2024-07-02T10:56:30.642724703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 10:56:30.646795 env[1191]: time="2024-07-02T10:56:30.646229905Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:56:30.663238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388331992.mount: Deactivated successfully. Jul 2 10:56:30.672878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305369010.mount: Deactivated successfully. 
Jul 2 10:56:30.680885 env[1191]: time="2024-07-02T10:56:30.680811538Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\"" Jul 2 10:56:30.683579 env[1191]: time="2024-07-02T10:56:30.683534912Z" level=info msg="StartContainer for \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\"" Jul 2 10:56:30.725330 systemd[1]: Started cri-containerd-93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c.scope. Jul 2 10:56:30.782283 env[1191]: time="2024-07-02T10:56:30.782230649Z" level=info msg="StartContainer for \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\" returns successfully" Jul 2 10:56:30.793907 systemd[1]: cri-containerd-93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c.scope: Deactivated successfully. Jul 2 10:56:30.895557 env[1191]: time="2024-07-02T10:56:30.895367225Z" level=info msg="shim disconnected" id=93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c Jul 2 10:56:30.895557 env[1191]: time="2024-07-02T10:56:30.895460061Z" level=warning msg="cleaning up after shim disconnected" id=93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c namespace=k8s.io Jul 2 10:56:30.895557 env[1191]: time="2024-07-02T10:56:30.895480030Z" level=info msg="cleaning up dead shim" Jul 2 10:56:30.909173 env[1191]: time="2024-07-02T10:56:30.909098198Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:56:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2446 runtime=io.containerd.runc.v2\n" Jul 2 10:56:31.649483 env[1191]: time="2024-07-02T10:56:31.649426719Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 10:56:31.659148 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c-rootfs.mount: Deactivated successfully. Jul 2 10:56:31.672127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84715863.mount: Deactivated successfully. Jul 2 10:56:31.697799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675572525.mount: Deactivated successfully. Jul 2 10:56:31.704504 env[1191]: time="2024-07-02T10:56:31.704430528Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\"" Jul 2 10:56:31.706874 env[1191]: time="2024-07-02T10:56:31.706372706Z" level=info msg="StartContainer for \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\"" Jul 2 10:56:31.735256 systemd[1]: Started cri-containerd-563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc.scope. Jul 2 10:56:31.787130 env[1191]: time="2024-07-02T10:56:31.787027224Z" level=info msg="StartContainer for \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\" returns successfully" Jul 2 10:56:31.814972 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 10:56:31.815353 systemd[1]: Stopped systemd-sysctl.service. Jul 2 10:56:31.817144 systemd[1]: Stopping systemd-sysctl.service... Jul 2 10:56:31.820023 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:56:31.825326 systemd[1]: cri-containerd-563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc.scope: Deactivated successfully. Jul 2 10:56:31.841770 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 10:56:31.861771 env[1191]: time="2024-07-02T10:56:31.861696966Z" level=info msg="shim disconnected" id=563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc Jul 2 10:56:31.861771 env[1191]: time="2024-07-02T10:56:31.861769477Z" level=warning msg="cleaning up after shim disconnected" id=563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc namespace=k8s.io Jul 2 10:56:31.862101 env[1191]: time="2024-07-02T10:56:31.861786766Z" level=info msg="cleaning up dead shim" Jul 2 10:56:31.872703 env[1191]: time="2024-07-02T10:56:31.872658871Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:56:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2510 runtime=io.containerd.runc.v2\n" Jul 2 10:56:32.664826 env[1191]: time="2024-07-02T10:56:32.663103409Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 10:56:32.684722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052124177.mount: Deactivated successfully. Jul 2 10:56:32.695054 env[1191]: time="2024-07-02T10:56:32.694958195Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\"" Jul 2 10:56:32.697874 env[1191]: time="2024-07-02T10:56:32.696096399Z" level=info msg="StartContainer for \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\"" Jul 2 10:56:32.753675 systemd[1]: Started cri-containerd-2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb.scope. 
Jul 2 10:56:32.832308 env[1191]: time="2024-07-02T10:56:32.832248626Z" level=info msg="StartContainer for \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\" returns successfully" Jul 2 10:56:32.845058 systemd[1]: cri-containerd-2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb.scope: Deactivated successfully. Jul 2 10:56:32.935108 env[1191]: time="2024-07-02T10:56:32.935056333Z" level=info msg="shim disconnected" id=2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb Jul 2 10:56:32.935467 env[1191]: time="2024-07-02T10:56:32.935435552Z" level=warning msg="cleaning up after shim disconnected" id=2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb namespace=k8s.io Jul 2 10:56:32.935621 env[1191]: time="2024-07-02T10:56:32.935593238Z" level=info msg="cleaning up dead shim" Jul 2 10:56:32.958932 env[1191]: time="2024-07-02T10:56:32.958868350Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:56:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2570 runtime=io.containerd.runc.v2\n" Jul 2 10:56:33.474068 env[1191]: time="2024-07-02T10:56:33.474012942Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:33.475636 env[1191]: time="2024-07-02T10:56:33.475604135Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:33.477271 env[1191]: time="2024-07-02T10:56:33.477233131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:56:33.478238 
env[1191]: time="2024-07-02T10:56:33.478196046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 10:56:33.481171 env[1191]: time="2024-07-02T10:56:33.481132000Z" level=info msg="CreateContainer within sandbox \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 10:56:33.502234 env[1191]: time="2024-07-02T10:56:33.502176302Z" level=info msg="CreateContainer within sandbox \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\"" Jul 2 10:56:33.505074 env[1191]: time="2024-07-02T10:56:33.503152801Z" level=info msg="StartContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\"" Jul 2 10:56:33.527964 systemd[1]: Started cri-containerd-9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd.scope. Jul 2 10:56:33.577249 env[1191]: time="2024-07-02T10:56:33.577188665Z" level=info msg="StartContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" returns successfully" Jul 2 10:56:33.654889 env[1191]: time="2024-07-02T10:56:33.654688064Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 10:56:33.681070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597494242.mount: Deactivated successfully. 
Jul 2 10:56:33.686639 env[1191]: time="2024-07-02T10:56:33.686585063Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\"" Jul 2 10:56:33.687812 env[1191]: time="2024-07-02T10:56:33.687767591Z" level=info msg="StartContainer for \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\"" Jul 2 10:56:33.733192 systemd[1]: Started cri-containerd-20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456.scope. Jul 2 10:56:33.803371 systemd[1]: cri-containerd-20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456.scope: Deactivated successfully. Jul 2 10:56:33.805086 env[1191]: time="2024-07-02T10:56:33.805037751Z" level=info msg="StartContainer for \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\" returns successfully" Jul 2 10:56:33.923070 env[1191]: time="2024-07-02T10:56:33.923008856Z" level=info msg="shim disconnected" id=20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456 Jul 2 10:56:33.923070 env[1191]: time="2024-07-02T10:56:33.923063698Z" level=warning msg="cleaning up after shim disconnected" id=20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456 namespace=k8s.io Jul 2 10:56:33.923070 env[1191]: time="2024-07-02T10:56:33.923079982Z" level=info msg="cleaning up dead shim" Jul 2 10:56:33.943445 env[1191]: time="2024-07-02T10:56:33.943384213Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:56:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2658 runtime=io.containerd.runc.v2\n" Jul 2 10:56:34.661038 systemd[1]: run-containerd-runc-k8s.io-20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456-runc.pDGBTM.mount: Deactivated successfully. 
Jul 2 10:56:34.661183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456-rootfs.mount: Deactivated successfully. Jul 2 10:56:34.665628 env[1191]: time="2024-07-02T10:56:34.665565392Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 10:56:34.686318 env[1191]: time="2024-07-02T10:56:34.686261485Z" level=info msg="CreateContainer within sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\"" Jul 2 10:56:34.686975 env[1191]: time="2024-07-02T10:56:34.686943752Z" level=info msg="StartContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\"" Jul 2 10:56:34.742462 systemd[1]: Started cri-containerd-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8.scope. 
Jul 2 10:56:34.775826 kubelet[2036]: I0702 10:56:34.775772 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-4q8vg" podStartSLOduration=1.896008212 podStartE2EDuration="16.77562062s" podCreationTimestamp="2024-07-02 10:56:18 +0000 UTC" firstStartedPulling="2024-07-02 10:56:18.59897069 +0000 UTC m=+14.284380671" lastFinishedPulling="2024-07-02 10:56:33.478583098 +0000 UTC m=+29.163993079" observedRunningTime="2024-07-02 10:56:33.741080334 +0000 UTC m=+29.426490328" watchObservedRunningTime="2024-07-02 10:56:34.77562062 +0000 UTC m=+30.461030613" Jul 2 10:56:34.889886 env[1191]: time="2024-07-02T10:56:34.888143510Z" level=info msg="StartContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" returns successfully" Jul 2 10:56:35.151011 kubelet[2036]: I0702 10:56:35.150970 2036 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 10:56:35.187135 kubelet[2036]: I0702 10:56:35.187090 2036 topology_manager.go:215] "Topology Admit Handler" podUID="17073280-2f15-408f-b251-007506aae95e" podNamespace="kube-system" podName="coredns-76f75df574-m6lq4" Jul 2 10:56:35.197294 systemd[1]: Created slice kubepods-burstable-pod17073280_2f15_408f_b251_007506aae95e.slice. Jul 2 10:56:35.201305 kubelet[2036]: I0702 10:56:35.201263 2036 topology_manager.go:215] "Topology Admit Handler" podUID="9f5a93c9-8ae0-4630-a090-4a2832445862" podNamespace="kube-system" podName="coredns-76f75df574-zq4nt" Jul 2 10:56:35.207171 systemd[1]: Created slice kubepods-burstable-pod9f5a93c9_8ae0_4630_a090_4a2832445862.slice. 
Jul 2 10:56:35.327407 kubelet[2036]: I0702 10:56:35.327324 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f5a93c9-8ae0-4630-a090-4a2832445862-config-volume\") pod \"coredns-76f75df574-zq4nt\" (UID: \"9f5a93c9-8ae0-4630-a090-4a2832445862\") " pod="kube-system/coredns-76f75df574-zq4nt"
Jul 2 10:56:35.327881 kubelet[2036]: I0702 10:56:35.327832 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9k4\" (UniqueName: \"kubernetes.io/projected/17073280-2f15-408f-b251-007506aae95e-kube-api-access-ml9k4\") pod \"coredns-76f75df574-m6lq4\" (UID: \"17073280-2f15-408f-b251-007506aae95e\") " pod="kube-system/coredns-76f75df574-m6lq4"
Jul 2 10:56:35.328087 kubelet[2036]: I0702 10:56:35.328056 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj69f\" (UniqueName: \"kubernetes.io/projected/9f5a93c9-8ae0-4630-a090-4a2832445862-kube-api-access-dj69f\") pod \"coredns-76f75df574-zq4nt\" (UID: \"9f5a93c9-8ae0-4630-a090-4a2832445862\") " pod="kube-system/coredns-76f75df574-zq4nt"
Jul 2 10:56:35.328281 kubelet[2036]: I0702 10:56:35.328249 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17073280-2f15-408f-b251-007506aae95e-config-volume\") pod \"coredns-76f75df574-m6lq4\" (UID: \"17073280-2f15-408f-b251-007506aae95e\") " pod="kube-system/coredns-76f75df574-m6lq4"
Jul 2 10:56:35.502319 env[1191]: time="2024-07-02T10:56:35.501618701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6lq4,Uid:17073280-2f15-408f-b251-007506aae95e,Namespace:kube-system,Attempt:0,}"
Jul 2 10:56:35.514115 env[1191]: time="2024-07-02T10:56:35.514056183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zq4nt,Uid:9f5a93c9-8ae0-4630-a090-4a2832445862,Namespace:kube-system,Attempt:0,}"
Jul 2 10:56:35.671363 systemd[1]: run-containerd-runc-k8s.io-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8-runc.jYudbS.mount: Deactivated successfully.
Jul 2 10:56:35.694422 kubelet[2036]: I0702 10:56:35.694364 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h65vw" podStartSLOduration=6.480709863 podStartE2EDuration="18.694304832s" podCreationTimestamp="2024-07-02 10:56:17 +0000 UTC" firstStartedPulling="2024-07-02 10:56:18.427747135 +0000 UTC m=+14.113157116" lastFinishedPulling="2024-07-02 10:56:30.641342104 +0000 UTC m=+26.326752085" observedRunningTime="2024-07-02 10:56:35.691564548 +0000 UTC m=+31.376974550" watchObservedRunningTime="2024-07-02 10:56:35.694304832 +0000 UTC m=+31.379714813"
Jul 2 10:56:37.733176 systemd-networkd[1021]: cilium_host: Link UP
Jul 2 10:56:37.745329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 10:56:37.745506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 10:56:37.742291 systemd-networkd[1021]: cilium_net: Link UP
Jul 2 10:56:37.743594 systemd-networkd[1021]: cilium_net: Gained carrier
Jul 2 10:56:37.745499 systemd-networkd[1021]: cilium_host: Gained carrier
Jul 2 10:56:37.910865 systemd-networkd[1021]: cilium_vxlan: Link UP
Jul 2 10:56:37.910885 systemd-networkd[1021]: cilium_vxlan: Gained carrier
Jul 2 10:56:38.454066 kernel: NET: Registered PF_ALG protocol family
Jul 2 10:56:38.655778 systemd-networkd[1021]: cilium_net: Gained IPv6LL
Jul 2 10:56:38.656232 systemd-networkd[1021]: cilium_host: Gained IPv6LL
Jul 2 10:56:39.087297 systemd-networkd[1021]: cilium_vxlan: Gained IPv6LL
Jul 2 10:56:39.553389 systemd-networkd[1021]: lxc_health: Link UP
Jul 2 10:56:39.586143 systemd-networkd[1021]: lxc_health: Gained carrier
Jul 2 10:56:39.586888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 10:56:40.072293 systemd-networkd[1021]: lxc40a1ec5d3c98: Link UP
Jul 2 10:56:40.077965 kernel: eth0: renamed from tmp6fe8c
Jul 2 10:56:40.082619 systemd-networkd[1021]: lxcb428d1f197da: Link UP
Jul 2 10:56:40.093026 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc40a1ec5d3c98: link becomes ready
Jul 2 10:56:40.093484 kernel: eth0: renamed from tmpb6565
Jul 2 10:56:40.091956 systemd-networkd[1021]: lxc40a1ec5d3c98: Gained carrier
Jul 2 10:56:40.105655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb428d1f197da: link becomes ready
Jul 2 10:56:40.104368 systemd-networkd[1021]: lxcb428d1f197da: Gained carrier
Jul 2 10:56:41.391300 systemd-networkd[1021]: lxc40a1ec5d3c98: Gained IPv6LL
Jul 2 10:56:41.584133 systemd-networkd[1021]: lxc_health: Gained IPv6LL
Jul 2 10:56:41.775589 systemd-networkd[1021]: lxcb428d1f197da: Gained IPv6LL
Jul 2 10:56:45.676048 env[1191]: time="2024-07-02T10:56:45.668899954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:56:45.676048 env[1191]: time="2024-07-02T10:56:45.669064880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:56:45.676048 env[1191]: time="2024-07-02T10:56:45.669112588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:56:45.676048 env[1191]: time="2024-07-02T10:56:45.669362117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156 pid=3208 runtime=io.containerd.runc.v2
Jul 2 10:56:45.717635 systemd[1]: run-containerd-runc-k8s.io-b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156-runc.qoe462.mount: Deactivated successfully.
Jul 2 10:56:45.732778 systemd[1]: Started cri-containerd-b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156.scope.
Jul 2 10:56:45.735757 env[1191]: time="2024-07-02T10:56:45.735573234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:56:45.735757 env[1191]: time="2024-07-02T10:56:45.735649960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:56:45.735757 env[1191]: time="2024-07-02T10:56:45.735680862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:56:45.737700 env[1191]: time="2024-07-02T10:56:45.736308368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fe8c550d082c967cc08b87b930b5e32afd05dc96948b4d37f64530414ad01d7 pid=3217 runtime=io.containerd.runc.v2
Jul 2 10:56:45.784721 systemd[1]: Started cri-containerd-6fe8c550d082c967cc08b87b930b5e32afd05dc96948b4d37f64530414ad01d7.scope.
Jul 2 10:56:45.897090 env[1191]: time="2024-07-02T10:56:45.897010910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m6lq4,Uid:17073280-2f15-408f-b251-007506aae95e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156\""
Jul 2 10:56:45.912805 env[1191]: time="2024-07-02T10:56:45.912720637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zq4nt,Uid:9f5a93c9-8ae0-4630-a090-4a2832445862,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fe8c550d082c967cc08b87b930b5e32afd05dc96948b4d37f64530414ad01d7\""
Jul 2 10:56:45.913496 env[1191]: time="2024-07-02T10:56:45.913460218Z" level=info msg="CreateContainer within sandbox \"b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 10:56:45.919580 env[1191]: time="2024-07-02T10:56:45.919542797Z" level=info msg="CreateContainer within sandbox \"6fe8c550d082c967cc08b87b930b5e32afd05dc96948b4d37f64530414ad01d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 10:56:45.947938 env[1191]: time="2024-07-02T10:56:45.947877648Z" level=info msg="CreateContainer within sandbox \"b65657ce9d86f33d523fafde73b77ef71aecaaf60722131dcf6c9fd8efb0b156\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee158a7ba0ce4211e757a7c9b16c19276c758839e6b783547110c43a1f2ca216\""
Jul 2 10:56:45.949482 env[1191]: time="2024-07-02T10:56:45.949450069Z" level=info msg="StartContainer for \"ee158a7ba0ce4211e757a7c9b16c19276c758839e6b783547110c43a1f2ca216\""
Jul 2 10:56:45.950173 env[1191]: time="2024-07-02T10:56:45.950133732Z" level=info msg="CreateContainer within sandbox \"6fe8c550d082c967cc08b87b930b5e32afd05dc96948b4d37f64530414ad01d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96213c47bc047a3fd82fd64246ecff9c801126c5abe6622c546af89d8fb6da8e\""
Jul 2 10:56:45.950579 env[1191]: time="2024-07-02T10:56:45.950526943Z" level=info msg="StartContainer for \"96213c47bc047a3fd82fd64246ecff9c801126c5abe6622c546af89d8fb6da8e\""
Jul 2 10:56:45.998403 systemd[1]: Started cri-containerd-96213c47bc047a3fd82fd64246ecff9c801126c5abe6622c546af89d8fb6da8e.scope.
Jul 2 10:56:46.012991 systemd[1]: Started cri-containerd-ee158a7ba0ce4211e757a7c9b16c19276c758839e6b783547110c43a1f2ca216.scope.
Jul 2 10:56:46.095237 env[1191]: time="2024-07-02T10:56:46.095152794Z" level=info msg="StartContainer for \"96213c47bc047a3fd82fd64246ecff9c801126c5abe6622c546af89d8fb6da8e\" returns successfully"
Jul 2 10:56:46.099377 env[1191]: time="2024-07-02T10:56:46.098550337Z" level=info msg="StartContainer for \"ee158a7ba0ce4211e757a7c9b16c19276c758839e6b783547110c43a1f2ca216\" returns successfully"
Jul 2 10:56:46.746650 kubelet[2036]: I0702 10:56:46.746602 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zq4nt" podStartSLOduration=28.7465049 podStartE2EDuration="28.7465049s" podCreationTimestamp="2024-07-02 10:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:46.745541149 +0000 UTC m=+42.430951136" watchObservedRunningTime="2024-07-02 10:56:46.7465049 +0000 UTC m=+42.431914880"
Jul 2 10:56:46.763934 kubelet[2036]: I0702 10:56:46.763888 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m6lq4" podStartSLOduration=28.763817348 podStartE2EDuration="28.763817348s" podCreationTimestamp="2024-07-02 10:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:56:46.762106045 +0000 UTC m=+42.447516039" watchObservedRunningTime="2024-07-02 10:56:46.763817348 +0000 UTC m=+42.449227335"
Jul 2 10:57:17.705651 systemd[1]: Started sshd@7-10.230.70.110:22-147.75.109.163:59190.service.
Jul 2 10:57:18.598467 sshd[3375]: Accepted publickey for core from 147.75.109.163 port 59190 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:18.602218 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:18.611391 systemd-logind[1183]: New session 8 of user core.
Jul 2 10:57:18.612534 systemd[1]: Started session-8.scope.
Jul 2 10:57:19.382235 sshd[3375]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:19.387757 systemd[1]: sshd@7-10.230.70.110:22-147.75.109.163:59190.service: Deactivated successfully.
Jul 2 10:57:19.389202 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 10:57:19.390268 systemd-logind[1183]: Session 8 logged out. Waiting for processes to exit.
Jul 2 10:57:19.391813 systemd-logind[1183]: Removed session 8.
Jul 2 10:57:24.525789 systemd[1]: Started sshd@8-10.230.70.110:22-147.75.109.163:58584.service.
Jul 2 10:57:25.396926 sshd[3392]: Accepted publickey for core from 147.75.109.163 port 58584 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:25.399046 sshd[3392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:25.405913 systemd-logind[1183]: New session 9 of user core.
Jul 2 10:57:25.406810 systemd[1]: Started session-9.scope.
Jul 2 10:57:26.120724 sshd[3392]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:26.124751 systemd[1]: sshd@8-10.230.70.110:22-147.75.109.163:58584.service: Deactivated successfully.
Jul 2 10:57:26.125970 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 10:57:26.126891 systemd-logind[1183]: Session 9 logged out. Waiting for processes to exit.
Jul 2 10:57:26.128414 systemd-logind[1183]: Removed session 9.
Jul 2 10:57:31.266490 systemd[1]: Started sshd@9-10.230.70.110:22-147.75.109.163:58598.service.
Jul 2 10:57:32.143005 sshd[3405]: Accepted publickey for core from 147.75.109.163 port 58598 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:32.145114 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:32.154091 systemd[1]: Started session-10.scope.
Jul 2 10:57:32.154826 systemd-logind[1183]: New session 10 of user core.
Jul 2 10:57:32.853338 sshd[3405]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:32.857127 systemd[1]: sshd@9-10.230.70.110:22-147.75.109.163:58598.service: Deactivated successfully.
Jul 2 10:57:32.858206 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 10:57:32.860159 systemd-logind[1183]: Session 10 logged out. Waiting for processes to exit.
Jul 2 10:57:32.861762 systemd-logind[1183]: Removed session 10.
Jul 2 10:57:38.001499 systemd[1]: Started sshd@10-10.230.70.110:22-147.75.109.163:49010.service.
Jul 2 10:57:38.872907 sshd[3418]: Accepted publickey for core from 147.75.109.163 port 49010 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:38.874558 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:38.882373 systemd[1]: Started session-11.scope.
Jul 2 10:57:38.882906 systemd-logind[1183]: New session 11 of user core.
Jul 2 10:57:39.584993 sshd[3418]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:39.588988 systemd-logind[1183]: Session 11 logged out. Waiting for processes to exit.
Jul 2 10:57:39.589914 systemd[1]: sshd@10-10.230.70.110:22-147.75.109.163:49010.service: Deactivated successfully.
Jul 2 10:57:39.591030 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 10:57:39.592200 systemd-logind[1183]: Removed session 11.
Jul 2 10:57:39.731594 systemd[1]: Started sshd@11-10.230.70.110:22-147.75.109.163:49022.service.
Jul 2 10:57:40.606064 sshd[3430]: Accepted publickey for core from 147.75.109.163 port 49022 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:40.608372 sshd[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:40.615705 systemd[1]: Started session-12.scope.
Jul 2 10:57:40.616630 systemd-logind[1183]: New session 12 of user core.
Jul 2 10:57:41.383363 sshd[3430]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:41.390523 systemd[1]: sshd@11-10.230.70.110:22-147.75.109.163:49022.service: Deactivated successfully.
Jul 2 10:57:41.391647 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 10:57:41.392645 systemd-logind[1183]: Session 12 logged out. Waiting for processes to exit.
Jul 2 10:57:41.394339 systemd-logind[1183]: Removed session 12.
Jul 2 10:57:41.523433 systemd[1]: Started sshd@12-10.230.70.110:22-147.75.109.163:49036.service.
Jul 2 10:57:42.397338 sshd[3439]: Accepted publickey for core from 147.75.109.163 port 49036 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:42.399745 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:42.407008 systemd-logind[1183]: New session 13 of user core.
Jul 2 10:57:42.407355 systemd[1]: Started session-13.scope.
Jul 2 10:57:43.102889 sshd[3439]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:43.107033 systemd[1]: sshd@12-10.230.70.110:22-147.75.109.163:49036.service: Deactivated successfully.
Jul 2 10:57:43.108141 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 10:57:43.109095 systemd-logind[1183]: Session 13 logged out. Waiting for processes to exit.
Jul 2 10:57:43.110488 systemd-logind[1183]: Removed session 13.
Jul 2 10:57:48.252999 systemd[1]: Started sshd@13-10.230.70.110:22-147.75.109.163:52390.service.
Jul 2 10:57:49.131322 sshd[3451]: Accepted publickey for core from 147.75.109.163 port 52390 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:49.133916 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:49.143137 systemd-logind[1183]: New session 14 of user core.
Jul 2 10:57:49.144320 systemd[1]: Started session-14.scope.
Jul 2 10:57:49.850233 sshd[3451]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:49.854627 systemd[1]: sshd@13-10.230.70.110:22-147.75.109.163:52390.service: Deactivated successfully.
Jul 2 10:57:49.856084 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 10:57:49.857016 systemd-logind[1183]: Session 14 logged out. Waiting for processes to exit.
Jul 2 10:57:49.858423 systemd-logind[1183]: Removed session 14.
Jul 2 10:57:54.997012 systemd[1]: Started sshd@14-10.230.70.110:22-147.75.109.163:33544.service.
Jul 2 10:57:55.864365 sshd[3465]: Accepted publickey for core from 147.75.109.163 port 33544 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:55.867163 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:55.874569 systemd[1]: Started session-15.scope.
Jul 2 10:57:55.875524 systemd-logind[1183]: New session 15 of user core.
Jul 2 10:57:56.578978 sshd[3465]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:56.582950 systemd[1]: sshd@14-10.230.70.110:22-147.75.109.163:33544.service: Deactivated successfully.
Jul 2 10:57:56.584139 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 10:57:56.585548 systemd-logind[1183]: Session 15 logged out. Waiting for processes to exit.
Jul 2 10:57:56.586679 systemd-logind[1183]: Removed session 15.
Jul 2 10:57:56.723112 systemd[1]: Started sshd@15-10.230.70.110:22-147.75.109.163:33552.service.
Jul 2 10:57:57.587614 sshd[3477]: Accepted publickey for core from 147.75.109.163 port 33552 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:57.590299 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:57.599041 systemd[1]: Started session-16.scope.
Jul 2 10:57:57.599589 systemd-logind[1183]: New session 16 of user core.
Jul 2 10:57:58.802816 sshd[3477]: pam_unix(sshd:session): session closed for user core
Jul 2 10:57:58.807808 systemd[1]: sshd@15-10.230.70.110:22-147.75.109.163:33552.service: Deactivated successfully.
Jul 2 10:57:58.808865 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 10:57:58.809625 systemd-logind[1183]: Session 16 logged out. Waiting for processes to exit.
Jul 2 10:57:58.811012 systemd-logind[1183]: Removed session 16.
Jul 2 10:57:58.948175 systemd[1]: Started sshd@16-10.230.70.110:22-147.75.109.163:33568.service.
Jul 2 10:57:59.821146 sshd[3487]: Accepted publickey for core from 147.75.109.163 port 33568 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:57:59.823611 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:57:59.829435 systemd-logind[1183]: New session 17 of user core.
Jul 2 10:57:59.833304 systemd[1]: Started session-17.scope.
Jul 2 10:58:02.529083 sshd[3487]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:02.536275 systemd[1]: sshd@16-10.230.70.110:22-147.75.109.163:33568.service: Deactivated successfully.
Jul 2 10:58:02.537574 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 10:58:02.540299 systemd-logind[1183]: Session 17 logged out. Waiting for processes to exit.
Jul 2 10:58:02.542355 systemd-logind[1183]: Removed session 17.
Jul 2 10:58:02.675339 systemd[1]: Started sshd@17-10.230.70.110:22-147.75.109.163:40622.service.
Jul 2 10:58:03.557895 sshd[3504]: Accepted publickey for core from 147.75.109.163 port 40622 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:03.559807 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:03.568282 systemd[1]: Started session-18.scope.
Jul 2 10:58:03.569336 systemd-logind[1183]: New session 18 of user core.
Jul 2 10:58:04.476286 sshd[3504]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:04.480536 systemd[1]: sshd@17-10.230.70.110:22-147.75.109.163:40622.service: Deactivated successfully.
Jul 2 10:58:04.481762 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 10:58:04.482579 systemd-logind[1183]: Session 18 logged out. Waiting for processes to exit.
Jul 2 10:58:04.483917 systemd-logind[1183]: Removed session 18.
Jul 2 10:58:04.619116 systemd[1]: Started sshd@18-10.230.70.110:22-147.75.109.163:40630.service.
Jul 2 10:58:05.487973 sshd[3516]: Accepted publickey for core from 147.75.109.163 port 40630 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:05.489593 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:05.496885 systemd[1]: Started session-19.scope.
Jul 2 10:58:05.497626 systemd-logind[1183]: New session 19 of user core.
Jul 2 10:58:06.197390 sshd[3516]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:06.201928 systemd[1]: sshd@18-10.230.70.110:22-147.75.109.163:40630.service: Deactivated successfully.
Jul 2 10:58:06.203283 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 10:58:06.204183 systemd-logind[1183]: Session 19 logged out. Waiting for processes to exit.
Jul 2 10:58:06.205389 systemd-logind[1183]: Removed session 19.
Jul 2 10:58:11.343219 systemd[1]: Started sshd@19-10.230.70.110:22-147.75.109.163:40634.service.
Jul 2 10:58:12.210297 sshd[3528]: Accepted publickey for core from 147.75.109.163 port 40634 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:12.213012 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:12.220748 systemd[1]: Started session-20.scope.
Jul 2 10:58:12.221316 systemd-logind[1183]: New session 20 of user core.
Jul 2 10:58:12.904886 sshd[3528]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:12.909092 systemd-logind[1183]: Session 20 logged out. Waiting for processes to exit.
Jul 2 10:58:12.909566 systemd[1]: sshd@19-10.230.70.110:22-147.75.109.163:40634.service: Deactivated successfully.
Jul 2 10:58:12.910631 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 10:58:12.912095 systemd-logind[1183]: Removed session 20.
Jul 2 10:58:18.050322 systemd[1]: Started sshd@20-10.230.70.110:22-147.75.109.163:54140.service.
Jul 2 10:58:18.925207 sshd[3544]: Accepted publickey for core from 147.75.109.163 port 54140 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:18.928648 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:18.936979 systemd-logind[1183]: New session 21 of user core.
Jul 2 10:58:18.937933 systemd[1]: Started session-21.scope.
Jul 2 10:58:19.625361 sshd[3544]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:19.630135 systemd[1]: sshd@20-10.230.70.110:22-147.75.109.163:54140.service: Deactivated successfully.
Jul 2 10:58:19.631455 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 10:58:19.632371 systemd-logind[1183]: Session 21 logged out. Waiting for processes to exit.
Jul 2 10:58:19.634333 systemd-logind[1183]: Removed session 21.
Jul 2 10:58:24.771979 systemd[1]: Started sshd@21-10.230.70.110:22-147.75.109.163:33750.service.
Jul 2 10:58:25.643544 sshd[3558]: Accepted publickey for core from 147.75.109.163 port 33750 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:25.645920 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:25.652417 systemd-logind[1183]: New session 22 of user core.
Jul 2 10:58:25.654197 systemd[1]: Started session-22.scope.
Jul 2 10:58:26.382750 sshd[3558]: pam_unix(sshd:session): session closed for user core
Jul 2 10:58:26.386616 systemd-logind[1183]: Session 22 logged out. Waiting for processes to exit.
Jul 2 10:58:26.387672 systemd[1]: sshd@21-10.230.70.110:22-147.75.109.163:33750.service: Deactivated successfully.
Jul 2 10:58:26.388646 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 10:58:26.389683 systemd-logind[1183]: Removed session 22.
Jul 2 10:58:26.525984 systemd[1]: Started sshd@22-10.230.70.110:22-147.75.109.163:33754.service.
Jul 2 10:58:27.406484 sshd[3570]: Accepted publickey for core from 147.75.109.163 port 33754 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:58:27.409022 sshd[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:58:27.415910 systemd-logind[1183]: New session 23 of user core.
Jul 2 10:58:27.416906 systemd[1]: Started session-23.scope.
Jul 2 10:58:29.332886 env[1191]: time="2024-07-02T10:58:29.332666360Z" level=info msg="StopContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" with timeout 30 (s)"
Jul 2 10:58:29.334876 env[1191]: time="2024-07-02T10:58:29.334211414Z" level=info msg="Stop container \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" with signal terminated"
Jul 2 10:58:29.368500 systemd[1]: run-containerd-runc-k8s.io-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8-runc.1ldHXv.mount: Deactivated successfully.
Jul 2 10:58:29.370521 systemd[1]: cri-containerd-9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd.scope: Deactivated successfully.
Jul 2 10:58:29.411836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd-rootfs.mount: Deactivated successfully.
Jul 2 10:58:29.418619 env[1191]: time="2024-07-02T10:58:29.418546322Z" level=info msg="shim disconnected" id=9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd
Jul 2 10:58:29.418947 env[1191]: time="2024-07-02T10:58:29.418903355Z" level=warning msg="cleaning up after shim disconnected" id=9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd namespace=k8s.io
Jul 2 10:58:29.419094 env[1191]: time="2024-07-02T10:58:29.419066688Z" level=info msg="cleaning up dead shim"
Jul 2 10:58:29.423473 env[1191]: time="2024-07-02T10:58:29.423322563Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 10:58:29.430047 env[1191]: time="2024-07-02T10:58:29.430011030Z" level=info msg="StopContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" with timeout 2 (s)"
Jul 2 10:58:29.430567 env[1191]: time="2024-07-02T10:58:29.430532697Z" level=info msg="Stop container \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" with signal terminated"
Jul 2 10:58:29.434616 env[1191]: time="2024-07-02T10:58:29.434582244Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3613 runtime=io.containerd.runc.v2\n"
Jul 2 10:58:29.437966 env[1191]: time="2024-07-02T10:58:29.437929923Z" level=info msg="StopContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" returns successfully"
Jul 2 10:58:29.438932 env[1191]: time="2024-07-02T10:58:29.438900382Z" level=info msg="StopPodSandbox for \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\""
Jul 2 10:58:29.439183 env[1191]: time="2024-07-02T10:58:29.439149497Z" level=info msg="Container to stop \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.443625 systemd-networkd[1021]: lxc_health: Link DOWN
Jul 2 10:58:29.443637 systemd-networkd[1021]: lxc_health: Lost carrier
Jul 2 10:58:29.450754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819-shm.mount: Deactivated successfully.
Jul 2 10:58:29.487604 systemd[1]: cri-containerd-dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819.scope: Deactivated successfully.
Jul 2 10:58:29.498035 systemd[1]: cri-containerd-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8.scope: Deactivated successfully.
Jul 2 10:58:29.498441 systemd[1]: cri-containerd-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8.scope: Consumed 10.010s CPU time.
Jul 2 10:58:29.523865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819-rootfs.mount: Deactivated successfully.
Jul 2 10:58:29.541473 env[1191]: time="2024-07-02T10:58:29.541414199Z" level=info msg="shim disconnected" id=16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8
Jul 2 10:58:29.541862 env[1191]: time="2024-07-02T10:58:29.541819805Z" level=warning msg="cleaning up after shim disconnected" id=16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8 namespace=k8s.io
Jul 2 10:58:29.543098 env[1191]: time="2024-07-02T10:58:29.542901351Z" level=info msg="cleaning up dead shim"
Jul 2 10:58:29.543356 env[1191]: time="2024-07-02T10:58:29.542102441Z" level=info msg="shim disconnected" id=dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819
Jul 2 10:58:29.543585 env[1191]: time="2024-07-02T10:58:29.543555006Z" level=warning msg="cleaning up after shim disconnected" id=dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819 namespace=k8s.io
Jul 2 10:58:29.543789 env[1191]: time="2024-07-02T10:58:29.543764399Z" level=info msg="cleaning up dead shim"
Jul 2 10:58:29.555931 env[1191]: time="2024-07-02T10:58:29.555871036Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3671 runtime=io.containerd.runc.v2\n"
Jul 2 10:58:29.558035 env[1191]: time="2024-07-02T10:58:29.557991935Z" level=info msg="StopContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" returns successfully"
Jul 2 10:58:29.558902 env[1191]: time="2024-07-02T10:58:29.558869623Z" level=info msg="StopPodSandbox for \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\""
Jul 2 10:58:29.559246 env[1191]: time="2024-07-02T10:58:29.559072037Z" level=info msg="Container to stop \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.559246 env[1191]: time="2024-07-02T10:58:29.559118386Z" level=info msg="Container to stop \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.559246 env[1191]: time="2024-07-02T10:58:29.559140777Z" level=info msg="Container to stop \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.559511 env[1191]: time="2024-07-02T10:58:29.559171207Z" level=info msg="Container to stop \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.559683 env[1191]: time="2024-07-02T10:58:29.559652115Z" level=info msg="Container to stop \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:58:29.563402 env[1191]: time="2024-07-02T10:58:29.563348834Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3672 runtime=io.containerd.runc.v2\n"
Jul 2 10:58:29.563787 env[1191]: time="2024-07-02T10:58:29.563751195Z" level=info msg="TearDown network for sandbox \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\" successfully"
Jul 2 10:58:29.563929 env[1191]: time="2024-07-02T10:58:29.563784201Z" level=info msg="StopPodSandbox for \"dcacac4b373178fa01213b1c3accef75e2940d455f378fa55ec4a83af467e819\" returns successfully"
Jul 2 10:58:29.574068 kubelet[2036]: I0702 10:58:29.573883 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-cilium-config-path\") pod \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\" (UID: \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\") "
Jul 2 10:58:29.574068 kubelet[2036]: I0702 10:58:29.573945 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxz6q\" (UniqueName: \"kubernetes.io/projected/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-kube-api-access-kxz6q\") pod \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\" (UID: \"1ae1bfeb-74a8-4215-bd74-3f6923abe07c\") "
Jul 2 10:58:29.583946 systemd[1]: cri-containerd-22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6.scope: Deactivated successfully.
Jul 2 10:58:29.618779 kubelet[2036]: I0702 10:58:29.612535 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ae1bfeb-74a8-4215-bd74-3f6923abe07c" (UID: "1ae1bfeb-74a8-4215-bd74-3f6923abe07c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 10:58:29.618779 kubelet[2036]: I0702 10:58:29.618596 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-kube-api-access-kxz6q" (OuterVolumeSpecName: "kube-api-access-kxz6q") pod "1ae1bfeb-74a8-4215-bd74-3f6923abe07c" (UID: "1ae1bfeb-74a8-4215-bd74-3f6923abe07c"). InnerVolumeSpecName "kube-api-access-kxz6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 10:58:29.635510 env[1191]: time="2024-07-02T10:58:29.635441088Z" level=info msg="shim disconnected" id=22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6
Jul 2 10:58:29.635711 env[1191]: time="2024-07-02T10:58:29.635509544Z" level=warning msg="cleaning up after shim disconnected" id=22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6 namespace=k8s.io
Jul 2 10:58:29.635711 env[1191]: time="2024-07-02T10:58:29.635526409Z" level=info msg="cleaning up dead shim"
Jul 2 10:58:29.647964 env[1191]: time="2024-07-02T10:58:29.647897242Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3717 runtime=io.containerd.runc.v2\n"
Jul 2 10:58:29.648394 env[1191]: time="2024-07-02T10:58:29.648346645Z" level=info msg="TearDown network for sandbox \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" successfully"
Jul 2 10:58:29.648543 env[1191]: time="2024-07-02T10:58:29.648397410Z" level=info msg="StopPodSandbox for \"22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6\" returns successfully"
Jul 2 10:58:29.674754 kubelet[2036]: I0702 10:58:29.674632 2036 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kxz6q\" (UniqueName: \"kubernetes.io/projected/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-kube-api-access-kxz6q\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:58:29.674754 kubelet[2036]: I0702 10:58:29.674706 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ae1bfeb-74a8-4215-bd74-3f6923abe07c-cilium-config-path\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:58:29.734191 kubelet[2036]: E0702 10:58:29.734114 2036 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns
error: cni plugin not initialized" Jul 2 10:58:29.775574 kubelet[2036]: I0702 10:58:29.775505 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-run\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.775754 kubelet[2036]: I0702 10:58:29.775585 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hubble-tls\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.775754 kubelet[2036]: I0702 10:58:29.775649 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-kernel\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.775754 kubelet[2036]: I0702 10:58:29.775698 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-net\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.775754 kubelet[2036]: I0702 10:58:29.775740 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-cgroup\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776030 kubelet[2036]: I0702 10:58:29.775797 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29gfb\" (UniqueName: 
\"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-kube-api-access-29gfb\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776030 kubelet[2036]: I0702 10:58:29.775827 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-bpf-maps\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776030 kubelet[2036]: I0702 10:58:29.775884 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-lib-modules\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776030 kubelet[2036]: I0702 10:58:29.775921 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-config-path\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776030 kubelet[2036]: I0702 10:58:29.775988 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-etc-cni-netd\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776317 kubelet[2036]: I0702 10:58:29.776057 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1359ec9-e740-4152-97d6-5e1b98b2bf55-clustermesh-secrets\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776317 kubelet[2036]: I0702 10:58:29.776087 2036 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hostproc\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776317 kubelet[2036]: I0702 10:58:29.776135 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-xtables-lock\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776317 kubelet[2036]: I0702 10:58:29.776162 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cni-path\") pod \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\" (UID: \"f1359ec9-e740-4152-97d6-5e1b98b2bf55\") " Jul 2 10:58:29.776549 kubelet[2036]: I0702 10:58:29.776493 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.776806 kubelet[2036]: I0702 10:58:29.776649 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.776806 kubelet[2036]: I0702 10:58:29.776677 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.776806 kubelet[2036]: I0702 10:58:29.776647 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.776806 kubelet[2036]: I0702 10:58:29.776715 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.776806 kubelet[2036]: I0702 10:58:29.776744 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.777361 kubelet[2036]: I0702 10:58:29.777186 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.777361 kubelet[2036]: I0702 10:58:29.777305 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.777533 kubelet[2036]: I0702 10:58:29.777391 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.777533 kubelet[2036]: I0702 10:58:29.777493 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:29.782595 kubelet[2036]: I0702 10:58:29.782563 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1359ec9-e740-4152-97d6-5e1b98b2bf55-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:58:29.783716 kubelet[2036]: I0702 10:58:29.783638 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:58:29.785347 kubelet[2036]: I0702 10:58:29.785317 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:58:29.787726 kubelet[2036]: I0702 10:58:29.787694 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-kube-api-access-29gfb" (OuterVolumeSpecName: "kube-api-access-29gfb") pod "f1359ec9-e740-4152-97d6-5e1b98b2bf55" (UID: "f1359ec9-e740-4152-97d6-5e1b98b2bf55"). InnerVolumeSpecName "kube-api-access-29gfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:58:29.878713 kubelet[2036]: I0702 10:58:29.877150 2036 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hubble-tls\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.878928 kubelet[2036]: I0702 10:58:29.878899 2036 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-kernel\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879026 kubelet[2036]: I0702 10:58:29.878932 2036 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-host-proc-sys-net\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879026 kubelet[2036]: I0702 10:58:29.878952 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-cgroup\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879026 kubelet[2036]: I0702 10:58:29.878979 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-run\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879026 kubelet[2036]: I0702 10:58:29.879001 2036 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-29gfb\" (UniqueName: \"kubernetes.io/projected/f1359ec9-e740-4152-97d6-5e1b98b2bf55-kube-api-access-29gfb\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879026 kubelet[2036]: I0702 10:58:29.879017 2036 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-bpf-maps\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879033 2036 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-lib-modules\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879050 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cilium-config-path\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879065 2036 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-etc-cni-netd\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879081 2036 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1359ec9-e740-4152-97d6-5e1b98b2bf55-clustermesh-secrets\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879096 2036 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-hostproc\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879111 2036 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-xtables-lock\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:29.879324 kubelet[2036]: I0702 10:58:29.879137 2036 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f1359ec9-e740-4152-97d6-5e1b98b2bf55-cni-path\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:30.020994 kubelet[2036]: I0702 10:58:30.020929 2036 scope.go:117] "RemoveContainer" containerID="16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8" Jul 2 10:58:30.024679 systemd[1]: Removed slice kubepods-burstable-podf1359ec9_e740_4152_97d6_5e1b98b2bf55.slice. Jul 2 10:58:30.024822 systemd[1]: kubepods-burstable-podf1359ec9_e740_4152_97d6_5e1b98b2bf55.slice: Consumed 10.184s CPU time. Jul 2 10:58:30.034666 env[1191]: time="2024-07-02T10:58:30.034613560Z" level=info msg="RemoveContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\"" Jul 2 10:58:30.041564 systemd[1]: Removed slice kubepods-besteffort-pod1ae1bfeb_74a8_4215_bd74_3f6923abe07c.slice. Jul 2 10:58:30.042481 env[1191]: time="2024-07-02T10:58:30.042443223Z" level=info msg="RemoveContainer for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" returns successfully" Jul 2 10:58:30.042893 kubelet[2036]: I0702 10:58:30.042818 2036 scope.go:117] "RemoveContainer" containerID="20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456" Jul 2 10:58:30.044226 env[1191]: time="2024-07-02T10:58:30.044088707Z" level=info msg="RemoveContainer for \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\"" Jul 2 10:58:30.048025 env[1191]: time="2024-07-02T10:58:30.046875206Z" level=info msg="RemoveContainer for \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\" returns successfully" Jul 2 10:58:30.048125 kubelet[2036]: I0702 10:58:30.047052 2036 scope.go:117] "RemoveContainer" containerID="2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb" Jul 2 10:58:30.053359 env[1191]: time="2024-07-02T10:58:30.053320261Z" level=info msg="RemoveContainer for \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\"" Jul 2 10:58:30.060963 env[1191]: 
time="2024-07-02T10:58:30.059252695Z" level=info msg="RemoveContainer for \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\" returns successfully" Jul 2 10:58:30.061329 kubelet[2036]: I0702 10:58:30.061297 2036 scope.go:117] "RemoveContainer" containerID="563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc" Jul 2 10:58:30.066908 env[1191]: time="2024-07-02T10:58:30.066866681Z" level=info msg="RemoveContainer for \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\"" Jul 2 10:58:30.070723 env[1191]: time="2024-07-02T10:58:30.070675833Z" level=info msg="RemoveContainer for \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\" returns successfully" Jul 2 10:58:30.071144 kubelet[2036]: I0702 10:58:30.071044 2036 scope.go:117] "RemoveContainer" containerID="93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c" Jul 2 10:58:30.073867 env[1191]: time="2024-07-02T10:58:30.073795539Z" level=info msg="RemoveContainer for \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\"" Jul 2 10:58:30.079158 env[1191]: time="2024-07-02T10:58:30.079096560Z" level=info msg="RemoveContainer for \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\" returns successfully" Jul 2 10:58:30.079482 kubelet[2036]: I0702 10:58:30.079451 2036 scope.go:117] "RemoveContainer" containerID="16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8" Jul 2 10:58:30.079884 env[1191]: time="2024-07-02T10:58:30.079720303Z" level=error msg="ContainerStatus for \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\": not found" Jul 2 10:58:30.082277 kubelet[2036]: E0702 10:58:30.081817 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\": not found" containerID="16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8" Jul 2 10:58:30.082277 kubelet[2036]: I0702 10:58:30.082024 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8"} err="failed to get container status \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\": rpc error: code = NotFound desc = an error occurred when try to find container \"16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8\": not found" Jul 2 10:58:30.082277 kubelet[2036]: I0702 10:58:30.082104 2036 scope.go:117] "RemoveContainer" containerID="20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456" Jul 2 10:58:30.083161 env[1191]: time="2024-07-02T10:58:30.083010630Z" level=error msg="ContainerStatus for \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\": not found" Jul 2 10:58:30.083532 kubelet[2036]: E0702 10:58:30.083507 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\": not found" containerID="20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456" Jul 2 10:58:30.083633 kubelet[2036]: I0702 10:58:30.083549 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456"} err="failed to get container status \"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"20849e2ef393381ee796ddd494f167a163386125e4fa8cd3e6fb08ab283ed456\": not found" Jul 2 10:58:30.083633 kubelet[2036]: I0702 10:58:30.083570 2036 scope.go:117] "RemoveContainer" containerID="2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb" Jul 2 10:58:30.083881 env[1191]: time="2024-07-02T10:58:30.083800172Z" level=error msg="ContainerStatus for \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\": not found" Jul 2 10:58:30.084535 kubelet[2036]: E0702 10:58:30.084234 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\": not found" containerID="2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb" Jul 2 10:58:30.084535 kubelet[2036]: I0702 10:58:30.084316 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb"} err="failed to get container status \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f455ffe92fed5a2266cc0b56561def27552fc57f75271e2961e9ea29eb77ecb\": not found" Jul 2 10:58:30.084535 kubelet[2036]: I0702 10:58:30.084345 2036 scope.go:117] "RemoveContainer" containerID="563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc" Jul 2 10:58:30.084783 env[1191]: time="2024-07-02T10:58:30.084638178Z" level=error msg="ContainerStatus for \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\": 
not found" Jul 2 10:58:30.085313 kubelet[2036]: E0702 10:58:30.085030 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\": not found" containerID="563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc" Jul 2 10:58:30.085313 kubelet[2036]: I0702 10:58:30.085081 2036 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc"} err="failed to get container status \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"563ce357c588b53e12fa0745d0281381fd20a1be2076e775cfc0f76462bc87fc\": not found" Jul 2 10:58:30.085313 kubelet[2036]: I0702 10:58:30.085146 2036 scope.go:117] "RemoveContainer" containerID="93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c" Jul 2 10:58:30.085750 env[1191]: time="2024-07-02T10:58:30.085346848Z" level=error msg="ContainerStatus for \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\": not found" Jul 2 10:58:30.086340 kubelet[2036]: E0702 10:58:30.086106 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\": not found" containerID="93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c" Jul 2 10:58:30.086340 kubelet[2036]: I0702 10:58:30.086141 2036 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c"} err="failed to get container status \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"93d6d5ac37ac37d188005bec9e8dc385b7041bb32177ad75ef9a438d544bbf0c\": not found" Jul 2 10:58:30.086340 kubelet[2036]: I0702 10:58:30.086202 2036 scope.go:117] "RemoveContainer" containerID="9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd" Jul 2 10:58:30.087756 env[1191]: time="2024-07-02T10:58:30.087703721Z" level=info msg="RemoveContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\"" Jul 2 10:58:30.090649 env[1191]: time="2024-07-02T10:58:30.090590342Z" level=info msg="RemoveContainer for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" returns successfully" Jul 2 10:58:30.090827 kubelet[2036]: I0702 10:58:30.090791 2036 scope.go:117] "RemoveContainer" containerID="9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd" Jul 2 10:58:30.091163 env[1191]: time="2024-07-02T10:58:30.091097825Z" level=error msg="ContainerStatus for \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\": not found" Jul 2 10:58:30.091502 kubelet[2036]: E0702 10:58:30.091469 2036 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\": not found" containerID="9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd" Jul 2 10:58:30.091589 kubelet[2036]: I0702 10:58:30.091526 2036 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd"} err="failed to get container status \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a40da0b43081c3f7b9f7dd7e031dcf63cb005b50983e42b5245e00b48919ccd\": not found" Jul 2 10:58:30.363140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e59c4a3fcb2428c514bd708680af09fd7a5c0d60a3472a02f04181834a9fe8-rootfs.mount: Deactivated successfully. Jul 2 10:58:30.363298 systemd[1]: var-lib-kubelet-pods-1ae1bfeb\x2d74a8\x2d4215\x2dbd74\x2d3f6923abe07c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxz6q.mount: Deactivated successfully. Jul 2 10:58:30.363434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6-rootfs.mount: Deactivated successfully. Jul 2 10:58:30.363546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22a6202f063f6cfcb643f5e2df673a6b1840e220bb1ab29bd90e347da40f7ad6-shm.mount: Deactivated successfully. Jul 2 10:58:30.363641 systemd[1]: var-lib-kubelet-pods-f1359ec9\x2de740\x2d4152\x2d97d6\x2d5e1b98b2bf55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29gfb.mount: Deactivated successfully. Jul 2 10:58:30.363742 systemd[1]: var-lib-kubelet-pods-f1359ec9\x2de740\x2d4152\x2d97d6\x2d5e1b98b2bf55-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:58:30.363857 systemd[1]: var-lib-kubelet-pods-f1359ec9\x2de740\x2d4152\x2d97d6\x2d5e1b98b2bf55-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 10:58:30.519022 kubelet[2036]: I0702 10:58:30.518974 2036 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1ae1bfeb-74a8-4215-bd74-3f6923abe07c" path="/var/lib/kubelet/pods/1ae1bfeb-74a8-4215-bd74-3f6923abe07c/volumes" Jul 2 10:58:30.520673 kubelet[2036]: I0702 10:58:30.520649 2036 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" path="/var/lib/kubelet/pods/f1359ec9-e740-4152-97d6-5e1b98b2bf55/volumes" Jul 2 10:58:31.407483 sshd[3570]: pam_unix(sshd:session): session closed for user core Jul 2 10:58:31.412029 systemd-logind[1183]: Session 23 logged out. Waiting for processes to exit. Jul 2 10:58:31.412370 systemd[1]: sshd@22-10.230.70.110:22-147.75.109.163:33754.service: Deactivated successfully. Jul 2 10:58:31.413581 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 10:58:31.414765 systemd-logind[1183]: Removed session 23. Jul 2 10:58:31.551731 systemd[1]: Started sshd@23-10.230.70.110:22-147.75.109.163:33762.service. Jul 2 10:58:32.415940 sshd[3734]: Accepted publickey for core from 147.75.109.163 port 33762 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:58:32.418616 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:58:32.426222 systemd-logind[1183]: New session 24 of user core. Jul 2 10:58:32.426949 systemd[1]: Started session-24.scope. 
Jul 2 10:58:33.753879 kubelet[2036]: I0702 10:58:33.753796 2036 topology_manager.go:215] "Topology Admit Handler" podUID="91f0970d-7da0-497b-bd15-185f0853dbc0" podNamespace="kube-system" podName="cilium-bmwrm" Jul 2 10:58:33.755456 kubelet[2036]: E0702 10:58:33.755416 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="mount-cgroup" Jul 2 10:58:33.755597 kubelet[2036]: E0702 10:58:33.755575 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="apply-sysctl-overwrites" Jul 2 10:58:33.755754 kubelet[2036]: E0702 10:58:33.755732 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="mount-bpf-fs" Jul 2 10:58:33.755926 kubelet[2036]: E0702 10:58:33.755904 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="clean-cilium-state" Jul 2 10:58:33.756081 kubelet[2036]: E0702 10:58:33.756059 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="cilium-agent" Jul 2 10:58:33.756221 kubelet[2036]: E0702 10:58:33.756200 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1ae1bfeb-74a8-4215-bd74-3f6923abe07c" containerName="cilium-operator" Jul 2 10:58:33.756451 kubelet[2036]: I0702 10:58:33.756429 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1359ec9-e740-4152-97d6-5e1b98b2bf55" containerName="cilium-agent" Jul 2 10:58:33.756595 kubelet[2036]: I0702 10:58:33.756574 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae1bfeb-74a8-4215-bd74-3f6923abe07c" containerName="cilium-operator" Jul 2 10:58:33.766960 systemd[1]: Created slice kubepods-burstable-pod91f0970d_7da0_497b_bd15_185f0853dbc0.slice. 
Jul 2 10:58:33.785725 kubelet[2036]: W0702 10:58:33.785677 2036 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-f8jck.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-f8jck.gb1.brightbox.com' and this object Jul 2 10:58:33.786075 kubelet[2036]: E0702 10:58:33.786051 2036 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-f8jck.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-f8jck.gb1.brightbox.com' and this object Jul 2 10:58:33.787748 kubelet[2036]: W0702 10:58:33.787712 2036 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-f8jck.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-f8jck.gb1.brightbox.com' and this object Jul 2 10:58:33.787872 kubelet[2036]: E0702 10:58:33.787755 2036 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-f8jck.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-f8jck.gb1.brightbox.com' and this object Jul 2 10:58:33.890510 sshd[3734]: pam_unix(sshd:session): session closed for user core Jul 2 10:58:33.895944 systemd[1]: sshd@23-10.230.70.110:22-147.75.109.163:33762.service: Deactivated successfully. Jul 2 10:58:33.897025 systemd[1]: session-24.scope: Deactivated successfully. 
Jul 2 10:58:33.898451 systemd-logind[1183]: Session 24 logged out. Waiting for processes to exit. Jul 2 10:58:33.899743 systemd-logind[1183]: Removed session 24. Jul 2 10:58:33.900212 kubelet[2036]: I0702 10:58:33.899692 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-hostproc\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900320 kubelet[2036]: I0702 10:58:33.900257 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-net\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900320 kubelet[2036]: I0702 10:58:33.900303 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-kernel\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900447 kubelet[2036]: I0702 10:58:33.900346 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-etc-cni-netd\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900447 kubelet[2036]: I0702 10:58:33.900390 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-clustermesh-secrets\") pod \"cilium-bmwrm\" (UID: 
\"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900447 kubelet[2036]: I0702 10:58:33.900420 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-lib-modules\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900447 kubelet[2036]: I0702 10:58:33.900447 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-xtables-lock\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900477 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cni-path\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900505 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6vb\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-kube-api-access-8g6vb\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900533 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-bpf-maps\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900562 2036 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-cgroup\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900596 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.900688 kubelet[2036]: I0702 10:58:33.900630 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-ipsec-secrets\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.901114 kubelet[2036]: I0702 10:58:33.900662 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-run\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:33.901114 kubelet[2036]: I0702 10:58:33.900692 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-config-path\") pod \"cilium-bmwrm\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " pod="kube-system/cilium-bmwrm" Jul 2 10:58:34.033833 systemd[1]: Started sshd@24-10.230.70.110:22-147.75.109.163:41282.service. 
Jul 2 10:58:34.735429 kubelet[2036]: E0702 10:58:34.735387 2036 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 10:58:34.900406 sshd[3746]: Accepted publickey for core from 147.75.109.163 port 41282 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:58:34.902338 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:58:34.909708 systemd[1]: Started session-25.scope. Jul 2 10:58:34.910772 systemd-logind[1183]: New session 25 of user core. Jul 2 10:58:35.007731 kubelet[2036]: E0702 10:58:35.007591 2036 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 10:58:35.009605 kubelet[2036]: E0702 10:58:35.008558 2036 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bmwrm: failed to sync secret cache: timed out waiting for the condition Jul 2 10:58:35.010268 kubelet[2036]: E0702 10:58:35.010241 2036 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls podName:91f0970d-7da0-497b-bd15-185f0853dbc0 nodeName:}" failed. No retries permitted until 2024-07-02 10:58:35.509810113 +0000 UTC m=+151.195220094 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls") pod "cilium-bmwrm" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0") : failed to sync secret cache: timed out waiting for the condition Jul 2 10:58:35.556621 env[1191]: time="2024-07-02T10:58:35.556517174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmwrm,Uid:91f0970d-7da0-497b-bd15-185f0853dbc0,Namespace:kube-system,Attempt:0,}" Jul 2 10:58:35.579904 env[1191]: time="2024-07-02T10:58:35.579683245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:58:35.579904 env[1191]: time="2024-07-02T10:58:35.579741261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:58:35.579904 env[1191]: time="2024-07-02T10:58:35.579757841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:58:35.580210 env[1191]: time="2024-07-02T10:58:35.580141223Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4 pid=3765 runtime=io.containerd.runc.v2 Jul 2 10:58:35.616107 systemd[1]: Started cri-containerd-0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4.scope. 
Jul 2 10:58:35.661439 sshd[3746]: pam_unix(sshd:session): session closed for user core Jul 2 10:58:35.666231 env[1191]: time="2024-07-02T10:58:35.666183325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmwrm,Uid:91f0970d-7da0-497b-bd15-185f0853dbc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\"" Jul 2 10:58:35.666449 systemd[1]: sshd@24-10.230.70.110:22-147.75.109.163:41282.service: Deactivated successfully. Jul 2 10:58:35.668032 systemd-logind[1183]: Session 25 logged out. Waiting for processes to exit. Jul 2 10:58:35.668082 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 10:58:35.669976 systemd-logind[1183]: Removed session 25. Jul 2 10:58:35.676103 env[1191]: time="2024-07-02T10:58:35.675483709Z" level=info msg="CreateContainer within sandbox \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:58:35.688466 env[1191]: time="2024-07-02T10:58:35.688407697Z" level=info msg="CreateContainer within sandbox \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\"" Jul 2 10:58:35.691082 env[1191]: time="2024-07-02T10:58:35.691044633Z" level=info msg="StartContainer for \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\"" Jul 2 10:58:35.712018 systemd[1]: Started cri-containerd-136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8.scope. Jul 2 10:58:35.729005 systemd[1]: cri-containerd-136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8.scope: Deactivated successfully. 
Jul 2 10:58:35.746467 env[1191]: time="2024-07-02T10:58:35.746282630Z" level=info msg="shim disconnected" id=136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8 Jul 2 10:58:35.746467 env[1191]: time="2024-07-02T10:58:35.746434945Z" level=warning msg="cleaning up after shim disconnected" id=136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8 namespace=k8s.io Jul 2 10:58:35.746791 env[1191]: time="2024-07-02T10:58:35.746509270Z" level=info msg="cleaning up dead shim" Jul 2 10:58:35.759414 env[1191]: time="2024-07-02T10:58:35.759294855Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3828 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T10:58:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 10:58:35.759828 env[1191]: time="2024-07-02T10:58:35.759660244Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Jul 2 10:58:35.760667 env[1191]: time="2024-07-02T10:58:35.760587525Z" level=error msg="Failed to pipe stdout of container \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\"" error="reading from a closed fifo" Jul 2 10:58:35.760742 env[1191]: time="2024-07-02T10:58:35.760694169Z" level=error msg="Failed to pipe stderr of container \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\"" error="reading from a closed fifo" Jul 2 10:58:35.762056 env[1191]: time="2024-07-02T10:58:35.761993257Z" level=error msg="StartContainer for \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 10:58:35.762432 kubelet[2036]: E0702 10:58:35.762392 2036 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8" Jul 2 10:58:35.768524 kubelet[2036]: E0702 10:58:35.768486 2036 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 10:58:35.768524 kubelet[2036]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 10:58:35.768524 kubelet[2036]: rm /hostbin/cilium-mount Jul 2 10:58:35.768727 kubelet[2036]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8g6vb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bmwrm_kube-system(91f0970d-7da0-497b-bd15-185f0853dbc0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 10:58:35.769035 kubelet[2036]: E0702 10:58:35.769008 2036 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bmwrm" podUID="91f0970d-7da0-497b-bd15-185f0853dbc0" Jul 2 10:58:35.804326 systemd[1]: Started sshd@25-10.230.70.110:22-147.75.109.163:41290.service. Jul 2 10:58:36.059709 env[1191]: time="2024-07-02T10:58:36.059634728Z" level=info msg="StopPodSandbox for \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\"" Jul 2 10:58:36.059940 env[1191]: time="2024-07-02T10:58:36.059738318Z" level=info msg="Container to stop \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:58:36.078104 systemd[1]: cri-containerd-0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4.scope: Deactivated successfully. 
Jul 2 10:58:36.113504 env[1191]: time="2024-07-02T10:58:36.113439260Z" level=info msg="shim disconnected" id=0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4 Jul 2 10:58:36.114008 env[1191]: time="2024-07-02T10:58:36.113978514Z" level=warning msg="cleaning up after shim disconnected" id=0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4 namespace=k8s.io Jul 2 10:58:36.114142 env[1191]: time="2024-07-02T10:58:36.114114549Z" level=info msg="cleaning up dead shim" Jul 2 10:58:36.124438 env[1191]: time="2024-07-02T10:58:36.124372689Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n" Jul 2 10:58:36.124834 env[1191]: time="2024-07-02T10:58:36.124796986Z" level=info msg="TearDown network for sandbox \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\" successfully" Jul 2 10:58:36.124955 env[1191]: time="2024-07-02T10:58:36.124834081Z" level=info msg="StopPodSandbox for \"0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4\" returns successfully" Jul 2 10:58:36.320081 kubelet[2036]: I0702 10:58:36.319934 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-bpf-maps\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320081 kubelet[2036]: I0702 10:58:36.320031 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-ipsec-secrets\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320081 kubelet[2036]: I0702 10:58:36.320061 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-run\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320114 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-net\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320144 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-config-path\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320184 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-lib-modules\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320225 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cni-path\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320275 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g6vb\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-kube-api-access-8g6vb\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320305 2036 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-hostproc\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320378 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-clustermesh-secrets\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320408 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-etc-cni-netd\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320453 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320489 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-kernel\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320536 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-cgroup\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: 
\"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320577 2036 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-xtables-lock\") pod \"91f0970d-7da0-497b-bd15-185f0853dbc0\" (UID: \"91f0970d-7da0-497b-bd15-185f0853dbc0\") " Jul 2 10:58:36.320784 kubelet[2036]: I0702 10:58:36.320706 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.322245 kubelet[2036]: I0702 10:58:36.321565 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.322245 kubelet[2036]: I0702 10:58:36.321849 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.322773 kubelet[2036]: I0702 10:58:36.322668 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.324935 kubelet[2036]: I0702 10:58:36.322897 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.324935 kubelet[2036]: I0702 10:58:36.322951 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.325352 kubelet[2036]: I0702 10:58:36.325297 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.325534 kubelet[2036]: I0702 10:58:36.325374 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.328798 kubelet[2036]: I0702 10:58:36.328764 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:58:36.328909 kubelet[2036]: I0702 10:58:36.328813 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.329004 kubelet[2036]: I0702 10:58:36.328955 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:58:36.329425 kubelet[2036]: I0702 10:58:36.329395 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-kube-api-access-8g6vb" (OuterVolumeSpecName: "kube-api-access-8g6vb") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "kube-api-access-8g6vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:58:36.330539 kubelet[2036]: I0702 10:58:36.330495 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:58:36.331537 kubelet[2036]: I0702 10:58:36.331504 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:58:36.333113 kubelet[2036]: I0702 10:58:36.333072 2036 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "91f0970d-7da0-497b-bd15-185f0853dbc0" (UID: "91f0970d-7da0-497b-bd15-185f0853dbc0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:58:36.421607 kubelet[2036]: I0702 10:58:36.421531 2036 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8g6vb\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-kube-api-access-8g6vb\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.421607 kubelet[2036]: I0702 10:58:36.421600 2036 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-hostproc\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.421607 kubelet[2036]: I0702 10:58:36.421638 2036 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-clustermesh-secrets\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421654 2036 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-etc-cni-netd\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421688 2036 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91f0970d-7da0-497b-bd15-185f0853dbc0-hubble-tls\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421706 2036 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-kernel\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421725 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-cgroup\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421741 2036 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-xtables-lock\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421757 2036 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-host-proc-sys-net\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421773 2036 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-bpf-maps\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421790 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-ipsec-secrets\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421808 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-run\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421824 2036 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-lib-modules\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421851 2036 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/91f0970d-7da0-497b-bd15-185f0853dbc0-cni-path\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.422000 kubelet[2036]: I0702 10:58:36.421867 2036 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91f0970d-7da0-497b-bd15-185f0853dbc0-cilium-config-path\") on node \"srv-f8jck.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:58:36.517997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4-rootfs.mount: Deactivated successfully. Jul 2 10:58:36.518157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0116e690221baed86edcb585d3f248da3d1dc59d34e0d0550c06bb1eaa95d5d4-shm.mount: Deactivated successfully. Jul 2 10:58:36.518298 systemd[1]: var-lib-kubelet-pods-91f0970d\x2d7da0\x2d497b\x2dbd15\x2d185f0853dbc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:58:36.518409 systemd[1]: var-lib-kubelet-pods-91f0970d\x2d7da0\x2d497b\x2dbd15\x2d185f0853dbc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 10:58:36.518508 systemd[1]: var-lib-kubelet-pods-91f0970d\x2d7da0\x2d497b\x2dbd15\x2d185f0853dbc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8g6vb.mount: Deactivated successfully. Jul 2 10:58:36.518612 systemd[1]: var-lib-kubelet-pods-91f0970d\x2d7da0\x2d497b\x2dbd15\x2d185f0853dbc0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 10:58:36.534677 systemd[1]: Removed slice kubepods-burstable-pod91f0970d_7da0_497b_bd15_185f0853dbc0.slice. 
Jul 2 10:58:36.668579 sshd[3843]: Accepted publickey for core from 147.75.109.163 port 41290 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:58:36.670787 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:58:36.676900 systemd-logind[1183]: New session 26 of user core. Jul 2 10:58:36.677630 systemd[1]: Started session-26.scope. Jul 2 10:58:37.059704 kubelet[2036]: I0702 10:58:37.059659 2036 scope.go:117] "RemoveContainer" containerID="136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8" Jul 2 10:58:37.063335 env[1191]: time="2024-07-02T10:58:37.062968513Z" level=info msg="RemoveContainer for \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\"" Jul 2 10:58:37.068733 env[1191]: time="2024-07-02T10:58:37.068693638Z" level=info msg="RemoveContainer for \"136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8\" returns successfully" Jul 2 10:58:37.104488 kubelet[2036]: I0702 10:58:37.104431 2036 topology_manager.go:215] "Topology Admit Handler" podUID="439b6ef3-e9b2-464b-b749-80d75995019d" podNamespace="kube-system" podName="cilium-wpvhg" Jul 2 10:58:37.104488 kubelet[2036]: E0702 10:58:37.104500 2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91f0970d-7da0-497b-bd15-185f0853dbc0" containerName="mount-cgroup" Jul 2 10:58:37.104769 kubelet[2036]: I0702 10:58:37.104540 2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f0970d-7da0-497b-bd15-185f0853dbc0" containerName="mount-cgroup" Jul 2 10:58:37.112860 systemd[1]: Created slice kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice. 
Jul 2 10:58:37.228368 kubelet[2036]: I0702 10:58:37.228300 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-cilium-run\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228368 kubelet[2036]: I0702 10:58:37.228379 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-hostproc\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228414 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/439b6ef3-e9b2-464b-b749-80d75995019d-cilium-config-path\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228449 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/439b6ef3-e9b2-464b-b749-80d75995019d-hubble-tls\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228505 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-etc-cni-netd\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228542 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-xtables-lock\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228574 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-host-proc-sys-kernel\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.228634 kubelet[2036]: I0702 10:58:37.228602 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-cni-path\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228640 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-cilium-cgroup\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228683 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-lib-modules\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228713 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvh9\" (UniqueName: \"kubernetes.io/projected/439b6ef3-e9b2-464b-b749-80d75995019d-kube-api-access-cxvh9\") pod 
\"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228745 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-host-proc-sys-net\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228800 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/439b6ef3-e9b2-464b-b749-80d75995019d-cilium-ipsec-secrets\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228832 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/439b6ef3-e9b2-464b-b749-80d75995019d-bpf-maps\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.229003 kubelet[2036]: I0702 10:58:37.228883 2036 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/439b6ef3-e9b2-464b-b749-80d75995019d-clustermesh-secrets\") pod \"cilium-wpvhg\" (UID: \"439b6ef3-e9b2-464b-b749-80d75995019d\") " pod="kube-system/cilium-wpvhg" Jul 2 10:58:37.420644 env[1191]: time="2024-07-02T10:58:37.418904518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpvhg,Uid:439b6ef3-e9b2-464b-b749-80d75995019d,Namespace:kube-system,Attempt:0,}" Jul 2 10:58:37.440934 env[1191]: time="2024-07-02T10:58:37.440746499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:58:37.440934 env[1191]: time="2024-07-02T10:58:37.440800334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:58:37.440934 env[1191]: time="2024-07-02T10:58:37.440816588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:58:37.446330 env[1191]: time="2024-07-02T10:58:37.443640999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40 pid=3896 runtime=io.containerd.runc.v2 Jul 2 10:58:37.477804 systemd[1]: Started cri-containerd-7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40.scope. Jul 2 10:58:37.550622 env[1191]: time="2024-07-02T10:58:37.550556249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wpvhg,Uid:439b6ef3-e9b2-464b-b749-80d75995019d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\"" Jul 2 10:58:37.569421 env[1191]: time="2024-07-02T10:58:37.569326072Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:58:37.581593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225015034.mount: Deactivated successfully. 
Jul 2 10:58:37.589886 env[1191]: time="2024-07-02T10:58:37.589598604Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039\"" Jul 2 10:58:37.593877 env[1191]: time="2024-07-02T10:58:37.591102678Z" level=info msg="StartContainer for \"08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039\"" Jul 2 10:58:37.622859 systemd[1]: Started cri-containerd-08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039.scope. Jul 2 10:58:37.708463 env[1191]: time="2024-07-02T10:58:37.708410117Z" level=info msg="StartContainer for \"08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039\" returns successfully" Jul 2 10:58:37.731060 systemd[1]: cri-containerd-08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039.scope: Deactivated successfully. Jul 2 10:58:37.770505 env[1191]: time="2024-07-02T10:58:37.770447184Z" level=info msg="shim disconnected" id=08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039 Jul 2 10:58:37.771156 env[1191]: time="2024-07-02T10:58:37.771123893Z" level=warning msg="cleaning up after shim disconnected" id=08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039 namespace=k8s.io Jul 2 10:58:37.771290 env[1191]: time="2024-07-02T10:58:37.771262334Z" level=info msg="cleaning up dead shim" Jul 2 10:58:37.783452 env[1191]: time="2024-07-02T10:58:37.783360247Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3978 runtime=io.containerd.runc.v2\n" Jul 2 10:58:38.072512 env[1191]: time="2024-07-02T10:58:38.071928868Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 10:58:38.093052 
env[1191]: time="2024-07-02T10:58:38.092995073Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae\"" Jul 2 10:58:38.094170 env[1191]: time="2024-07-02T10:58:38.094129939Z" level=info msg="StartContainer for \"dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae\"" Jul 2 10:58:38.105371 kubelet[2036]: I0702 10:58:38.105336 2036 setters.go:568] "Node became not ready" node="srv-f8jck.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T10:58:38Z","lastTransitionTime":"2024-07-02T10:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 10:58:38.121877 systemd[1]: Started cri-containerd-dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae.scope. Jul 2 10:58:38.168020 env[1191]: time="2024-07-02T10:58:38.167967932Z" level=info msg="StartContainer for \"dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae\" returns successfully" Jul 2 10:58:38.187272 systemd[1]: cri-containerd-dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae.scope: Deactivated successfully. 
Jul 2 10:58:38.214416 env[1191]: time="2024-07-02T10:58:38.214348677Z" level=info msg="shim disconnected" id=dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae Jul 2 10:58:38.214884 env[1191]: time="2024-07-02T10:58:38.214822883Z" level=warning msg="cleaning up after shim disconnected" id=dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae namespace=k8s.io Jul 2 10:58:38.215064 env[1191]: time="2024-07-02T10:58:38.215025232Z" level=info msg="cleaning up dead shim" Jul 2 10:58:38.226526 env[1191]: time="2024-07-02T10:58:38.226423893Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n" Jul 2 10:58:38.521480 kubelet[2036]: I0702 10:58:38.520757 2036 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91f0970d-7da0-497b-bd15-185f0853dbc0" path="/var/lib/kubelet/pods/91f0970d-7da0-497b-bd15-185f0853dbc0/volumes" Jul 2 10:58:38.521094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039-rootfs.mount: Deactivated successfully. Jul 2 10:58:38.865137 kubelet[2036]: W0702 10:58:38.864955 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91f0970d_7da0_497b_bd15_185f0853dbc0.slice/cri-containerd-136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8.scope WatchSource:0}: container "136b57dd2c733f206e8d1e59f517c2fe8c1cd4de0edaf729323a41412e7c33a8" in namespace "k8s.io": not found Jul 2 10:58:39.074626 env[1191]: time="2024-07-02T10:58:39.074520371Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 10:58:39.094409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058280263.mount: Deactivated successfully. 
Jul 2 10:58:39.101779 env[1191]: time="2024-07-02T10:58:39.101725360Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306\"" Jul 2 10:58:39.102788 env[1191]: time="2024-07-02T10:58:39.102754641Z" level=info msg="StartContainer for \"7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306\"" Jul 2 10:58:39.135738 systemd[1]: Started cri-containerd-7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306.scope. Jul 2 10:58:39.189294 env[1191]: time="2024-07-02T10:58:39.189238307Z" level=info msg="StartContainer for \"7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306\" returns successfully" Jul 2 10:58:39.196390 systemd[1]: cri-containerd-7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306.scope: Deactivated successfully. Jul 2 10:58:39.232814 env[1191]: time="2024-07-02T10:58:39.232741746Z" level=info msg="shim disconnected" id=7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306 Jul 2 10:58:39.232814 env[1191]: time="2024-07-02T10:58:39.232819445Z" level=warning msg="cleaning up after shim disconnected" id=7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306 namespace=k8s.io Jul 2 10:58:39.233155 env[1191]: time="2024-07-02T10:58:39.232837436Z" level=info msg="cleaning up dead shim" Jul 2 10:58:39.246268 env[1191]: time="2024-07-02T10:58:39.246207955Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4098 runtime=io.containerd.runc.v2\n" Jul 2 10:58:39.520910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306-rootfs.mount: Deactivated successfully. 
Jul 2 10:58:39.737580 kubelet[2036]: E0702 10:58:39.737526 2036 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 10:58:40.078243 env[1191]: time="2024-07-02T10:58:40.078190802Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 10:58:40.094961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081866503.mount: Deactivated successfully. Jul 2 10:58:40.105176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007736929.mount: Deactivated successfully. Jul 2 10:58:40.105473 env[1191]: time="2024-07-02T10:58:40.105424057Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3\"" Jul 2 10:58:40.106539 env[1191]: time="2024-07-02T10:58:40.106503459Z" level=info msg="StartContainer for \"2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3\"" Jul 2 10:58:40.131630 systemd[1]: Started cri-containerd-2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3.scope. Jul 2 10:58:40.170745 systemd[1]: cri-containerd-2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3.scope: Deactivated successfully. 
Jul 2 10:58:40.172049 env[1191]: time="2024-07-02T10:58:40.171704303Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice/cri-containerd-2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3.scope/memory.events\": no such file or directory" Jul 2 10:58:40.174923 env[1191]: time="2024-07-02T10:58:40.174780604Z" level=info msg="StartContainer for \"2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3\" returns successfully" Jul 2 10:58:40.205107 env[1191]: time="2024-07-02T10:58:40.205042544Z" level=info msg="shim disconnected" id=2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3 Jul 2 10:58:40.205107 env[1191]: time="2024-07-02T10:58:40.205105106Z" level=warning msg="cleaning up after shim disconnected" id=2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3 namespace=k8s.io Jul 2 10:58:40.205437 env[1191]: time="2024-07-02T10:58:40.205121467Z" level=info msg="cleaning up dead shim" Jul 2 10:58:40.217052 env[1191]: time="2024-07-02T10:58:40.216985422Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:58:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4158 runtime=io.containerd.runc.v2\n" Jul 2 10:58:41.085018 env[1191]: time="2024-07-02T10:58:41.084954419Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 10:58:41.102773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585267239.mount: Deactivated successfully. 
Jul 2 10:58:41.110726 env[1191]: time="2024-07-02T10:58:41.110673904Z" level=info msg="CreateContainer within sandbox \"7887f8c68492fef85b326d882212c2541e6e198683dd66aeda6942c7ee0cfc40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81\"" Jul 2 10:58:41.111507 env[1191]: time="2024-07-02T10:58:41.111453181Z" level=info msg="StartContainer for \"1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81\"" Jul 2 10:58:41.141191 systemd[1]: Started cri-containerd-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81.scope. Jul 2 10:58:41.194873 env[1191]: time="2024-07-02T10:58:41.191908776Z" level=info msg="StartContainer for \"1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81\" returns successfully" Jul 2 10:58:41.905893 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 10:58:41.979902 kubelet[2036]: W0702 10:58:41.979798 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice/cri-containerd-08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039.scope WatchSource:0}: task 08ee3cf5db91dc70cb7b6be5a0b6dfe98bce710f147f1ecce4e84f0d7c401039 not found: not found Jul 2 10:58:43.506443 systemd[1]: run-containerd-runc-k8s.io-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81-runc.vE5HWn.mount: Deactivated successfully. 
Jul 2 10:58:45.088792 kubelet[2036]: W0702 10:58:45.088716 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice/cri-containerd-dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae.scope WatchSource:0}: task dd8a90c158ccc5ff63bc64c1068bb1acb90145278645d36a2bfbbb9d0bd8c0ae not found: not found Jul 2 10:58:45.292958 systemd-networkd[1021]: lxc_health: Link UP Jul 2 10:58:45.300879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 10:58:45.300583 systemd-networkd[1021]: lxc_health: Gained carrier Jul 2 10:58:45.444540 kubelet[2036]: I0702 10:58:45.444496 2036 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wpvhg" podStartSLOduration=8.444368518 podStartE2EDuration="8.444368518s" podCreationTimestamp="2024-07-02 10:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:58:42.142175176 +0000 UTC m=+157.827585165" watchObservedRunningTime="2024-07-02 10:58:45.444368518 +0000 UTC m=+161.129778505" Jul 2 10:58:45.779443 systemd[1]: run-containerd-runc-k8s.io-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81-runc.AoMgJR.mount: Deactivated successfully. Jul 2 10:58:46.916753 systemd-networkd[1021]: lxc_health: Gained IPv6LL Jul 2 10:58:48.141424 systemd[1]: run-containerd-runc-k8s.io-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81-runc.fUyuus.mount: Deactivated successfully. 
Jul 2 10:58:48.205893 kubelet[2036]: W0702 10:58:48.203557 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice/cri-containerd-7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306.scope WatchSource:0}: task 7a770fd59b29baf12a9c68f6c95e0441e1fa63c9b64a69c3926edfcb8fc54306 not found: not found Jul 2 10:58:50.377070 systemd[1]: run-containerd-runc-k8s.io-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81-runc.YYlbik.mount: Deactivated successfully. Jul 2 10:58:51.320005 kubelet[2036]: W0702 10:58:51.319938 2036 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439b6ef3_e9b2_464b_b749_80d75995019d.slice/cri-containerd-2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3.scope WatchSource:0}: task 2fcf69aa34a32f39cc0b72caa58a2e09704e3f400d038118835677599fd308b3 not found: not found Jul 2 10:58:52.602730 systemd[1]: run-containerd-runc-k8s.io-1fe2376c9d3168506e1345833e09208d24b8b3a04b62dc31e5a6f03e48692d81-runc.D0q6oO.mount: Deactivated successfully. Jul 2 10:58:52.856333 sshd[3843]: pam_unix(sshd:session): session closed for user core Jul 2 10:58:52.861738 systemd[1]: sshd@25-10.230.70.110:22-147.75.109.163:41290.service: Deactivated successfully. Jul 2 10:58:52.862963 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 10:58:52.863893 systemd-logind[1183]: Session 26 logged out. Waiting for processes to exit. Jul 2 10:58:52.865366 systemd-logind[1183]: Removed session 26.