Jul 2 10:28:45.923348 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 10:28:45.923398 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:28:45.923418 kernel: BIOS-provided physical RAM map:
Jul 2 10:28:45.923428 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 10:28:45.923437 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 10:28:45.923446 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 10:28:45.923457 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jul 2 10:28:45.923466 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jul 2 10:28:45.923475 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 2 10:28:45.923485 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 2 10:28:45.923507 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 10:28:45.923518 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 10:28:45.923527 kernel: NX (Execute Disable) protection: active
Jul 2 10:28:45.923536 kernel: SMBIOS 2.8 present.
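[Editor's aside, not part of the boot log: the two ranges the BIOS marks "usable" in the e820 map above account for the VM's RAM. A minimal sketch summing them lands within a few KiB of the "Memory: .../2096616K" total the kernel reports later in this log (the kernel trims a little, e.g. the first page is re-marked reserved).]

```python
# Sum the two e820 ranges marked "usable" above (inclusive end addresses).
usable_ranges = [
    (0x0000000000000000, 0x000000000009fbff),  # low memory below 640K
    (0x0000000000100000, 0x000000007ffdbfff),  # main RAM below 2 GiB
]
total_bytes = sum(end - start + 1 for start, end in usable_ranges)
print(total_bytes // 1024)  # 2096623 KiB, vs. the 2096616K the kernel reports
```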
Jul 2 10:28:45.923548 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jul 2 10:28:45.923558 kernel: Hypervisor detected: KVM
Jul 2 10:28:45.923572 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 10:28:45.923582 kernel: kvm-clock: cpu 0, msr 65192001, primary cpu clock
Jul 2 10:28:45.923593 kernel: kvm-clock: using sched offset of 14457506652 cycles
Jul 2 10:28:45.923604 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 10:28:45.923614 kernel: tsc: Detected 2799.998 MHz processor
Jul 2 10:28:45.923624 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 10:28:45.923634 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 10:28:45.923644 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jul 2 10:28:45.923655 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 10:28:45.923668 kernel: Using GB pages for direct mapping
Jul 2 10:28:45.923679 kernel: ACPI: Early table checksum verification disabled
Jul 2 10:28:45.923689 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jul 2 10:28:45.923699 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923709 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923719 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923729 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jul 2 10:28:45.923739 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923749 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923762 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923772 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 10:28:45.923782 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jul 2 10:28:45.923792 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jul 2 10:28:45.923802 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jul 2 10:28:45.923813 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jul 2 10:28:45.923828 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jul 2 10:28:45.923842 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jul 2 10:28:45.923853 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jul 2 10:28:45.923864 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 10:28:45.923874 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 10:28:45.923885 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 2 10:28:45.923895 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jul 2 10:28:45.923906 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 2 10:28:45.923920 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jul 2 10:28:45.923931 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 2 10:28:45.923941 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jul 2 10:28:45.923951 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 2 10:28:45.923962 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jul 2 10:28:45.923972 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 2 10:28:45.923983 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jul 2 10:28:45.923993 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 2 10:28:45.924004 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jul 2 10:28:45.924014 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 2 10:28:45.924029 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jul 2 10:28:45.924039 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 10:28:45.924050 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 2 10:28:45.924060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jul 2 10:28:45.924071 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jul 2 10:28:45.924082 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jul 2 10:28:45.924093 kernel: Zone ranges:
Jul 2 10:28:45.924103 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 10:28:45.924114 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jul 2 10:28:45.924128 kernel: Normal empty
Jul 2 10:28:45.924139 kernel: Movable zone start for each node
Jul 2 10:28:45.924149 kernel: Early memory node ranges
Jul 2 10:28:45.924160 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 10:28:45.924170 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jul 2 10:28:45.924181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jul 2 10:28:45.924192 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 10:28:45.924244 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 10:28:45.924255 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jul 2 10:28:45.924271 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 10:28:45.924282 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 10:28:45.924292 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 10:28:45.924303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 10:28:45.924314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 10:28:45.924325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 10:28:45.924336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 10:28:45.924346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 10:28:45.924357 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 10:28:45.924371 kernel: TSC deadline timer available
Jul 2 10:28:45.924382 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jul 2 10:28:45.924392 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 2 10:28:45.924403 kernel: Booting paravirtualized kernel on KVM
Jul 2 10:28:45.924414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 10:28:45.924425 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 10:28:45.924435 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 10:28:45.924446 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 10:28:45.924456 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 10:28:45.924471 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Jul 2 10:28:45.924481 kernel: kvm-guest: PV spinlocks enabled
Jul 2 10:28:45.924492 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 10:28:45.924514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jul 2 10:28:45.924525 kernel: Policy zone: DMA32
Jul 2 10:28:45.924538 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 10:28:45.924549 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 10:28:45.924560 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 10:28:45.924575 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 10:28:45.924586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 10:28:45.924597 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 192524K reserved, 0K cma-reserved)
Jul 2 10:28:45.924608 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 10:28:45.924618 kernel: Kernel/User page tables isolation: enabled
Jul 2 10:28:45.924629 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 10:28:45.924640 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 10:28:45.924650 kernel: rcu: Hierarchical RCU implementation.
Jul 2 10:28:45.924662 kernel: rcu: RCU event tracing is enabled.
Jul 2 10:28:45.924676 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 10:28:45.924688 kernel: Rude variant of Tasks RCU enabled.
Jul 2 10:28:45.924698 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 10:28:45.924709 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 10:28:45.924720 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 10:28:45.924731 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jul 2 10:28:45.924741 kernel: random: crng init done
Jul 2 10:28:45.924764 kernel: Console: colour VGA+ 80x25
Jul 2 10:28:45.924775 kernel: printk: console [tty0] enabled
Jul 2 10:28:45.924786 kernel: printk: console [ttyS0] enabled
Jul 2 10:28:45.924798 kernel: ACPI: Core revision 20210730
Jul 2 10:28:45.924809 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 10:28:45.924823 kernel: x2apic enabled
Jul 2 10:28:45.924834 kernel: Switched APIC routing to physical x2apic.
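[Editor's aside, not part of the boot log: the "(order: N, M bytes)" pairs in the hash-table lines above follow size = PAGE_SIZE << order, with one bucket pointer per entry. A minimal sketch, assuming 4 KiB pages and 8-byte pointers as on this x86-64 VM:]

```python
PAGE_SIZE = 4096  # bytes per page on x86-64

def hash_table(order, ptr_size=8):
    """Return (table size in bytes, entry count) for a given allocation order."""
    size = PAGE_SIZE << order
    return size, size // ptr_size

print(hash_table(9))  # (2097152, 262144) -> the Dentry cache line above
print(hash_table(8))  # (1048576, 131072) -> the Inode-cache line above
```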
Jul 2 10:28:45.924845 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jul 2 10:28:45.924857 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jul 2 10:28:45.924868 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 2 10:28:45.924883 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 10:28:45.924894 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 10:28:45.924905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 10:28:45.924916 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 10:28:45.924927 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 10:28:45.924938 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 10:28:45.924949 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 2 10:28:45.924960 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 10:28:45.924971 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 10:28:45.924982 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 10:28:45.924993 kernel: MMIO Stale Data: Unknown: No mitigations
Jul 2 10:28:45.925008 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 2 10:28:45.925019 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 10:28:45.925030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 10:28:45.925041 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 10:28:45.925052 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 10:28:45.925063 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
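[Editor's aside, not part of the boot log: the "5599.99 BogoMIPS (lpj=2799998)" figure above is derived from lpj (loops per jiffy), printed by the kernel as lpj/(500000/HZ) and (lpj/(5000/HZ)) % 100. A sketch of that arithmetic, assuming CONFIG_HZ=1000 for this Flatcar kernel; it also lines up with twice the 2799.998 MHz TSC rate reported earlier.]

```python
HZ = 1000          # assumed tick rate (CONFIG_HZ=1000)
lpj = 2799998      # loops per jiffy, from the calibration line above

# Integer arithmetic matching the kernel's "%lu.%02lu BogoMIPS" printout.
whole = lpj // (500000 // HZ)
frac = (lpj // (5000 // HZ)) % 100
print(f"{whole}.{frac:02d} BogoMIPS")  # 5599.99 BogoMIPS
```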
Jul 2 10:28:45.925074 kernel: Freeing SMP alternatives memory: 32K
Jul 2 10:28:45.925086 kernel: pid_max: default: 32768 minimum: 301
Jul 2 10:28:45.925097 kernel: LSM: Security Framework initializing
Jul 2 10:28:45.925108 kernel: SELinux: Initializing.
Jul 2 10:28:45.925119 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 10:28:45.925134 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 10:28:45.925145 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jul 2 10:28:45.925156 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jul 2 10:28:45.925167 kernel: signal: max sigframe size: 1776
Jul 2 10:28:45.925178 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 10:28:45.925190 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 10:28:45.925213 kernel: smp: Bringing up secondary CPUs ...
Jul 2 10:28:45.925225 kernel: x86: Booting SMP configuration:
Jul 2 10:28:45.925236 kernel: .... node #0, CPUs: #1
Jul 2 10:28:45.925252 kernel: kvm-clock: cpu 1, msr 65192041, secondary cpu clock
Jul 2 10:28:45.925263 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jul 2 10:28:45.925274 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Jul 2 10:28:45.925286 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 10:28:45.925297 kernel: smpboot: Max logical packages: 16
Jul 2 10:28:45.925308 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jul 2 10:28:45.925319 kernel: devtmpfs: initialized
Jul 2 10:28:45.925331 kernel: x86/mm: Memory block size: 128MB
Jul 2 10:28:45.925342 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 10:28:45.925353 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 10:28:45.925368 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 10:28:45.925380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 10:28:45.925391 kernel: audit: initializing netlink subsys (disabled)
Jul 2 10:28:45.925402 kernel: audit: type=2000 audit(1719916124.475:1): state=initialized audit_enabled=0 res=1
Jul 2 10:28:45.925413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 10:28:45.925424 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 10:28:45.925435 kernel: cpuidle: using governor menu
Jul 2 10:28:45.925446 kernel: ACPI: bus type PCI registered
Jul 2 10:28:45.925457 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 10:28:45.925472 kernel: dca service started, version 1.12.1
Jul 2 10:28:45.925484 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 2 10:28:45.925495 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Jul 2 10:28:45.925517 kernel: PCI: Using configuration type 1 for base access
Jul 2 10:28:45.925528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 10:28:45.925539 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 10:28:45.925551 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 10:28:45.925562 kernel: ACPI: Added _OSI(Module Device)
Jul 2 10:28:45.925577 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 10:28:45.925589 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 10:28:45.925600 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 10:28:45.925611 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 10:28:45.925622 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 10:28:45.925633 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 10:28:45.925645 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 10:28:45.925656 kernel: ACPI: Interpreter enabled
Jul 2 10:28:45.925667 kernel: ACPI: PM: (supports S0 S5)
Jul 2 10:28:45.925678 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 10:28:45.925693 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 10:28:45.925705 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 2 10:28:45.925716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 10:28:45.925996 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 10:28:45.926152 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 10:28:45.926324 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 10:28:45.926342 kernel: PCI host bridge to bus 0000:00
Jul 2 10:28:45.926528 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 10:28:45.926667 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 10:28:45.926803 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 10:28:45.926936 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 2 10:28:45.927074 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 2 10:28:45.927223 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jul 2 10:28:45.927358 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 10:28:45.927555 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 2 10:28:45.927740 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jul 2 10:28:45.927902 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jul 2 10:28:45.928062 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jul 2 10:28:45.928234 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jul 2 10:28:45.928394 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 10:28:45.928595 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.928747 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jul 2 10:28:45.928926 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.929087 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jul 2 10:28:45.929284 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.929446 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jul 2 10:28:45.929730 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.929901 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jul 2 10:28:45.930084 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.930256 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jul 2 10:28:45.930421 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.930600 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jul 2 10:28:45.930774 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.930934 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jul 2 10:28:45.931113 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 2 10:28:45.931286 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jul 2 10:28:45.931459 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 10:28:45.931633 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 2 10:28:45.931783 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jul 2 10:28:45.931952 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 2 10:28:45.932111 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jul 2 10:28:45.932458 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 10:28:45.932632 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 10:28:45.932793 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jul 2 10:28:45.932951 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jul 2 10:28:45.933128 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 2 10:28:45.933393 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 2 10:28:45.933580 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 2 10:28:45.933728 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jul 2 10:28:45.933873 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jul 2 10:28:45.934043 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 2 10:28:45.934191 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 2 10:28:45.934386 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jul 2 10:28:45.934561 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jul 2 10:28:45.934711 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 10:28:45.934858 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 10:28:45.935003 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:28:45.935171 kernel: pci_bus 0000:02: extended config space not accessible
Jul 2 10:28:45.935373 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jul 2 10:28:45.935547 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jul 2 10:28:45.935702 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 10:28:45.935853 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 10:28:45.936024 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 2 10:28:45.936179 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jul 2 10:28:45.936341 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 10:28:45.936495 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 10:28:45.936655 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 10:28:45.936828 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 2 10:28:45.936985 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 2 10:28:45.937132 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 10:28:45.937297 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 10:28:45.937444 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 10:28:45.937607 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 10:28:45.937763 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 10:28:45.937910 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 10:28:45.938058 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 10:28:45.944478 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 10:28:45.944666 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 10:28:45.944821 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 10:28:45.944970 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 10:28:45.945118 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 10:28:45.945302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 10:28:45.945452 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 10:28:45.945611 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 10:28:45.945762 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 10:28:45.945908 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 10:28:45.946053 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 10:28:45.946070 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 10:28:45.946083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 10:28:45.946101 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 10:28:45.946113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 10:28:45.946125 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 2 10:28:45.946137 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 2 10:28:45.946148 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 2 10:28:45.946160 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 2 10:28:45.946171 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 2 10:28:45.946182 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 2 10:28:45.946218 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 2 10:28:45.946238 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 2 10:28:45.946250 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 2 10:28:45.946261 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 2 10:28:45.946273 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 2 10:28:45.946284 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 2 10:28:45.946296 kernel: iommu: Default domain type: Translated
Jul 2 10:28:45.946307 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 10:28:45.946454 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 2 10:28:45.946612 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 10:28:45.946765 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 2 10:28:45.946783 kernel: vgaarb: loaded
Jul 2 10:28:45.946794 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 10:28:45.946806 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 10:28:45.946817 kernel: PTP clock support registered
Jul 2 10:28:45.946829 kernel: PCI: Using ACPI for IRQ routing
Jul 2 10:28:45.946840 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 10:28:45.946851 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 10:28:45.946868 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jul 2 10:28:45.946880 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 10:28:45.946891 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 10:28:45.946903 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 10:28:45.946915 kernel: pnp: PnP ACPI init
Jul 2 10:28:45.947097 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 2 10:28:45.947117 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 10:28:45.947128 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 10:28:45.947145 kernel: NET: Registered PF_INET protocol family
Jul 2 10:28:45.947157 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 10:28:45.947169 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 10:28:45.947181 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 10:28:45.947192 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 10:28:45.947218 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 10:28:45.947229 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 10:28:45.947241 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 10:28:45.947253 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 10:28:45.947269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 10:28:45.947280 kernel: NET: Registered PF_XDP protocol family
Jul 2 10:28:45.947430 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jul 2 10:28:45.947592 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 2 10:28:45.947743 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 2 10:28:45.947892 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 2 10:28:45.948039 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 2 10:28:45.948192 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 2 10:28:45.948358 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 2 10:28:45.948518 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 2 10:28:45.948669 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 2 10:28:45.948816 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 2 10:28:45.948984 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 2 10:28:45.949137 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 2 10:28:45.954863 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 2 10:28:45.955034 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 2 10:28:45.955190 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 2 10:28:45.955360 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 2 10:28:45.955535 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 2 10:28:45.955692 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 2 10:28:45.955843 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 2 10:28:45.955992 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 2 10:28:45.956151 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 2 10:28:45.956320 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:28:45.956490 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 2 10:28:45.956656 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 2 10:28:45.956807 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 2 10:28:45.956956 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 2 10:28:45.957107 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 2 10:28:45.957279 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 2 10:28:45.957433 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 2 10:28:45.957594 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 2 10:28:45.957747 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 2 10:28:45.957896 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 2 10:28:45.958046 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 2 10:28:45.958211 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 2 10:28:45.958366 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 2 10:28:45.958534 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 2 10:28:45.958681 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 2 10:28:45.958829 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 2 10:28:45.958979 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 2 10:28:45.959129 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 2 10:28:45.959295 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 2 10:28:45.959444 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 2 10:28:45.959714 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 2 10:28:45.959875 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 2 10:28:45.960037 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 2 10:28:45.960184 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 2 10:28:45.960362 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 2 10:28:45.960541 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 2 10:28:45.960791 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 2 10:28:45.960956 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 2 10:28:45.961112 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 10:28:45.961369 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 10:28:45.961517 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 10:28:45.961732 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 2 10:28:45.961870 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 2 10:28:45.962003 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jul 2 10:28:45.962164 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 2 10:28:45.962329 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jul 2 10:28:45.962469 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 2 10:28:45.962707 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 2 10:28:45.962866 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jul 2 10:28:45.963008 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 2 10:28:45.963147
kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 2 10:28:45.963348 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 2 10:28:45.963490 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 2 10:28:45.963716 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 2 10:28:45.963875 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 2 10:28:45.964016 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 2 10:28:45.964156 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 2 10:28:45.964333 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 2 10:28:45.964482 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 2 10:28:45.964639 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 2 10:28:45.964808 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 2 10:28:45.964951 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 2 10:28:45.965090 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 2 10:28:45.965262 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 2 10:28:45.965414 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 2 10:28:45.965568 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 2 10:28:45.965720 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 2 10:28:45.965861 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 2 10:28:45.966001 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 2 10:28:45.966020 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 2 10:28:45.966033 kernel: PCI: CLS 0 bytes, default 64 Jul 2 10:28:45.966046 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 10:28:45.966064 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 2 10:28:45.966076 kernel: RAPL PMU: API 
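Two of the address ranges logged above can be sanity-checked arithmetically: the software IO TLB span 0x79800000-0x7d800000 is exactly the "(64MB)" the kernel reports, and the 64-bit host-bridge window 0x20c0000000-0x28bfffffff (bus 00, resource 9) works out to 32 GiB. The GiB figure is my own arithmetic, not stated in the log:

```python
# software IO TLB: mapped [mem 0x79800000-0x7d800000] (64MB)
swiotlb_span = 0x7D800000 - 0x79800000
assert swiotlb_span == 64 * 1024 * 1024

# pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
# PCI resource ranges are printed inclusive, hence the +1.
pci64_window = 0x28BFFFFFFF - 0x20C0000000 + 1
assert pci64_window == 32 * 1024**3  # 32 GiB of 64-bit MMIO space
```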
unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 10:28:45.966089 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jul 2 10:28:45.966100 kernel: Initialise system trusted keyrings Jul 2 10:28:45.966112 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 10:28:45.966124 kernel: Key type asymmetric registered Jul 2 10:28:45.966136 kernel: Asymmetric key parser 'x509' registered Jul 2 10:28:45.966148 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 10:28:45.966160 kernel: io scheduler mq-deadline registered Jul 2 10:28:45.966176 kernel: io scheduler kyber registered Jul 2 10:28:45.966188 kernel: io scheduler bfq registered Jul 2 10:28:45.966356 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 2 10:28:45.966516 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 2 10:28:45.966669 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.966820 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 2 10:28:45.966968 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 2 10:28:45.967124 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.967322 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 2 10:28:45.967469 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 2 10:28:45.967629 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.967777 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 2 10:28:45.967922 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 2 10:28:45.968074 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.968269 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 2 10:28:45.968418 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 2 10:28:45.968576 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.968724 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 2 10:28:45.968869 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 2 10:28:45.969022 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.969167 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 2 10:28:45.969336 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 2 10:28:45.969489 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.969649 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 2 10:28:45.969794 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 2 10:28:45.969945 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 2 10:28:45.969964 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 10:28:45.969977 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 2 10:28:45.969990 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 2 10:28:45.970002 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 10:28:45.970014 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 10:28:45.970026 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 10:28:45.970038 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 10:28:45.970056 kernel: serio: 
i8042 AUX port at 0x60,0x64 irq 12 Jul 2 10:28:45.970223 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 2 10:28:45.970367 kernel: rtc_cmos 00:03: registered as rtc0 Jul 2 10:28:45.970514 kernel: rtc_cmos 00:03: setting system clock to 2024-07-02T10:28:45 UTC (1719916125) Jul 2 10:28:45.970653 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 2 10:28:45.970671 kernel: intel_pstate: CPU model not supported Jul 2 10:28:45.970683 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 10:28:45.970701 kernel: NET: Registered PF_INET6 protocol family Jul 2 10:28:45.970714 kernel: Segment Routing with IPv6 Jul 2 10:28:45.970726 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 10:28:45.970738 kernel: NET: Registered PF_PACKET protocol family Jul 2 10:28:45.970750 kernel: Key type dns_resolver registered Jul 2 10:28:45.970761 kernel: IPI shorthand broadcast: enabled Jul 2 10:28:45.970773 kernel: sched_clock: Marking stable (1000283514, 208357786)->(1502548054, -293906754) Jul 2 10:28:45.970785 kernel: registered taskstats version 1 Jul 2 10:28:45.970797 kernel: Loading compiled-in X.509 certificates Jul 2 10:28:45.970809 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 10:28:45.970825 kernel: Key type .fscrypt registered Jul 2 10:28:45.970836 kernel: Key type fscrypt-provisioning registered Jul 2 10:28:45.970848 kernel: ima: No TPM chip found, activating TPM-bypass! 
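The rtc_cmos line above pairs the hardware clock reading 2024-07-02T10:28:45 UTC with the epoch value 1719916125. The two agree, as a quick check with the standard library shows (note the journal's own audit timestamps, e.g. `audit(1719916126.017:2)` further down, land about a second later, which is consistent):

```python
from datetime import datetime, timezone

# rtc_cmos 00:03: setting system clock to 2024-07-02T10:28:45 UTC (1719916125)
rtc = datetime(2024, 7, 2, 10, 28, 45, tzinfo=timezone.utc)
assert int(rtc.timestamp()) == 1719916125
```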
Jul 2 10:28:45.970860 kernel: ima: Allocated hash algorithm: sha1 Jul 2 10:28:45.970872 kernel: ima: No architecture policies found Jul 2 10:28:45.970885 kernel: clk: Disabling unused clocks Jul 2 10:28:45.970897 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 10:28:45.970908 kernel: Write protecting the kernel read-only data: 28672k Jul 2 10:28:45.970924 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 10:28:45.970936 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 10:28:45.970948 kernel: Run /init as init process Jul 2 10:28:45.970960 kernel: with arguments: Jul 2 10:28:45.970972 kernel: /init Jul 2 10:28:45.970983 kernel: with environment: Jul 2 10:28:45.970996 kernel: HOME=/ Jul 2 10:28:45.971007 kernel: TERM=linux Jul 2 10:28:45.971019 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 10:28:45.971041 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 10:28:45.971063 systemd[1]: Detected virtualization kvm. Jul 2 10:28:45.971076 systemd[1]: Detected architecture x86-64. Jul 2 10:28:45.971088 systemd[1]: Running in initrd. Jul 2 10:28:45.971101 systemd[1]: No hostname configured, using default hostname. Jul 2 10:28:45.971113 systemd[1]: Hostname set to . Jul 2 10:28:45.971126 systemd[1]: Initializing machine ID from VM UUID. Jul 2 10:28:45.971139 systemd[1]: Queued start job for default target initrd.target. Jul 2 10:28:45.971155 systemd[1]: Started systemd-ask-password-console.path. Jul 2 10:28:45.971168 systemd[1]: Reached target cryptsetup.target. Jul 2 10:28:45.971180 systemd[1]: Reached target paths.target. Jul 2 10:28:45.971208 systemd[1]: Reached target slices.target. 
Jul 2 10:28:45.971224 systemd[1]: Reached target swap.target. Jul 2 10:28:45.971236 systemd[1]: Reached target timers.target. Jul 2 10:28:45.971250 systemd[1]: Listening on iscsid.socket. Jul 2 10:28:45.971267 systemd[1]: Listening on iscsiuio.socket. Jul 2 10:28:45.971280 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 10:28:45.971297 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 10:28:45.971309 systemd[1]: Listening on systemd-journald.socket. Jul 2 10:28:45.971323 systemd[1]: Listening on systemd-networkd.socket. Jul 2 10:28:45.971335 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 10:28:45.971348 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 10:28:45.971364 systemd[1]: Reached target sockets.target. Jul 2 10:28:45.971377 systemd[1]: Starting kmod-static-nodes.service... Jul 2 10:28:45.971393 systemd[1]: Finished network-cleanup.service. Jul 2 10:28:45.971406 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 10:28:45.971419 systemd[1]: Starting systemd-journald.service... Jul 2 10:28:45.971431 systemd[1]: Starting systemd-modules-load.service... Jul 2 10:28:45.971444 systemd[1]: Starting systemd-resolved.service... Jul 2 10:28:45.971456 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 10:28:45.971469 systemd[1]: Finished kmod-static-nodes.service. Jul 2 10:28:45.971481 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 10:28:45.971494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 10:28:45.971522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 10:28:45.971546 systemd-journald[202]: Journal started Jul 2 10:28:45.971619 systemd-journald[202]: Runtime Journal (/run/log/journal/ff11528053394f8bb5e9d191e7da4cd2) is 4.7M, max 38.1M, 33.3M free. Jul 2 10:28:45.942615 systemd-modules-load[203]: Inserted module 'overlay' Jul 2 10:28:46.020294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jul 2 10:28:46.020337 kernel: Bridge firewalling registered Jul 2 10:28:46.020354 systemd[1]: Started systemd-resolved.service. Jul 2 10:28:45.949328 systemd-resolved[204]: Positive Trust Anchors: Jul 2 10:28:45.949346 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:28:46.051832 kernel: SCSI subsystem initialized Jul 2 10:28:46.051867 kernel: audit: type=1130 audit(1719916126.017:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.051900 systemd[1]: Started systemd-journald.service. Jul 2 10:28:46.051921 kernel: audit: type=1130 audit(1719916126.028:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.051938 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 10:28:46.051955 kernel: device-mapper: uevent: version 1.0.3 Jul 2 10:28:46.051970 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 10:28:46.051987 kernel: audit: type=1130 audit(1719916126.030:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 10:28:46.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:45.949389 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:28:46.078386 kernel: audit: type=1130 audit(1719916126.052:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.078552 kernel: audit: type=1130 audit(1719916126.072:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:45.953156 systemd-resolved[204]: Defaulting to hostname 'linux'. Jul 2 10:28:46.002781 systemd-modules-load[203]: Inserted module 'br_netfilter' Jul 2 10:28:46.035936 systemd[1]: Finished systemd-vconsole-setup.service. 
Jul 2 10:28:46.055142 systemd-modules-load[203]: Inserted module 'dm_multipath' Jul 2 10:28:46.055399 systemd[1]: Reached target nss-lookup.target. Jul 2 10:28:46.071049 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 10:28:46.072059 systemd[1]: Finished systemd-modules-load.service. Jul 2 10:28:46.089000 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:28:46.094660 systemd[1]: Finished systemd-sysctl.service. Jul 2 10:28:46.103504 kernel: audit: type=1130 audit(1719916126.095:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.136179 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 10:28:46.144443 kernel: audit: type=1130 audit(1719916126.136:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.138096 systemd[1]: Starting dracut-cmdline.service... 
Jul 2 10:28:46.151625 dracut-cmdline[224]: dracut-dracut-053 Jul 2 10:28:46.154694 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 10:28:46.266222 kernel: Loading iSCSI transport class v2.0-870. Jul 2 10:28:46.288222 kernel: iscsi: registered transport (tcp) Jul 2 10:28:46.319097 kernel: iscsi: registered transport (qla4xxx) Jul 2 10:28:46.319170 kernel: QLogic iSCSI HBA Driver Jul 2 10:28:46.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.367469 systemd[1]: Finished dracut-cmdline.service. Jul 2 10:28:46.377651 kernel: audit: type=1130 audit(1719916126.367:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.369287 systemd[1]: Starting dracut-pre-udev.service... Jul 2 10:28:46.427254 kernel: raid6: sse2x4 gen() 14789 MB/s Jul 2 10:28:46.445270 kernel: raid6: sse2x4 xor() 8467 MB/s Jul 2 10:28:46.463581 kernel: raid6: sse2x2 gen() 10152 MB/s Jul 2 10:28:46.481250 kernel: raid6: sse2x2 xor() 8391 MB/s Jul 2 10:28:46.499531 kernel: raid6: sse2x1 gen() 10240 MB/s Jul 2 10:28:46.518531 kernel: raid6: sse2x1 xor() 7652 MB/s Jul 2 10:28:46.518602 kernel: raid6: using algorithm sse2x4 gen() 14789 MB/s Jul 2 10:28:46.518620 kernel: raid6: .... 
xor() 8467 MB/s, rmw enabled Jul 2 10:28:46.519128 kernel: raid6: using ssse3x2 recovery algorithm Jul 2 10:28:46.536765 kernel: xor: automatically using best checksumming function avx Jul 2 10:28:46.644662 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 10:28:46.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.656990 systemd[1]: Finished dracut-pre-udev.service. Jul 2 10:28:46.663579 kernel: audit: type=1130 audit(1719916126.657:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.662000 audit: BPF prog-id=7 op=LOAD Jul 2 10:28:46.663388 systemd[1]: Starting systemd-udevd.service... Jul 2 10:28:46.662000 audit: BPF prog-id=8 op=LOAD Jul 2 10:28:46.681751 systemd-udevd[401]: Using default interface naming scheme 'v252'. Jul 2 10:28:46.691778 systemd[1]: Started systemd-udevd.service. Jul 2 10:28:46.693472 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 10:28:46.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.713275 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jul 2 10:28:46.759980 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 10:28:46.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.765662 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 10:28:46.873741 systemd[1]: Finished systemd-udev-trigger.service. 
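The raid6 lines above show the kernel benchmarking each SSE2 gen()/xor() variant and keeping the fastest gen() throughput, sse2x4 at 14789 MB/s on this guest. The selection amounts to a max over the measured rates; a sketch using the numbers from the log (the dict is my reconstruction, not a kernel API):

```python
# gen() throughputs measured during boot, in MB/s
gen_results = {"sse2x4": 14789, "sse2x2": 10152, "sse2x1": 10240}

best = max(gen_results, key=gen_results.get)
assert best == "sse2x4"
assert gen_results[best] == 14789  # "raid6: using algorithm sse2x4 gen() 14789 MB/s"
```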
Jul 2 10:28:46.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:46.975226 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 2 10:28:46.992228 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 10:28:46.992337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 10:28:46.993928 kernel: GPT:17805311 != 125829119 Jul 2 10:28:46.993960 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 10:28:46.995319 kernel: GPT:17805311 != 125829119 Jul 2 10:28:46.996253 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 10:28:46.997502 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:28:47.028088 kernel: ACPI: bus type USB registered Jul 2 10:28:47.028181 kernel: usbcore: registered new interface driver usbfs Jul 2 10:28:47.028226 kernel: usbcore: registered new interface driver hub Jul 2 10:28:47.030240 kernel: usbcore: registered new device driver usb Jul 2 10:28:47.063230 kernel: AVX version of gcm_enc/dec engaged. Jul 2 10:28:47.072092 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 10:28:47.072464 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 2 10:28:47.072652 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 2 10:28:47.072822 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 2 10:28:47.073026 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 2 10:28:47.073227 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 2 10:28:47.073395 kernel: hub 1-0:1.0: USB hub found Jul 2 10:28:47.073620 kernel: hub 1-0:1.0: 4 ports detected Jul 2 10:28:47.073804 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
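The GPT warnings above are the usual signature of a cloud image that was built for a smaller disk and then attached to a larger volume: the backup header still sits at LBA 17805311 (the end of the original image) while GPT expects it in the last LBA of the expanded 125829120-sector disk. The expected position and the advertised capacity both check out:

```python
SECTOR = 512
total_sectors = 125829120  # virtio_blk: [vda] 125829120 512-byte logical blocks

# GPT places the alternate (backup) header in the last LBA of the disk.
expected_alt_lba = total_sectors - 1
assert expected_alt_lba == 125829119  # the expected side of "GPT:17805311 != 125829119"

size_bytes = total_sectors * SECTOR
assert size_bytes == 64_424_509_440  # 64.4 GB decimal, as logged
assert size_bytes // 2**30 == 60     # 60.0 GiB binary, as logged
```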
Jul 2 10:28:47.074005 kernel: hub 2-0:1.0: USB hub found Jul 2 10:28:47.074211 kernel: hub 2-0:1.0: 4 ports detected Jul 2 10:28:47.112232 kernel: AES CTR mode by8 optimization enabled Jul 2 10:28:47.116220 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Jul 2 10:28:47.117217 kernel: libata version 3.00 loaded. Jul 2 10:28:47.122455 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 10:28:47.218795 kernel: ahci 0000:00:1f.2: version 3.0 Jul 2 10:28:47.219091 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 2 10:28:47.219114 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 2 10:28:47.219322 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 2 10:28:47.219500 kernel: scsi host0: ahci Jul 2 10:28:47.219723 kernel: scsi host1: ahci Jul 2 10:28:47.219906 kernel: scsi host2: ahci Jul 2 10:28:47.220086 kernel: scsi host3: ahci Jul 2 10:28:47.220281 kernel: scsi host4: ahci Jul 2 10:28:47.220483 kernel: scsi host5: ahci Jul 2 10:28:47.220678 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jul 2 10:28:47.220697 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jul 2 10:28:47.220714 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jul 2 10:28:47.220730 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jul 2 10:28:47.220746 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jul 2 10:28:47.220762 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jul 2 10:28:47.216772 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 10:28:47.227031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 10:28:47.236628 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
Jul 2 10:28:47.243395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 10:28:47.248327 systemd[1]: Starting disk-uuid.service... Jul 2 10:28:47.279459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:28:47.280443 disk-uuid[528]: Primary Header is updated. Jul 2 10:28:47.280443 disk-uuid[528]: Secondary Entries is updated. Jul 2 10:28:47.280443 disk-uuid[528]: Secondary Header is updated. Jul 2 10:28:47.314479 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 2 10:28:47.464556 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.464640 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.464701 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.470612 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.470655 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.470673 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 10:28:47.512223 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 10:28:47.519568 kernel: usbcore: registered new interface driver usbhid Jul 2 10:28:47.519611 kernel: usbhid: USB HID core driver Jul 2 10:28:47.532279 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jul 2 10:28:47.539525 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 2 10:28:48.326785 disk-uuid[531]: The operation has completed successfully. Jul 2 10:28:48.327823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 10:28:48.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:48.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.394937 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 10:28:48.395074 systemd[1]: Finished disk-uuid.service. Jul 2 10:28:48.396939 systemd[1]: Starting verity-setup.service... Jul 2 10:28:48.423177 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 2 10:28:48.495357 systemd[1]: Found device dev-mapper-usr.device. Jul 2 10:28:48.497059 systemd[1]: Mounting sysusr-usr.mount... Jul 2 10:28:48.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.499081 systemd[1]: Finished verity-setup.service. Jul 2 10:28:48.612240 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 10:28:48.613172 systemd[1]: Mounted sysusr-usr.mount. Jul 2 10:28:48.613983 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 10:28:48.614973 systemd[1]: Starting ignition-setup.service... Jul 2 10:28:48.616589 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 10:28:48.644739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 10:28:48.644802 kernel: BTRFS info (device vda6): using free space tree Jul 2 10:28:48.644829 kernel: BTRFS info (device vda6): has skinny extents Jul 2 10:28:48.662403 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 10:28:48.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.675773 systemd[1]: Finished ignition-setup.service. 
Jul 2 10:28:48.677553 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 10:28:48.776025 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 10:28:48.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.777000 audit: BPF prog-id=9 op=LOAD Jul 2 10:28:48.779522 systemd[1]: Starting systemd-networkd.service... Jul 2 10:28:48.818535 systemd-networkd[712]: lo: Link UP Jul 2 10:28:48.819263 systemd-networkd[712]: lo: Gained carrier Jul 2 10:28:48.820608 systemd-networkd[712]: Enumeration completed Jul 2 10:28:48.821287 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 10:28:48.822921 systemd[1]: Started systemd-networkd.service. Jul 2 10:28:48.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.824238 systemd[1]: Reached target network.target. Jul 2 10:28:48.826067 systemd[1]: Starting iscsiuio.service... Jul 2 10:28:48.826830 systemd-networkd[712]: eth0: Link UP Jul 2 10:28:48.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:48.826837 systemd-networkd[712]: eth0: Gained carrier Jul 2 10:28:48.851231 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 10:28:48.851231 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 10:28:48.851231 iscsid[717]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 10:28:48.851231 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 10:28:48.851231 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 10:28:48.851231 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 10:28:48.851231 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 10:28:48.841450 systemd[1]: Started iscsiuio.service. Jul 2 10:28:48.843222 systemd[1]: Starting iscsid.service... Jul 2 10:28:48.849895 systemd[1]: Started iscsid.service. Jul 2 10:28:48.852126 systemd[1]: Starting dracut-initqueue.service... Jul 2 10:28:48.856434 systemd-networkd[712]: eth0: DHCPv4 address 10.230.55.230/30, gateway 10.230.55.229 acquired from 10.230.55.229 Jul 2 10:28:48.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.881417 systemd[1]: Finished dracut-initqueue.service. Jul 2 10:28:48.882243 systemd[1]: Reached target remote-fs-pre.target. Jul 2 10:28:48.882828 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 10:28:48.883456 systemd[1]: Reached target remote-fs.target. 
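The DHCPv4 lease logged by systemd-networkd (10.230.55.230/30 with gateway 10.230.55.229, acquired from 10.230.55.229) is internally consistent: a /30 carries exactly two usable hosts, and the gateway is the other one. Verified with the standard `ipaddress` module:

```python
import ipaddress

# eth0: DHCPv4 address 10.230.55.230/30, gateway 10.230.55.229
iface = ipaddress.ip_interface("10.230.55.230/30")
hosts = list(iface.network.hosts())

assert iface.network == ipaddress.ip_network("10.230.55.228/30")
# The two usable addresses: the gateway/DHCP server and this host.
assert [str(h) for h in hosts] == ["10.230.55.229", "10.230.55.230"]
```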
Jul 2 10:28:48.885044 systemd[1]: Starting dracut-pre-mount.service... Jul 2 10:28:48.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.901353 systemd[1]: Finished dracut-pre-mount.service. Jul 2 10:28:48.905552 ignition[640]: Ignition 2.14.0 Jul 2 10:28:48.905576 ignition[640]: Stage: fetch-offline Jul 2 10:28:48.905702 ignition[640]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:48.905744 ignition[640]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:48.907393 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:48.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.907565 ignition[640]: parsed url from cmdline: "" Jul 2 10:28:48.909407 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 10:28:48.907572 ignition[640]: no config URL provided Jul 2 10:28:48.912461 systemd[1]: Starting ignition-fetch.service... 
Jul 2 10:28:48.907581 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 10:28:48.907596 ignition[640]: no config at "/usr/lib/ignition/user.ign" Jul 2 10:28:48.907605 ignition[640]: failed to fetch config: resource requires networking Jul 2 10:28:48.908068 ignition[640]: Ignition finished successfully Jul 2 10:28:48.922148 ignition[731]: Ignition 2.14.0 Jul 2 10:28:48.922164 ignition[731]: Stage: fetch Jul 2 10:28:48.922337 ignition[731]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:48.922380 ignition[731]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:48.923703 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:48.923833 ignition[731]: parsed url from cmdline: "" Jul 2 10:28:48.923840 ignition[731]: no config URL provided Jul 2 10:28:48.923849 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 10:28:48.923863 ignition[731]: no config at "/usr/lib/ignition/user.ign" Jul 2 10:28:48.928876 ignition[731]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 2 10:28:48.928913 ignition[731]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jul 2 10:28:48.930251 ignition[731]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 2 10:28:48.954648 ignition[731]: GET result: OK Jul 2 10:28:48.954796 ignition[731]: parsing config with SHA512: c58c8416d840fd28123cc1736223958dbe3f560f4b330828a8674d5300d9d8d7105c870a2c0e6b4d4d6647a12b13e7da0220331fe3677502881c5559a3f2c31b Jul 2 10:28:48.963673 unknown[731]: fetched base config from "system" Jul 2 10:28:48.964450 unknown[731]: fetched base config from "system" Jul 2 10:28:48.965157 unknown[731]: fetched user config from "openstack" Jul 2 10:28:48.966371 ignition[731]: fetch: fetch complete Jul 2 10:28:48.967046 ignition[731]: fetch: fetch passed Jul 2 10:28:48.967752 ignition[731]: Ignition finished successfully Jul 2 10:28:48.969937 systemd[1]: Finished ignition-fetch.service. Jul 2 10:28:48.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.971755 systemd[1]: Starting ignition-kargs.service... Jul 2 10:28:48.983170 ignition[737]: Ignition 2.14.0 Jul 2 10:28:48.983182 ignition[737]: Stage: kargs Jul 2 10:28:48.983373 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:48.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:48.983406 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:48.987752 systemd[1]: Finished ignition-kargs.service. Jul 2 10:28:48.984625 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:48.989961 systemd[1]: Starting ignition-disks.service... 
Jul 2 10:28:48.986539 ignition[737]: kargs: kargs passed Jul 2 10:28:48.986600 ignition[737]: Ignition finished successfully Jul 2 10:28:49.001225 ignition[743]: Ignition 2.14.0 Jul 2 10:28:49.001244 ignition[743]: Stage: disks Jul 2 10:28:49.001412 ignition[743]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:49.001457 ignition[743]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:49.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.005246 systemd[1]: Finished ignition-disks.service. Jul 2 10:28:49.002692 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:49.005997 systemd[1]: Reached target initrd-root-device.target. Jul 2 10:28:49.004167 ignition[743]: disks: disks passed Jul 2 10:28:49.006685 systemd[1]: Reached target local-fs-pre.target. Jul 2 10:28:49.004290 ignition[743]: Ignition finished successfully Jul 2 10:28:49.007932 systemd[1]: Reached target local-fs.target. Jul 2 10:28:49.009823 systemd[1]: Reached target sysinit.target. Jul 2 10:28:49.010413 systemd[1]: Reached target basic.target. Jul 2 10:28:49.012225 systemd[1]: Starting systemd-fsck-root.service... Jul 2 10:28:49.038743 systemd-fsck[751]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 10:28:49.043863 systemd[1]: Finished systemd-fsck-root.service. Jul 2 10:28:49.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.046276 systemd[1]: Mounting sysroot.mount... Jul 2 10:28:49.067261 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Jul 2 10:28:49.068483 systemd[1]: Mounted sysroot.mount. Jul 2 10:28:49.069287 systemd[1]: Reached target initrd-root-fs.target. Jul 2 10:28:49.072016 systemd[1]: Mounting sysroot-usr.mount... Jul 2 10:28:49.073897 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 10:28:49.075523 systemd[1]: Starting flatcar-openstack-hostname.service... Jul 2 10:28:49.076457 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 10:28:49.076514 systemd[1]: Reached target ignition-diskful.target. Jul 2 10:28:49.083183 systemd[1]: Mounted sysroot-usr.mount. Jul 2 10:28:49.086110 systemd[1]: Starting initrd-setup-root.service... Jul 2 10:28:49.098341 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 10:28:49.113011 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Jul 2 10:28:49.121520 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 10:28:49.128634 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 10:28:49.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.210570 systemd[1]: Finished initrd-setup-root.service. Jul 2 10:28:49.218229 systemd[1]: Starting ignition-mount.service... Jul 2 10:28:49.220633 systemd[1]: Starting sysroot-boot.service... Jul 2 10:28:49.241560 bash[805]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 2 10:28:49.282389 ignition[806]: INFO : Ignition 2.14.0 Jul 2 10:28:49.282389 ignition[806]: INFO : Stage: mount Jul 2 10:28:49.283945 ignition[806]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:49.283945 ignition[806]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:49.283945 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:49.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.288868 ignition[806]: INFO : mount: mount passed Jul 2 10:28:49.288868 ignition[806]: INFO : Ignition finished successfully Jul 2 10:28:49.286543 systemd[1]: Finished ignition-mount.service. Jul 2 10:28:49.303974 coreos-metadata[757]: Jul 02 10:28:49.303 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 10:28:49.304361 systemd[1]: Finished sysroot-boot.service. Jul 2 10:28:49.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.360347 coreos-metadata[757]: Jul 02 10:28:49.360 INFO Fetch successful Jul 2 10:28:49.361964 coreos-metadata[757]: Jul 02 10:28:49.361 INFO wrote hostname srv-ehxin.gb1.brightbox.com to /sysroot/etc/hostname Jul 2 10:28:49.365448 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 2 10:28:49.365593 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 10:28:49.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:49.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:49.529534 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 10:28:49.543247 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (814) Jul 2 10:28:49.550238 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 10:28:49.550290 kernel: BTRFS info (device vda6): using free space tree Jul 2 10:28:49.550321 kernel: BTRFS info (device vda6): has skinny extents Jul 2 10:28:49.556730 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 10:28:49.559758 systemd[1]: Starting ignition-files.service... Jul 2 10:28:49.582498 ignition[834]: INFO : Ignition 2.14.0 Jul 2 10:28:49.583606 ignition[834]: INFO : Stage: files Jul 2 10:28:49.584453 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:49.585376 ignition[834]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:49.588417 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:49.591631 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Jul 2 10:28:49.593369 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 10:28:49.594571 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 10:28:49.599313 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 10:28:49.600908 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 10:28:49.603347 unknown[834]: wrote ssh authorized keys file for user: core Jul 2 
10:28:49.604425 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 10:28:49.606434 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 10:28:49.609115 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 10:28:50.392596 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 10:28:50.596102 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 10:28:50.611301 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 10:28:50.611301 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 10:28:50.820396 systemd-networkd[712]: eth0: Gained IPv6LL Jul 2 10:28:51.225094 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 10:28:51.793923 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 10:28:51.795702 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 10:28:51.797035 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 10:28:51.798162 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 10:28:51.799500 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 
10:28:51.800954 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 10:28:51.800954 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 10:28:51.800954 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 10:28:51.800954 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 10:28:51.805072 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 10:28:52.297433 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 10:28:52.328064 systemd-networkd[712]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8df9:24:19ff:fee6:37e6/128 (valid for 59min 59s, 
preferred for 59min 59s) which conflicts with 2a02:1348:179:8df9:24:19ff:fee6:37e6/64 assigned by NDisc. Jul 2 10:28:52.328077 systemd-networkd[712]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 2 10:28:55.147450 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 2 10:28:55.150850 ignition[834]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 10:28:55.170003 ignition[834]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 10:28:55.170003 ignition[834]: INFO : files: createResultFile: createFiles: op(11): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jul 2 10:28:55.170003 ignition[834]: INFO : files: files passed Jul 2 10:28:55.170003 ignition[834]: INFO : Ignition finished successfully Jul 2 10:28:55.169009 systemd[1]: Finished ignition-files.service. Jul 2 10:28:55.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.177742 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 2 10:28:55.177805 kernel: audit: type=1130 audit(1719916135.173:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.182709 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 10:28:55.183467 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 10:28:55.184420 systemd[1]: Starting ignition-quench.service... Jul 2 10:28:55.189900 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 10:28:55.202724 kernel: audit: type=1130 audit(1719916135.192:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.202759 kernel: audit: type=1131 audit(1719916135.192:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:55.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.190040 systemd[1]: Finished ignition-quench.service. Jul 2 10:28:55.204653 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 10:28:55.205555 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 10:28:55.212102 kernel: audit: type=1130 audit(1719916135.206:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.206869 systemd[1]: Reached target ignition-complete.target. Jul 2 10:28:55.213980 systemd[1]: Starting initrd-parse-etc.service... Jul 2 10:28:55.237081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 10:28:55.238160 systemd[1]: Finished initrd-parse-etc.service. Jul 2 10:28:55.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.244214 kernel: audit: type=1130 audit(1719916135.237:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:55.248472 systemd[1]: Reached target initrd-fs.target. Jul 2 10:28:55.253568 kernel: audit: type=1131 audit(1719916135.237:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.249821 systemd[1]: Reached target initrd.target. Jul 2 10:28:55.250470 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 10:28:55.252101 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 10:28:55.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.271941 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 10:28:55.292783 kernel: audit: type=1130 audit(1719916135.272:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.280049 systemd[1]: Starting initrd-cleanup.service... Jul 2 10:28:55.302953 systemd[1]: Stopped target nss-lookup.target. Jul 2 10:28:55.304550 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 10:28:55.306042 systemd[1]: Stopped target timers.target. Jul 2 10:28:55.306761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 10:28:55.306951 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 10:28:55.313807 kernel: audit: type=1131 audit(1719916135.308:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:55.308986 systemd[1]: Stopped target initrd.target. Jul 2 10:28:55.314569 systemd[1]: Stopped target basic.target. Jul 2 10:28:55.316470 systemd[1]: Stopped target ignition-complete.target. Jul 2 10:28:55.317299 systemd[1]: Stopped target ignition-diskful.target. Jul 2 10:28:55.318672 systemd[1]: Stopped target initrd-root-device.target. Jul 2 10:28:55.320143 systemd[1]: Stopped target remote-fs.target. Jul 2 10:28:55.321525 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 10:28:55.322705 systemd[1]: Stopped target sysinit.target. Jul 2 10:28:55.323378 systemd[1]: Stopped target local-fs.target. Jul 2 10:28:55.326719 systemd[1]: Stopped target local-fs-pre.target. Jul 2 10:28:55.336125 kernel: audit: type=1131 audit(1719916135.329:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.327397 systemd[1]: Stopped target swap.target. Jul 2 10:28:55.328253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 10:28:55.328440 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 10:28:55.345451 kernel: audit: type=1131 audit(1719916135.338:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.329770 systemd[1]: Stopped target cryptsetup.target. 
Jul 2 10:28:55.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.336778 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 10:28:55.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.336966 systemd[1]: Stopped dracut-initqueue.service. Jul 2 10:28:55.339375 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 10:28:55.339528 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 10:28:55.346330 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 10:28:55.346499 systemd[1]: Stopped ignition-files.service. Jul 2 10:28:55.350662 systemd[1]: Stopping ignition-mount.service... Jul 2 10:28:55.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.353709 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 10:28:55.353885 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 10:28:55.355903 systemd[1]: Stopping sysroot-boot.service... Jul 2 10:28:55.356778 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 10:28:55.357104 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 10:28:55.358144 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 10:28:55.358398 systemd[1]: Stopped dracut-pre-trigger.service. 
Jul 2 10:28:55.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.371442 ignition[872]: INFO : Ignition 2.14.0 Jul 2 10:28:55.371442 ignition[872]: INFO : Stage: umount Jul 2 10:28:55.371442 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 10:28:55.371442 ignition[872]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 10:28:55.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.371317 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 10:28:55.379248 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 10:28:55.379248 ignition[872]: INFO : umount: umount passed Jul 2 10:28:55.379248 ignition[872]: INFO : Ignition finished successfully Jul 2 10:28:55.371442 systemd[1]: Finished initrd-cleanup.service. Jul 2 10:28:55.381982 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 10:28:55.382827 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 10:28:55.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.382949 systemd[1]: Stopped ignition-mount.service. 
Jul 2 10:28:55.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.384025 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 10:28:55.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.384084 systemd[1]: Stopped ignition-disks.service. Jul 2 10:28:55.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.385164 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 10:28:55.385246 systemd[1]: Stopped ignition-kargs.service. Jul 2 10:28:55.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.386616 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 10:28:55.386686 systemd[1]: Stopped ignition-fetch.service. Jul 2 10:28:55.387944 systemd[1]: Stopped target network.target. Jul 2 10:28:55.389034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 10:28:55.389105 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 10:28:55.390344 systemd[1]: Stopped target paths.target. Jul 2 10:28:55.391459 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 10:28:55.394369 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 10:28:55.395085 systemd[1]: Stopped target slices.target. Jul 2 10:28:55.396284 systemd[1]: Stopped target sockets.target. 
Jul 2 10:28:55.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.397442 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 10:28:55.397491 systemd[1]: Closed iscsid.socket. Jul 2 10:28:55.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.398798 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 10:28:55.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.398844 systemd[1]: Closed iscsiuio.socket. Jul 2 10:28:55.399834 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 10:28:55.399894 systemd[1]: Stopped ignition-setup.service. Jul 2 10:28:55.401171 systemd[1]: Stopping systemd-networkd.service... Jul 2 10:28:55.402242 systemd[1]: Stopping systemd-resolved.service... Jul 2 10:28:55.403610 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 10:28:55.403730 systemd[1]: Stopped sysroot-boot.service. Jul 2 10:28:55.404522 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 10:28:55.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.404581 systemd[1]: Stopped initrd-setup-root.service. Jul 2 10:28:55.405731 systemd-networkd[712]: eth0: DHCPv6 lease lost Jul 2 10:28:55.410238 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 2 10:28:55.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.417000 audit: BPF prog-id=9 op=UNLOAD Jul 2 10:28:55.410498 systemd[1]: Stopped systemd-networkd.service. Jul 2 10:28:55.415818 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 10:28:55.420000 audit: BPF prog-id=6 op=UNLOAD Jul 2 10:28:55.415961 systemd[1]: Stopped systemd-resolved.service. Jul 2 10:28:55.418291 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 10:28:55.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.418356 systemd[1]: Closed systemd-networkd.socket. Jul 2 10:28:55.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.420064 systemd[1]: Stopping network-cleanup.service... Jul 2 10:28:55.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.421036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 10:28:55.421138 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 10:28:55.424477 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 10:28:55.424557 systemd[1]: Stopped systemd-sysctl.service. Jul 2 10:28:55.426000 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 10:28:55.426063 systemd[1]: Stopped systemd-modules-load.service. Jul 2 10:28:55.427183 systemd[1]: Stopping systemd-udevd.service... 
Jul 2 10:28:55.429764 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 10:28:55.432501 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 10:28:55.432815 systemd[1]: Stopped systemd-udevd.service. Jul 2 10:28:55.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.437180 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 10:28:55.437408 systemd[1]: Stopped network-cleanup.service. Jul 2 10:28:55.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.439063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 10:28:55.439123 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 10:28:55.440076 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 10:28:55.440125 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 10:28:55.444050 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 10:28:55.444398 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 10:28:55.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.459095 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 10:28:55.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.459181 systemd[1]: Stopped dracut-cmdline.service. Jul 2 10:28:55.460318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 2 10:28:55.460378 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 10:28:55.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.465113 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 10:28:55.465787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 10:28:55.465854 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 10:28:55.476074 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 10:28:55.476271 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 10:28:55.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:55.478719 systemd[1]: Reached target initrd-switch-root.target. Jul 2 10:28:55.480230 systemd[1]: Starting initrd-switch-root.service... Jul 2 10:28:55.498176 systemd[1]: Switching root. Jul 2 10:28:55.526704 iscsid[717]: iscsid shutting down. Jul 2 10:28:55.527473 systemd-journald[202]: Received SIGTERM from PID 1 (n/a). Jul 2 10:28:55.527557 systemd-journald[202]: Journal stopped Jul 2 10:29:00.154900 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 10:29:00.155053 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 10:29:00.155098 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 10:29:00.155118 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 10:29:00.155152 kernel: SELinux: policy capability open_perms=1 Jul 2 10:29:00.155178 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 10:29:00.155241 kernel: SELinux: policy capability always_check_network=0 Jul 2 10:29:00.155276 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 10:29:00.155301 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 10:29:00.155320 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 10:29:00.155342 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 10:29:00.155378 systemd[1]: Successfully loaded SELinux policy in 65.139ms. Jul 2 10:29:00.155428 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.155ms. Jul 2 10:29:00.155456 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 10:29:00.155483 systemd[1]: Detected virtualization kvm. Jul 2 10:29:00.155507 systemd[1]: Detected architecture x86-64. Jul 2 10:29:00.155527 systemd[1]: Detected first boot. Jul 2 10:29:00.155552 systemd[1]: Hostname set to . Jul 2 10:29:00.155580 systemd[1]: Initializing machine ID from VM UUID. Jul 2 10:29:00.155600 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 10:29:00.155619 systemd[1]: Populated /etc with preset unit settings. Jul 2 10:29:00.155640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 10:29:00.155677 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:29:00.155710 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:29:00.155743 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 10:29:00.155764 systemd[1]: Stopped iscsiuio.service. Jul 2 10:29:00.155790 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 10:29:00.155810 systemd[1]: Stopped iscsid.service. Jul 2 10:29:00.155829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 10:29:00.155854 systemd[1]: Stopped initrd-switch-root.service. Jul 2 10:29:00.155879 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 10:29:00.155906 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 10:29:00.155926 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 10:29:00.155952 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 10:29:00.155977 systemd[1]: Created slice system-getty.slice. Jul 2 10:29:00.156002 systemd[1]: Created slice system-modprobe.slice. Jul 2 10:29:00.156027 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 10:29:00.156047 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 10:29:00.156067 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 10:29:00.156094 systemd[1]: Created slice user.slice. Jul 2 10:29:00.156114 systemd[1]: Started systemd-ask-password-console.path. Jul 2 10:29:00.156134 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 10:29:00.156176 systemd[1]: Set up automount boot.automount. Jul 2 10:29:00.156211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Jul 2 10:29:00.156382 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 10:29:00.156424 systemd[1]: Stopped target initrd-fs.target. Jul 2 10:29:00.156446 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 10:29:00.156465 systemd[1]: Reached target integritysetup.target. Jul 2 10:29:00.156484 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 10:29:00.156527 systemd[1]: Reached target remote-fs.target. Jul 2 10:29:00.156561 systemd[1]: Reached target slices.target. Jul 2 10:29:00.156582 systemd[1]: Reached target swap.target. Jul 2 10:29:00.156601 systemd[1]: Reached target torcx.target. Jul 2 10:29:00.156620 systemd[1]: Reached target veritysetup.target. Jul 2 10:29:00.156639 systemd[1]: Listening on systemd-coredump.socket. Jul 2 10:29:00.156667 systemd[1]: Listening on systemd-initctl.socket. Jul 2 10:29:00.156694 systemd[1]: Listening on systemd-networkd.socket. Jul 2 10:29:00.156719 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 10:29:00.156740 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 10:29:00.156758 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 10:29:00.156783 systemd[1]: Mounting dev-hugepages.mount... Jul 2 10:29:00.156803 systemd[1]: Mounting dev-mqueue.mount... Jul 2 10:29:00.156823 systemd[1]: Mounting media.mount... Jul 2 10:29:00.156842 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:00.156866 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 10:29:00.156892 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 10:29:00.156912 systemd[1]: Mounting tmp.mount... Jul 2 10:29:00.156932 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 10:29:00.156951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:29:00.156970 systemd[1]: Starting kmod-static-nodes.service... Jul 2 10:29:00.156989 systemd[1]: Starting modprobe@configfs.service... 
Jul 2 10:29:00.157013 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:29:00.157033 systemd[1]: Starting modprobe@drm.service... Jul 2 10:29:00.157066 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:29:00.157087 systemd[1]: Starting modprobe@fuse.service... Jul 2 10:29:00.157106 systemd[1]: Starting modprobe@loop.service... Jul 2 10:29:00.157132 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 10:29:00.157169 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 10:29:00.157191 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 10:29:00.157818 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 10:29:00.157852 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 10:29:00.157882 systemd[1]: Stopped systemd-journald.service. Jul 2 10:29:00.157916 systemd[1]: Starting systemd-journald.service... Jul 2 10:29:00.157944 kernel: fuse: init (API version 7.34) Jul 2 10:29:00.157966 systemd[1]: Starting systemd-modules-load.service... Jul 2 10:29:00.157991 systemd[1]: Starting systemd-network-generator.service... Jul 2 10:29:00.158012 systemd[1]: Starting systemd-remount-fs.service... Jul 2 10:29:00.158030 kernel: loop: module loaded Jul 2 10:29:00.158055 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 10:29:00.158075 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 10:29:00.158094 systemd[1]: Stopped verity-setup.service. Jul 2 10:29:00.158125 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:00.158159 systemd[1]: Mounted dev-hugepages.mount. Jul 2 10:29:00.158180 systemd[1]: Mounted dev-mqueue.mount. Jul 2 10:29:00.158222 systemd[1]: Mounted media.mount. Jul 2 10:29:00.158252 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 10:29:00.158278 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 2 10:29:00.158304 systemd[1]: Mounted tmp.mount. Jul 2 10:29:00.158325 systemd[1]: Finished kmod-static-nodes.service. Jul 2 10:29:00.158344 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 10:29:00.158363 systemd[1]: Finished modprobe@configfs.service. Jul 2 10:29:00.158389 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 10:29:00.158409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:29:00.158435 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:29:00.158456 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 10:29:00.158486 systemd[1]: Finished modprobe@drm.service. Jul 2 10:29:00.158515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:29:00.158535 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:29:00.158560 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 10:29:00.158580 systemd[1]: Finished modprobe@fuse.service. Jul 2 10:29:00.158604 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:29:00.158625 systemd[1]: Finished modprobe@loop.service. Jul 2 10:29:00.158645 systemd[1]: Finished systemd-network-generator.service. Jul 2 10:29:00.158664 systemd[1]: Finished systemd-remount-fs.service. Jul 2 10:29:00.158689 systemd[1]: Reached target network-pre.target. Jul 2 10:29:00.158709 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 10:29:00.158728 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 10:29:00.158747 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 10:29:00.158771 systemd-journald[982]: Journal started Jul 2 10:29:00.158864 systemd-journald[982]: Runtime Journal (/run/log/journal/ff11528053394f8bb5e9d191e7da4cd2) is 4.7M, max 38.1M, 33.3M free. 
Jul 2 10:28:55.739000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 10:28:55.856000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 10:28:55.856000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 10:28:55.856000 audit: BPF prog-id=10 op=LOAD Jul 2 10:28:55.856000 audit: BPF prog-id=10 op=UNLOAD Jul 2 10:28:55.856000 audit: BPF prog-id=11 op=LOAD Jul 2 10:28:55.856000 audit: BPF prog-id=11 op=UNLOAD Jul 2 10:28:56.080000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 10:28:56.080000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:28:56.080000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 10:28:56.085000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 10:28:56.085000 audit[904]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:28:56.085000 audit: CWD cwd="/" Jul 2 10:28:56.085000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:28:56.085000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:28:56.085000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 10:28:59.821000 audit: BPF prog-id=12 op=LOAD Jul 2 10:28:59.821000 audit: BPF prog-id=3 op=UNLOAD Jul 2 10:28:59.821000 audit: BPF prog-id=13 op=LOAD Jul 2 10:28:59.821000 audit: BPF prog-id=14 op=LOAD Jul 2 10:28:59.821000 audit: BPF prog-id=4 op=UNLOAD Jul 2 10:28:59.821000 audit: BPF prog-id=5 op=UNLOAD Jul 2 10:28:59.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:59.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:28:59.829000 audit: BPF prog-id=12 op=UNLOAD Jul 2 10:28:59.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:59.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:59.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:59.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:28:59.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:00.003000 audit: BPF prog-id=15 op=LOAD Jul 2 10:29:00.003000 audit: BPF prog-id=16 op=LOAD Jul 2 10:29:00.003000 audit: BPF prog-id=17 op=LOAD Jul 2 10:29:00.006000 audit: BPF prog-id=13 op=UNLOAD Jul 2 10:29:00.006000 audit: BPF prog-id=14 op=UNLOAD Jul 2 10:29:00.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:00.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:00.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.152000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 10:29:00.152000 audit[982]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff1ced8760 a2=4000 a3=7fff1ced87fc items=0 ppid=1 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:29:00.152000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 10:28:59.816885 systemd[1]: Queued start job for default target multi-user.target. Jul 2 10:29:00.194794 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 10:29:00.194839 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:29:00.194866 systemd[1]: Starting systemd-random-seed.service... Jul 2 10:29:00.194890 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 2 10:28:56.065221 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:28:59.816909 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 10:28:56.069487 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 10:28:59.824040 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 10:28:56.069526 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 10:28:56.069635 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 10:28:56.069653 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 10:28:56.069718 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 10:28:56.069741 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 10:28:56.070300 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 10:28:56.070365 /usr/lib/systemd/system-generators/torcx-generator[904]: 
time="2024-07-02T10:28:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 10:28:56.070389 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 10:28:56.079031 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 10:28:56.079097 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 10:28:56.079131 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 10:28:56.079158 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 10:28:56.079215 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 10:28:56.079260 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 10:28:59.200506 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl Jul 2 10:28:59.200886 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:28:59.201096 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:28:59.201478 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 10:28:59.201569 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 10:28:59.201694 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-07-02T10:28:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 10:29:00.218344 systemd[1]: Starting systemd-sysusers.service... 
Jul 2 10:29:00.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.226414 systemd[1]: Started systemd-journald.service. Jul 2 10:29:00.226478 kernel: kauditd_printk_skb: 87 callbacks suppressed Jul 2 10:29:00.226513 kernel: audit: type=1130 audit(1719916140.224:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.233022 systemd[1]: Finished systemd-modules-load.service. Jul 2 10:29:00.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.239225 kernel: audit: type=1130 audit(1719916140.233:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.239461 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 10:29:00.240324 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 10:29:00.252442 systemd[1]: Starting systemd-journal-flush.service... Jul 2 10:29:00.255421 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:29:00.267421 systemd[1]: Finished systemd-random-seed.service. Jul 2 10:29:00.268279 systemd[1]: Reached target first-boot-complete.target. Jul 2 10:29:00.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:00.274242 kernel: audit: type=1130 audit(1719916140.267:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.280728 systemd-journald[982]: Time spent on flushing to /var/log/journal/ff11528053394f8bb5e9d191e7da4cd2 is 52.420ms for 1291 entries. Jul 2 10:29:00.280728 systemd-journald[982]: System Journal (/var/log/journal/ff11528053394f8bb5e9d191e7da4cd2) is 8.0M, max 584.8M, 576.8M free. Jul 2 10:29:00.348506 systemd-journald[982]: Received client request to flush runtime journal. Jul 2 10:29:00.348577 kernel: audit: type=1130 audit(1719916140.294:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.348614 kernel: audit: type=1130 audit(1719916140.316:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.293830 systemd[1]: Finished systemd-sysusers.service. Jul 2 10:29:00.316419 systemd[1]: Finished systemd-sysctl.service. Jul 2 10:29:00.349764 systemd[1]: Finished systemd-journal-flush.service. 
Jul 2 10:29:00.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.356223 kernel: audit: type=1130 audit(1719916140.350:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.381950 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 10:29:00.384771 systemd[1]: Starting systemd-udev-settle.service... Jul 2 10:29:00.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.390760 kernel: audit: type=1130 audit(1719916140.382:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.403028 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 10:29:00.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:00.941909 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 10:29:00.950602 kernel: audit: type=1130 audit(1719916140.943:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:00.950709 kernel: audit: type=1334 audit(1719916140.948:133): prog-id=18 op=LOAD Jul 2 10:29:00.948000 audit: BPF prog-id=18 op=LOAD Jul 2 10:29:00.952636 kernel: audit: type=1334 audit(1719916140.950:134): prog-id=19 op=LOAD Jul 2 10:29:00.950000 audit: BPF prog-id=19 op=LOAD Jul 2 10:29:00.951698 systemd[1]: Starting systemd-udevd.service... Jul 2 10:29:00.950000 audit: BPF prog-id=7 op=UNLOAD Jul 2 10:29:00.950000 audit: BPF prog-id=8 op=UNLOAD Jul 2 10:29:00.976927 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Jul 2 10:29:01.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.056000 audit: BPF prog-id=20 op=LOAD Jul 2 10:29:01.040491 systemd[1]: Started systemd-udevd.service. Jul 2 10:29:01.057416 systemd[1]: Starting systemd-networkd.service... Jul 2 10:29:01.088000 audit: BPF prog-id=21 op=LOAD Jul 2 10:29:01.090000 audit: BPF prog-id=22 op=LOAD Jul 2 10:29:01.090000 audit: BPF prog-id=23 op=LOAD Jul 2 10:29:01.092541 systemd[1]: Starting systemd-userdbd.service... Jul 2 10:29:01.152649 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 10:29:01.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.181061 systemd[1]: Started systemd-userdbd.service. Jul 2 10:29:01.315222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 2 10:29:01.332399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 2 10:29:01.359351 systemd-networkd[1028]: lo: Link UP Jul 2 10:29:01.359365 systemd-networkd[1028]: lo: Gained carrier Jul 2 10:29:01.360152 systemd-networkd[1028]: Enumeration completed Jul 2 10:29:01.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.360303 systemd[1]: Started systemd-networkd.service. Jul 2 10:29:01.360337 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 10:29:01.364333 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 10:29:01.363136 systemd-networkd[1028]: eth0: Link UP Jul 2 10:29:01.363148 systemd-networkd[1028]: eth0: Gained carrier Jul 2 10:29:01.368251 kernel: ACPI: button: Power Button [PWRF] Jul 2 10:29:01.366000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 10:29:01.366000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f707363ca0 a1=3207c a2=7f0c4341dbc5 a3=5 items=108 ppid=1015 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:29:01.366000 audit: CWD cwd="/" Jul 2 10:29:01.366000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=1 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=2 name=(null) 
inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=3 name=(null) inode=14232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=4 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=5 name=(null) inode=14233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=6 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=7 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=8 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=9 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=10 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=11 name=(null) inode=14236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=12 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=13 name=(null) inode=14237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=14 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=15 name=(null) inode=14238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=16 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=17 name=(null) inode=14239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=18 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=19 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=20 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=21 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=22 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=23 name=(null) inode=14242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=24 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=25 name=(null) inode=14243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=26 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=27 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=28 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=29 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=30 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=31 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=32 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=33 name=(null) inode=14247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=34 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=35 name=(null) inode=14248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=36 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=37 name=(null) inode=14249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=38 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH 
item=39 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=40 name=(null) inode=14246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=41 name=(null) inode=14251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=42 name=(null) inode=14231 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=43 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=44 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=45 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=46 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=47 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=48 name=(null) inode=14252 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=49 name=(null) inode=14255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=50 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=51 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=52 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=53 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=55 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=56 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=57 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=58 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=59 name=(null) inode=14260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=60 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=61 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=62 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=63 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=64 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=65 name=(null) inode=14263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=66 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=67 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=68 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=69 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=70 name=(null) inode=14261 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=71 name=(null) inode=14266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=72 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=73 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=74 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=75 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 10:29:01.366000 audit: PATH item=76 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=77 name=(null) inode=14269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=78 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=79 name=(null) inode=14270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=80 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=81 name=(null) inode=14271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=82 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=83 name=(null) inode=14272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=84 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=85 
name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=86 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=87 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=88 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=89 name=(null) inode=14275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=90 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=91 name=(null) inode=14276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=92 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=93 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=94 name=(null) inode=14273 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=95 name=(null) inode=14278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=96 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=97 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=98 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=99 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=100 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=101 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=102 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=103 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=104 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=105 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=106 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PATH item=107 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 10:29:01.366000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 10:29:01.379411 systemd-networkd[1028]: eth0: DHCPv4 address 10.230.55.230/30, gateway 10.230.55.229 acquired from 10.230.55.229 Jul 2 10:29:01.401266 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jul 2 10:29:01.468226 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 2 10:29:01.471222 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 2 10:29:01.471524 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 2 10:29:01.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.581146 systemd[1]: Finished systemd-udev-settle.service. Jul 2 10:29:01.583940 systemd[1]: Starting lvm2-activation-early.service... 
Jul 2 10:29:01.619588 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:29:01.653575 systemd[1]: Finished lvm2-activation-early.service. Jul 2 10:29:01.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.654497 systemd[1]: Reached target cryptsetup.target. Jul 2 10:29:01.657737 systemd[1]: Starting lvm2-activation.service... Jul 2 10:29:01.665209 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 10:29:01.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.697770 systemd[1]: Finished lvm2-activation.service. Jul 2 10:29:01.698604 systemd[1]: Reached target local-fs-pre.target. Jul 2 10:29:01.699229 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 10:29:01.699260 systemd[1]: Reached target local-fs.target. Jul 2 10:29:01.699829 systemd[1]: Reached target machines.target. Jul 2 10:29:01.702152 systemd[1]: Starting ldconfig.service... Jul 2 10:29:01.705297 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:29:01.705357 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:01.706947 systemd[1]: Starting systemd-boot-update.service... Jul 2 10:29:01.709435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 10:29:01.711993 systemd[1]: Starting systemd-machine-id-commit.service... 
Jul 2 10:29:01.714163 systemd[1]: Starting systemd-sysext.service... Jul 2 10:29:01.733659 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 10:29:01.736561 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Jul 2 10:29:01.738660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 10:29:01.761669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 10:29:01.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.773125 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 10:29:01.773557 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 10:29:01.811259 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 10:29:01.904961 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 10:29:01.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:01.906363 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 10:29:01.943393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 10:29:01.965476 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 10:29:02.014760 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Jul 2 10:29:02.014760 systemd-fsck[1056]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 10:29:02.015224 (sd-sysext)[1059]: Using extensions 'kubernetes'. Jul 2 10:29:02.018397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 2 10:29:02.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.021329 systemd[1]: Mounting boot.mount... Jul 2 10:29:02.025060 (sd-sysext)[1059]: Merged extensions into '/usr'. Jul 2 10:29:02.069543 systemd[1]: Mounted boot.mount. Jul 2 10:29:02.073136 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:02.080410 systemd[1]: Mounting usr-share-oem.mount... Jul 2 10:29:02.081433 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.083394 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:29:02.087155 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:29:02.090385 systemd[1]: Starting modprobe@loop.service... Jul 2 10:29:02.091088 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.091344 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:02.091549 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:02.098648 systemd[1]: Mounted usr-share-oem.mount. Jul 2 10:29:02.099856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:29:02.100071 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:29:02.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:02.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.101357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:29:02.101569 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:29:02.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.103756 systemd[1]: Finished systemd-boot-update.service. Jul 2 10:29:02.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.105161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:29:02.105389 systemd[1]: Finished modprobe@loop.service. Jul 2 10:29:02.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.107110 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 10:29:02.107300 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.109972 systemd[1]: Finished systemd-sysext.service. Jul 2 10:29:02.114574 systemd[1]: Starting ensure-sysext.service... Jul 2 10:29:02.116784 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 10:29:02.129948 systemd[1]: Reloading. Jul 2 10:29:02.202553 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 10:29:02.223691 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 10:29:02.245871 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-07-02T10:29:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:29:02.245924 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-07-02T10:29:02Z" level=info msg="torcx already run" Jul 2 10:29:02.267312 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 10:29:02.428048 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:29:02.429440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 10:29:02.457046 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:29:02.547000 audit: BPF prog-id=24 op=LOAD Jul 2 10:29:02.547000 audit: BPF prog-id=21 op=UNLOAD Jul 2 10:29:02.548000 audit: BPF prog-id=25 op=LOAD Jul 2 10:29:02.548000 audit: BPF prog-id=26 op=LOAD Jul 2 10:29:02.548000 audit: BPF prog-id=22 op=UNLOAD Jul 2 10:29:02.548000 audit: BPF prog-id=23 op=UNLOAD Jul 2 10:29:02.555000 audit: BPF prog-id=27 op=LOAD Jul 2 10:29:02.557000 audit: BPF prog-id=28 op=LOAD Jul 2 10:29:02.557000 audit: BPF prog-id=18 op=UNLOAD Jul 2 10:29:02.557000 audit: BPF prog-id=19 op=UNLOAD Jul 2 10:29:02.559000 audit: BPF prog-id=29 op=LOAD Jul 2 10:29:02.559000 audit: BPF prog-id=15 op=UNLOAD Jul 2 10:29:02.560000 audit: BPF prog-id=30 op=LOAD Jul 2 10:29:02.560000 audit: BPF prog-id=31 op=LOAD Jul 2 10:29:02.560000 audit: BPF prog-id=16 op=UNLOAD Jul 2 10:29:02.560000 audit: BPF prog-id=17 op=UNLOAD Jul 2 10:29:02.568000 audit: BPF prog-id=32 op=LOAD Jul 2 10:29:02.568000 audit: BPF prog-id=20 op=UNLOAD Jul 2 10:29:02.596736 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.600153 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:29:02.604157 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:29:02.611549 systemd[1]: Starting modprobe@loop.service... Jul 2 10:29:02.612336 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.612537 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:02.614236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 10:29:02.614532 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:29:02.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.629569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:29:02.629795 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:29:02.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.631134 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:29:02.631325 systemd[1]: Finished modprobe@loop.service. Jul 2 10:29:02.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.641686 systemd[1]: Finished ensure-sysext.service. 
Jul 2 10:29:02.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.643724 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.645544 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 10:29:02.647927 systemd[1]: Starting modprobe@drm.service... Jul 2 10:29:02.651683 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 10:29:02.653621 systemd[1]: Starting modprobe@loop.service... Jul 2 10:29:02.655058 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 10:29:02.655187 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:02.656740 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 10:29:02.659516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 10:29:02.659757 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 10:29:02.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.661293 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 10:29:02.661483 systemd[1]: Finished modprobe@drm.service. 
Jul 2 10:29:02.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.662593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 10:29:02.662782 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 10:29:02.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.663902 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 10:29:02.664101 systemd[1]: Finished modprobe@loop.service. Jul 2 10:29:02.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.665003 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 10:29:02.665076 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 2 10:29:02.780116 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 10:29:02.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.781310 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:02.784743 systemd[1]: Starting audit-rules.service... Jul 2 10:29:02.790219 systemd[1]: Starting clean-ca-certificates.service... Jul 2 10:29:02.800044 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 10:29:02.822000 audit: BPF prog-id=33 op=LOAD Jul 2 10:29:02.827711 systemd[1]: Starting systemd-resolved.service... Jul 2 10:29:02.839000 audit: BPF prog-id=34 op=LOAD Jul 2 10:29:02.841300 systemd[1]: Starting systemd-timesyncd.service... Jul 2 10:29:02.846694 systemd[1]: Starting systemd-update-utmp.service... Jul 2 10:29:02.847681 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 10:29:02.848855 systemd[1]: Finished clean-ca-certificates.service. Jul 2 10:29:02.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.853227 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 10:29:02.862000 audit[1152]: SYSTEM_BOOT pid=1152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jul 2 10:29:02.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.867018 systemd[1]: Finished systemd-update-utmp.service. Jul 2 10:29:02.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.979459 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 10:29:02.984794 systemd[1]: Started systemd-timesyncd.service. Jul 2 10:29:02.985622 systemd[1]: Reached target time-set.target. Jul 2 10:29:02.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:02.989714 systemd-resolved[1149]: Positive Trust Anchors: Jul 2 10:29:02.989784 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 10:29:02.989822 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 10:29:03.036805 systemd-resolved[1149]: Using system hostname 'srv-ehxin.gb1.brightbox.com'. 
Jul 2 10:29:03.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 10:29:03.040926 systemd[1]: Started systemd-resolved.service. Jul 2 10:29:03.043463 systemd[1]: Reached target network.target. Jul 2 10:29:03.044047 systemd[1]: Reached target nss-lookup.target. Jul 2 10:29:03.044000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 10:29:03.044000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd77971490 a2=420 a3=0 items=0 ppid=1144 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 10:29:03.044000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 10:29:03.044879 augenrules[1165]: No rules Jul 2 10:29:03.046250 systemd[1]: Finished audit-rules.service. Jul 2 10:29:03.152500 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 10:29:03.166519 systemd[1]: Finished ldconfig.service. Jul 2 10:29:03.172478 systemd[1]: Starting systemd-update-done.service... Jul 2 10:29:03.184378 systemd[1]: Finished systemd-update-done.service. Jul 2 10:29:03.185235 systemd[1]: Reached target sysinit.target. Jul 2 10:29:03.185932 systemd[1]: Started motdgen.path. Jul 2 10:29:03.186548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 10:29:03.187475 systemd[1]: Started logrotate.timer. Jul 2 10:29:03.188187 systemd[1]: Started mdadm.timer. Jul 2 10:29:03.188736 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Jul 2 10:29:03.189344 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 10:29:03.189399 systemd[1]: Reached target paths.target. Jul 2 10:29:03.189945 systemd[1]: Reached target timers.target. Jul 2 10:29:03.190969 systemd[1]: Listening on dbus.socket. Jul 2 10:29:03.193147 systemd[1]: Starting docker.socket... Jul 2 10:29:03.200112 systemd[1]: Listening on sshd.socket. Jul 2 10:29:03.200885 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:03.201545 systemd[1]: Listening on docker.socket. Jul 2 10:29:03.202246 systemd[1]: Reached target sockets.target. Jul 2 10:29:03.202819 systemd[1]: Reached target basic.target. Jul 2 10:29:03.203451 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:29:03.203494 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 10:29:03.204999 systemd[1]: Starting containerd.service... Jul 2 10:29:03.208879 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 10:29:03.226914 systemd[1]: Starting dbus.service... Jul 2 10:29:03.229692 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 10:29:03.232346 systemd[1]: Starting extend-filesystems.service... Jul 2 10:29:03.233175 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 10:29:03.234932 systemd[1]: Starting motdgen.service... Jul 2 10:29:03.241176 systemd[1]: Starting prepare-helm.service... Jul 2 10:29:03.245576 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 10:29:03.254080 systemd[1]: Starting sshd-keygen.service... 
Jul 2 10:29:03.286060 jq[1177]: false Jul 2 10:29:03.287049 systemd-timesyncd[1151]: Contacted time server 131.111.8.61:123 (0.flatcar.pool.ntp.org). Jul 2 10:29:03.287157 systemd-timesyncd[1151]: Initial clock synchronization to Tue 2024-07-02 10:29:03.659383 UTC. Jul 2 10:29:03.291895 systemd[1]: Starting systemd-logind.service... Jul 2 10:29:03.299721 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 10:29:03.299856 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 10:29:03.300623 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 10:29:03.302371 systemd[1]: Starting update-engine.service... Jul 2 10:29:03.305334 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 10:29:03.311281 jq[1195]: true Jul 2 10:29:03.311376 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 10:29:03.312168 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 10:29:03.315117 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 10:29:03.315530 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 10:29:03.344192 extend-filesystems[1179]: Found loop1 Jul 2 10:29:03.345539 extend-filesystems[1179]: Found vda Jul 2 10:29:03.346444 extend-filesystems[1179]: Found vda1 Jul 2 10:29:03.347239 extend-filesystems[1179]: Found vda2 Jul 2 10:29:03.347939 extend-filesystems[1179]: Found vda3 Jul 2 10:29:03.349375 extend-filesystems[1179]: Found usr Jul 2 10:29:03.349375 extend-filesystems[1179]: Found vda4 Jul 2 10:29:03.349375 extend-filesystems[1179]: Found vda6 Jul 2 10:29:03.349375 extend-filesystems[1179]: Found vda7 Jul 2 10:29:03.349375 extend-filesystems[1179]: Found vda9 Jul 2 10:29:03.349375 extend-filesystems[1179]: Checking size of /dev/vda9 Jul 2 10:29:03.353527 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 10:29:03.353796 systemd[1]: Finished motdgen.service. Jul 2 10:29:03.363593 systemd-networkd[1028]: eth0: Gained IPv6LL Jul 2 10:29:03.366585 jq[1200]: true Jul 2 10:29:03.372899 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 10:29:03.373794 systemd[1]: Reached target network-online.target. Jul 2 10:29:03.387807 systemd[1]: Starting kubelet.service... Jul 2 10:29:03.405741 tar[1199]: linux-amd64/helm Jul 2 10:29:03.428123 extend-filesystems[1179]: Resized partition /dev/vda9 Jul 2 10:29:03.468281 extend-filesystems[1230]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 10:29:03.481427 systemd-logind[1191]: Watching system buttons on /dev/input/event2 (Power Button) Jul 2 10:29:03.481914 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 10:29:03.482354 systemd-logind[1191]: New seat seat0. 
Jul 2 10:29:03.495294 env[1203]: time="2024-07-02T10:29:03.495152540Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 10:29:03.530528 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 2 10:29:03.568558 dbus-daemon[1175]: [system] SELinux support is enabled Jul 2 10:29:03.569476 systemd[1]: Started dbus.service. Jul 2 10:29:03.573141 dbus-daemon[1175]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1028 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 10:29:03.572802 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 10:29:03.577109 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 10:29:03.572852 systemd[1]: Reached target system-config.target. Jul 2 10:29:03.573527 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 10:29:03.573553 systemd[1]: Reached target user-config.target. Jul 2 10:29:03.577039 systemd[1]: Started systemd-logind.service. Jul 2 10:29:03.586760 systemd[1]: Starting systemd-hostnamed.service... Jul 2 10:29:03.603005 env[1203]: time="2024-07-02T10:29:03.602887252Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 2 10:29:03.644727 update_engine[1193]: I0702 10:29:03.640690 1193 main.cc:92] Flatcar Update Engine starting Jul 2 10:29:03.704738 update_engine[1193]: I0702 10:29:03.654422 1193 update_check_scheduler.cc:74] Next update check in 8m56s Jul 2 10:29:03.678965 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 10:29:03.649069 systemd[1]: Started update-engine.service. Jul 2 10:29:03.709736 env[1203]: time="2024-07-02T10:29:03.704758075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.709736 env[1203]: time="2024-07-02T10:29:03.709601239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:29:03.709736 env[1203]: time="2024-07-02T10:29:03.709668921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.684017 dbus-daemon[1175]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1235 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 10:29:03.653358 systemd[1]: Started locksmithd.service. Jul 2 10:29:03.681460 systemd[1]: Started systemd-hostnamed.service. Jul 2 10:29:03.710494 env[1203]: time="2024-07-02T10:29:03.710102881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:29:03.710494 env[1203]: time="2024-07-02T10:29:03.710148363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.710494 env[1203]: time="2024-07-02T10:29:03.710189701Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 10:29:03.710494 env[1203]: time="2024-07-02T10:29:03.710242094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.691440 systemd[1]: Starting polkit.service... Jul 2 10:29:03.710880 env[1203]: time="2024-07-02T10:29:03.710571578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.712095 env[1203]: time="2024-07-02T10:29:03.711265957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 10:29:03.712095 env[1203]: time="2024-07-02T10:29:03.711505177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 10:29:03.712095 env[1203]: time="2024-07-02T10:29:03.711550179Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 10:29:03.712095 env[1203]: time="2024-07-02T10:29:03.711661304Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 10:29:03.712095 env[1203]: time="2024-07-02T10:29:03.711687696Z" level=info msg="metadata content store policy set" policy=shared Jul 2 10:29:03.736066 polkitd[1237]: Started polkitd version 121 Jul 2 10:29:03.830268 bash[1227]: Updated "/home/core/.ssh/authorized_keys" Jul 2 10:29:03.831415 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 10:29:03.845778 polkitd[1237]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 10:29:03.874250 polkitd[1237]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 10:29:03.898510 polkitd[1237]: Finished loading, compiling and executing 2 rules Jul 2 10:29:03.899373 dbus-daemon[1175]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 10:29:03.899578 systemd[1]: Started polkit.service. Jul 2 10:29:03.900779 polkitd[1237]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 10:29:03.907669 env[1203]: time="2024-07-02T10:29:03.907595698Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 10:29:03.907669 env[1203]: time="2024-07-02T10:29:03.907673104Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907710426Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907796887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907827792Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907856562Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907882004Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907912877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907940406Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.907964966Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.908003416Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.908081 env[1203]: time="2024-07-02T10:29:03.908033152Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 10:29:03.908618 env[1203]: time="2024-07-02T10:29:03.908240639Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 10:29:03.908618 env[1203]: time="2024-07-02T10:29:03.908460582Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 10:29:03.908927 env[1203]: time="2024-07-02T10:29:03.908898416Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 10:29:03.909009 env[1203]: time="2024-07-02T10:29:03.908957978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 10:29:03.909009 env[1203]: time="2024-07-02T10:29:03.908987791Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 10:29:03.909139 env[1203]: time="2024-07-02T10:29:03.909092361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909139 env[1203]: time="2024-07-02T10:29:03.909120587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909237 env[1203]: time="2024-07-02T10:29:03.909147713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909237 env[1203]: time="2024-07-02T10:29:03.909171243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909237 env[1203]: time="2024-07-02T10:29:03.909209021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909369 env[1203]: time="2024-07-02T10:29:03.909237336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909369 env[1203]: time="2024-07-02T10:29:03.909257458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909369 env[1203]: time="2024-07-02T10:29:03.909275204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909369 env[1203]: time="2024-07-02T10:29:03.909300898Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 10:29:03.909569 env[1203]: time="2024-07-02T10:29:03.909511601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 2 10:29:03.909569 env[1203]: time="2024-07-02T10:29:03.909536918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909569 env[1203]: time="2024-07-02T10:29:03.909556689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 10:29:03.909849 env[1203]: time="2024-07-02T10:29:03.909576918Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 10:29:03.909849 env[1203]: time="2024-07-02T10:29:03.909605553Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 10:29:03.909849 env[1203]: time="2024-07-02T10:29:03.909624919Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 10:29:03.909849 env[1203]: time="2024-07-02T10:29:03.909703718Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 10:29:03.909849 env[1203]: time="2024-07-02T10:29:03.909794284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 10:29:03.910317 env[1203]: time="2024-07-02T10:29:03.910148717Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.910410465Z" level=info msg="Connect containerd service" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.910489525Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911474247Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911606529Z" level=info msg="Start subscribing containerd event" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911665988Z" level=info msg="Start recovering state" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911781314Z" level=info msg="Start event monitor" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911806364Z" level=info msg="Start snapshots syncer" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911823023Z" level=info msg="Start cni network conf syncer for default" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.911835312Z" level=info msg="Start streaming server" Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.912608763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 10:29:03.919418 env[1203]: time="2024-07-02T10:29:03.912740205Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 10:29:03.923937 systemd[1]: Started containerd.service. 
Jul 2 10:29:03.926991 env[1203]: time="2024-07-02T10:29:03.925655651Z" level=info msg="containerd successfully booted in 0.436900s" Jul 2 10:29:03.982081 systemd-hostnamed[1235]: Hostname set to (static) Jul 2 10:29:04.002531 systemd-networkd[1028]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8df9:24:19ff:fee6:37e6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8df9:24:19ff:fee6:37e6/64 assigned by NDisc. Jul 2 10:29:04.002545 systemd-networkd[1028]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 2 10:29:04.418369 tar[1199]: linux-amd64/LICENSE Jul 2 10:29:04.418924 tar[1199]: linux-amd64/README.md Jul 2 10:29:04.425291 systemd[1]: Finished prepare-helm.service. Jul 2 10:29:04.489943 locksmithd[1236]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 10:29:04.936189 systemd[1]: Created slice system-sshd.slice. Jul 2 10:29:04.961687 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 2 10:29:05.006931 extend-filesystems[1230]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 10:29:05.006931 extend-filesystems[1230]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 2 10:29:05.006931 extend-filesystems[1230]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 2 10:29:05.010721 extend-filesystems[1179]: Resized filesystem in /dev/vda9 Jul 2 10:29:05.012288 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 10:29:05.012526 systemd[1]: Finished extend-filesystems.service. Jul 2 10:29:05.196419 sshd_keygen[1192]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 10:29:05.235907 systemd[1]: Finished sshd-keygen.service. Jul 2 10:29:05.241136 systemd[1]: Starting issuegen.service... Jul 2 10:29:05.243981 systemd[1]: Started sshd@0-10.230.55.230:22-147.75.109.163:55864.service. Jul 2 10:29:05.254185 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 2 10:29:05.254440 systemd[1]: Finished issuegen.service. Jul 2 10:29:05.257368 systemd[1]: Starting systemd-user-sessions.service... Jul 2 10:29:05.266813 systemd[1]: Finished systemd-user-sessions.service. Jul 2 10:29:05.269645 systemd[1]: Started getty@tty1.service. Jul 2 10:29:05.272988 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 10:29:05.274520 systemd[1]: Reached target getty.target. Jul 2 10:29:05.889826 systemd[1]: Started kubelet.service. Jul 2 10:29:06.240129 sshd[1263]: Accepted publickey for core from 147.75.109.163 port 55864 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:06.246045 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:06.283171 systemd[1]: Created slice user-500.slice. Jul 2 10:29:06.288433 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 10:29:06.315599 systemd-logind[1191]: New session 1 of user core. Jul 2 10:29:06.367242 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 10:29:06.372563 systemd[1]: Starting user@500.service... Jul 2 10:29:06.390447 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:06.642423 systemd[1279]: Queued start job for default target default.target. Jul 2 10:29:06.643995 systemd[1279]: Reached target paths.target. Jul 2 10:29:06.646516 systemd[1279]: Reached target sockets.target. Jul 2 10:29:06.646547 systemd[1279]: Reached target timers.target. Jul 2 10:29:06.646567 systemd[1279]: Reached target basic.target. Jul 2 10:29:06.646643 systemd[1279]: Reached target default.target. Jul 2 10:29:06.646697 systemd[1279]: Startup finished in 222ms. Jul 2 10:29:06.647413 systemd[1]: Started user@500.service. Jul 2 10:29:06.650928 systemd[1]: Started session-1.scope. Jul 2 10:29:07.338281 systemd[1]: Started sshd@1-10.230.55.230:22-147.75.109.163:55878.service. 
Jul 2 10:29:07.621938 kubelet[1272]: E0702 10:29:07.621513 1272 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:29:07.624396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:29:07.624606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 10:29:07.625105 systemd[1]: kubelet.service: Consumed 1.128s CPU time. Jul 2 10:29:08.244514 sshd[1289]: Accepted publickey for core from 147.75.109.163 port 55878 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:08.247100 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:08.259755 systemd-logind[1191]: New session 2 of user core. Jul 2 10:29:08.261058 systemd[1]: Started session-2.scope. Jul 2 10:29:08.875693 sshd[1289]: pam_unix(sshd:session): session closed for user core Jul 2 10:29:08.879536 systemd[1]: sshd@1-10.230.55.230:22-147.75.109.163:55878.service: Deactivated successfully. Jul 2 10:29:08.880635 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 10:29:08.882573 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit. Jul 2 10:29:08.885914 systemd-logind[1191]: Removed session 2. Jul 2 10:29:09.028838 systemd[1]: Started sshd@2-10.230.55.230:22-147.75.109.163:55880.service. Jul 2 10:29:09.927123 sshd[1295]: Accepted publickey for core from 147.75.109.163 port 55880 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:09.929712 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:09.948136 systemd-logind[1191]: New session 3 of user core. Jul 2 10:29:09.950121 systemd[1]: Started session-3.scope. 
Jul 2 10:29:10.419020 coreos-metadata[1174]: Jul 02 10:29:10.418 WARN failed to locate config-drive, using the metadata service API instead Jul 2 10:29:10.478374 coreos-metadata[1174]: Jul 02 10:29:10.478 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 2 10:29:10.528261 coreos-metadata[1174]: Jul 02 10:29:10.528 INFO Fetch successful Jul 2 10:29:10.528862 coreos-metadata[1174]: Jul 02 10:29:10.528 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 10:29:10.563685 sshd[1295]: pam_unix(sshd:session): session closed for user core Jul 2 10:29:10.567852 systemd[1]: sshd@2-10.230.55.230:22-147.75.109.163:55880.service: Deactivated successfully. Jul 2 10:29:10.569002 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 10:29:10.569781 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit. Jul 2 10:29:10.571461 systemd-logind[1191]: Removed session 3. Jul 2 10:29:10.572762 coreos-metadata[1174]: Jul 02 10:29:10.572 INFO Fetch successful Jul 2 10:29:10.575076 unknown[1174]: wrote ssh authorized keys file for user: core Jul 2 10:29:10.614577 update-ssh-keys[1302]: Updated "/home/core/.ssh/authorized_keys" Jul 2 10:29:10.615438 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 10:29:10.616125 systemd[1]: Reached target multi-user.target. Jul 2 10:29:10.618878 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 10:29:10.636897 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 10:29:10.637169 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 10:29:10.641313 systemd[1]: Startup finished in 1.113s (kernel) + 10.025s (initrd) + 14.984s (userspace) = 26.123s. Jul 2 10:29:17.878336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 10:29:17.878610 systemd[1]: Stopped kubelet.service. Jul 2 10:29:17.878682 systemd[1]: kubelet.service: Consumed 1.128s CPU time. 
Jul 2 10:29:17.882680 systemd[1]: Starting kubelet.service... Jul 2 10:29:18.043786 systemd[1]: Started kubelet.service. Jul 2 10:29:18.123923 kubelet[1308]: E0702 10:29:18.123482 1308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:29:18.128499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:29:18.128757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 10:29:20.823628 systemd[1]: Started sshd@3-10.230.55.230:22-147.75.109.163:60506.service. Jul 2 10:29:21.708959 sshd[1315]: Accepted publickey for core from 147.75.109.163 port 60506 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:21.711464 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:21.733644 systemd[1]: Started session-4.scope. Jul 2 10:29:21.734276 systemd-logind[1191]: New session 4 of user core. Jul 2 10:29:22.332517 sshd[1315]: pam_unix(sshd:session): session closed for user core Jul 2 10:29:22.342528 systemd[1]: sshd@3-10.230.55.230:22-147.75.109.163:60506.service: Deactivated successfully. Jul 2 10:29:22.343460 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 10:29:22.353777 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit. Jul 2 10:29:22.355652 systemd-logind[1191]: Removed session 4. Jul 2 10:29:22.481410 systemd[1]: Started sshd@4-10.230.55.230:22-147.75.109.163:60514.service. 
Jul 2 10:29:23.363983 sshd[1321]: Accepted publickey for core from 147.75.109.163 port 60514 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:23.365800 sshd[1321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:23.372931 systemd[1]: Started session-5.scope. Jul 2 10:29:23.374146 systemd-logind[1191]: New session 5 of user core. Jul 2 10:29:23.969161 sshd[1321]: pam_unix(sshd:session): session closed for user core Jul 2 10:29:23.972632 systemd[1]: sshd@4-10.230.55.230:22-147.75.109.163:60514.service: Deactivated successfully. Jul 2 10:29:23.973554 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 10:29:23.974622 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. Jul 2 10:29:23.976841 systemd-logind[1191]: Removed session 5. Jul 2 10:29:24.123299 systemd[1]: Started sshd@5-10.230.55.230:22-147.75.109.163:56202.service. Jul 2 10:29:25.014142 sshd[1327]: Accepted publickey for core from 147.75.109.163 port 56202 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:25.016771 sshd[1327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:25.025268 systemd[1]: Started session-6.scope. Jul 2 10:29:25.026181 systemd-logind[1191]: New session 6 of user core. Jul 2 10:29:25.631497 sshd[1327]: pam_unix(sshd:session): session closed for user core Jul 2 10:29:25.637028 systemd[1]: sshd@5-10.230.55.230:22-147.75.109.163:56202.service: Deactivated successfully. Jul 2 10:29:25.637952 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 10:29:25.640160 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit. Jul 2 10:29:25.641884 systemd-logind[1191]: Removed session 6. Jul 2 10:29:25.776209 systemd[1]: Started sshd@6-10.230.55.230:22-147.75.109.163:56210.service. 
Jul 2 10:29:26.663741 sshd[1333]: Accepted publickey for core from 147.75.109.163 port 56210 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:29:26.666478 sshd[1333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:29:26.673937 systemd[1]: Started session-7.scope. Jul 2 10:29:26.674805 systemd-logind[1191]: New session 7 of user core. Jul 2 10:29:27.161813 sudo[1336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 10:29:27.162811 sudo[1336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 10:29:27.224403 systemd[1]: Starting docker.service... Jul 2 10:29:27.294124 env[1346]: time="2024-07-02T10:29:27.294029861Z" level=info msg="Starting up" Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.296983096Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.297020405Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.297049817Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.297072297Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.299729020Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.299749013Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.299765863Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 10:29:27.303236 env[1346]: time="2024-07-02T10:29:27.299781922Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Jul 2 10:29:27.354914 env[1346]: time="2024-07-02T10:29:27.354857534Z" level=info msg="Loading containers: start." Jul 2 10:29:27.554842 kernel: Initializing XFRM netlink socket Jul 2 10:29:27.614412 env[1346]: time="2024-07-02T10:29:27.614356860Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 10:29:27.727353 systemd-networkd[1028]: docker0: Link UP Jul 2 10:29:27.745487 env[1346]: time="2024-07-02T10:29:27.744435382Z" level=info msg="Loading containers: done." Jul 2 10:29:27.762453 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1817968530-merged.mount: Deactivated successfully. Jul 2 10:29:27.771059 env[1346]: time="2024-07-02T10:29:27.770987346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 10:29:27.771666 env[1346]: time="2024-07-02T10:29:27.771635714Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 10:29:27.771966 env[1346]: time="2024-07-02T10:29:27.771939717Z" level=info msg="Daemon has completed initialization" Jul 2 10:29:27.796822 systemd[1]: Started docker.service. Jul 2 10:29:27.810156 env[1346]: time="2024-07-02T10:29:27.810051602Z" level=info msg="API listen on /run/docker.sock" Jul 2 10:29:28.381446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 10:29:28.382983 systemd[1]: Stopped kubelet.service. Jul 2 10:29:28.389973 systemd[1]: Starting kubelet.service... Jul 2 10:29:28.524806 systemd[1]: Started kubelet.service. 
Jul 2 10:29:28.757227 kubelet[1475]: E0702 10:29:28.756900 1475 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 10:29:28.770239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 10:29:28.770493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 10:29:30.012164 env[1203]: time="2024-07-02T10:29:30.011411066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 10:29:31.032510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621066509.mount: Deactivated successfully. Jul 2 10:29:33.768628 env[1203]: time="2024-07-02T10:29:33.768353071Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:33.772472 env[1203]: time="2024-07-02T10:29:33.772429647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:33.779416 env[1203]: time="2024-07-02T10:29:33.779332682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:33.784741 env[1203]: time="2024-07-02T10:29:33.784703056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:33.785146 env[1203]: time="2024-07-02T10:29:33.785081781Z" level=info 
msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 10:29:33.805366 env[1203]: time="2024-07-02T10:29:33.805314530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 10:29:34.067434 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 10:29:36.904129 env[1203]: time="2024-07-02T10:29:36.904063109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:36.908943 env[1203]: time="2024-07-02T10:29:36.908895280Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:36.911740 env[1203]: time="2024-07-02T10:29:36.911698562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:36.915894 env[1203]: time="2024-07-02T10:29:36.915725919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:29:36.916372 env[1203]: time="2024-07-02T10:29:36.916332451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 10:29:36.930021 env[1203]: time="2024-07-02T10:29:36.929959224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 10:29:39.021652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 2 10:29:39.021949 systemd[1]: Stopped kubelet.service.
Jul 2 10:29:39.024037 systemd[1]: Starting kubelet.service...
Jul 2 10:29:39.064242 env[1203]: time="2024-07-02T10:29:39.063634574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:39.068286 env[1203]: time="2024-07-02T10:29:39.068243909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:39.164598 systemd[1]: Started kubelet.service.
Jul 2 10:29:39.213235 env[1203]: time="2024-07-02T10:29:39.212222149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:39.216221 env[1203]: time="2024-07-02T10:29:39.216062734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:39.217311 env[1203]: time="2024-07-02T10:29:39.217267218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\""
Jul 2 10:29:39.230328 env[1203]: time="2024-07-02T10:29:39.230278148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 10:29:39.243407 kubelet[1509]: E0702 10:29:39.243304 1509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:29:39.245713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:29:39.245933 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:29:39.667736 systemd[1]: Started sshd@7-10.230.55.230:22-218.92.0.112:53355.service.
Jul 2 10:29:40.863192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878842235.mount: Deactivated successfully.
Jul 2 10:29:41.138917 sshd[1522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Jul 2 10:29:41.746534 env[1203]: time="2024-07-02T10:29:41.746415707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:41.749345 env[1203]: time="2024-07-02T10:29:41.749296737Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:41.751082 env[1203]: time="2024-07-02T10:29:41.751044161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:41.756088 env[1203]: time="2024-07-02T10:29:41.753501933Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:41.756088 env[1203]: time="2024-07-02T10:29:41.754956353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\""
Jul 2 10:29:41.767840 env[1203]: time="2024-07-02T10:29:41.767788650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 10:29:42.428602 env[1203]: time="2024-07-02T10:29:42.419532266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:42.428602 env[1203]: time="2024-07-02T10:29:42.425589309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:42.428602 env[1203]: time="2024-07-02T10:29:42.428514236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:42.427622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961789961.mount: Deactivated successfully.
Jul 2 10:29:42.430582 env[1203]: time="2024-07-02T10:29:42.430526130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:42.431190 env[1203]: time="2024-07-02T10:29:42.431145148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 10:29:42.460794 env[1203]: time="2024-07-02T10:29:42.460305960Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 10:29:43.200592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551144483.mount: Deactivated successfully.
Jul 2 10:29:43.573905 sshd[1522]: Failed password for root from 218.92.0.112 port 53355 ssh2
Jul 2 10:29:47.146192 sshd[1522]: Failed password for root from 218.92.0.112 port 53355 ssh2
Jul 2 10:29:47.972446 env[1203]: time="2024-07-02T10:29:47.970623279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:47.980755 env[1203]: time="2024-07-02T10:29:47.980694907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:47.986376 env[1203]: time="2024-07-02T10:29:47.986336908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:47.991854 env[1203]: time="2024-07-02T10:29:47.991817124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:47.993276 env[1203]: time="2024-07-02T10:29:47.993238953Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 10:29:48.008518 env[1203]: time="2024-07-02T10:29:48.007850316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 10:29:48.525238 update_engine[1193]: I0702 10:29:48.523715 1193 update_attempter.cc:509] Updating boot flags...
Jul 2 10:29:48.981252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161681398.mount: Deactivated successfully.
Jul 2 10:29:49.425483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 10:29:49.425752 systemd[1]: Stopped kubelet.service.
Jul 2 10:29:49.428725 systemd[1]: Starting kubelet.service...
Jul 2 10:29:50.138693 env[1203]: time="2024-07-02T10:29:50.138552695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:50.142512 env[1203]: time="2024-07-02T10:29:50.142410657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:50.145771 env[1203]: time="2024-07-02T10:29:50.145687896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:50.149185 env[1203]: time="2024-07-02T10:29:50.148367729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:50.149346 env[1203]: time="2024-07-02T10:29:50.149301439Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 10:29:50.524280 systemd[1]: Started kubelet.service.
Jul 2 10:29:50.599782 kubelet[1565]: E0702 10:29:50.599628 1565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 10:29:50.602013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 10:29:50.602248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 10:29:50.870667 sshd[1522]: Failed password for root from 218.92.0.112 port 53355 ssh2
Jul 2 10:29:52.613666 sshd[1522]: Received disconnect from 218.92.0.112 port 53355:11: [preauth]
Jul 2 10:29:52.613666 sshd[1522]: Disconnected from authenticating user root 218.92.0.112 port 53355 [preauth]
Jul 2 10:29:52.613011 sshd[1522]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Jul 2 10:29:52.615331 systemd[1]: sshd@7-10.230.55.230:22-218.92.0.112:53355.service: Deactivated successfully.
Jul 2 10:29:52.864751 systemd[1]: Started sshd@8-10.230.55.230:22-218.92.0.112:25811.service.
Jul 2 10:29:54.334124 sshd[1630]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root
Jul 2 10:29:54.448182 systemd[1]: Stopped kubelet.service.
Jul 2 10:29:54.451236 systemd[1]: Starting kubelet.service...
Jul 2 10:29:54.502430 systemd[1]: Reloading.
Jul 2 10:29:54.646893 /usr/lib/systemd/system-generators/torcx-generator[1659]: time="2024-07-02T10:29:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 10:29:54.646945 /usr/lib/systemd/system-generators/torcx-generator[1659]: time="2024-07-02T10:29:54Z" level=info msg="torcx already run"
Jul 2 10:29:54.809503 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 10:29:54.809532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 10:29:54.835776 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 10:29:54.970840 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 10:29:54.970957 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 10:29:54.971307 systemd[1]: Stopped kubelet.service.
Jul 2 10:29:54.973856 systemd[1]: Starting kubelet.service...
Jul 2 10:29:55.187584 systemd[1]: Started kubelet.service.
Jul 2 10:29:55.305459 kubelet[1711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:29:55.305459 kubelet[1711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 10:29:55.305459 kubelet[1711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 10:29:55.305459 kubelet[1711]: I0702 10:29:55.304516 1711 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 10:29:55.734973 kubelet[1711]: I0702 10:29:55.734928 1711 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 10:29:55.734973 kubelet[1711]: I0702 10:29:55.734967 1711 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 10:29:55.735311 kubelet[1711]: I0702 10:29:55.735268 1711 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 10:29:55.784307 kubelet[1711]: I0702 10:29:55.780545 1711 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 10:29:55.785576 kubelet[1711]: E0702 10:29:55.785529 1711 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.55.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.798709 kubelet[1711]: I0702 10:29:55.798669 1711 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 10:29:55.799401 kubelet[1711]: I0702 10:29:55.799374 1711 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 10:29:55.799795 kubelet[1711]: I0702 10:29:55.799765 1711 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 10:29:55.801005 kubelet[1711]: I0702 10:29:55.800975 1711 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 10:29:55.801159 kubelet[1711]: I0702 10:29:55.801135 1711 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 10:29:55.803962 kubelet[1711]: I0702 10:29:55.803932 1711 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:29:55.805766 kubelet[1711]: I0702 10:29:55.805740 1711 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 10:29:55.805917 kubelet[1711]: I0702 10:29:55.805893 1711 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 10:29:55.806089 kubelet[1711]: I0702 10:29:55.806065 1711 kubelet.go:309] "Adding apiserver pod source"
Jul 2 10:29:55.806250 kubelet[1711]: I0702 10:29:55.806226 1711 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 10:29:55.812185 kubelet[1711]: I0702 10:29:55.812152 1711 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 10:29:55.826572 kubelet[1711]: W0702 10:29:55.826520 1711 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 10:29:55.828264 kubelet[1711]: I0702 10:29:55.828238 1711 server.go:1232] "Started kubelet"
Jul 2 10:29:55.830996 kubelet[1711]: W0702 10:29:55.828804 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.831347 kubelet[1711]: E0702 10:29:55.831313 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.831577 kubelet[1711]: W0702 10:29:55.831533 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.230.55.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ehxin.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.832511 kubelet[1711]: E0702 10:29:55.832483 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.55.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ehxin.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.838889 kubelet[1711]: I0702 10:29:55.835772 1711 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 10:29:55.838889 kubelet[1711]: I0702 10:29:55.837157 1711 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 10:29:55.843162 kubelet[1711]: I0702 10:29:55.843128 1711 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 10:29:55.843959 kubelet[1711]: I0702 10:29:55.843707 1711 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 10:29:55.844456 kubelet[1711]: E0702 10:29:55.844327 1711 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-ehxin.gb1.brightbox.com.17de5eaae7c79504", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-ehxin.gb1.brightbox.com", UID:"srv-ehxin.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"srv-ehxin.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 828176132, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 828176132, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-ehxin.gb1.brightbox.com"}': 'Post "https://10.230.55.230:6443/api/v1/namespaces/default/events": dial tcp 10.230.55.230:6443: connect: connection refused'(may retry after sleeping)
Jul 2 10:29:55.858924 kubelet[1711]: E0702 10:29:55.858881 1711 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 10:29:55.859672 kubelet[1711]: E0702 10:29:55.859421 1711 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 10:29:55.866799 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 10:29:55.866973 kubelet[1711]: I0702 10:29:55.865271 1711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 10:29:55.869047 kubelet[1711]: I0702 10:29:55.869009 1711 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 10:29:55.871236 kubelet[1711]: I0702 10:29:55.871211 1711 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 10:29:55.871345 kubelet[1711]: I0702 10:29:55.871329 1711 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 10:29:55.873141 kubelet[1711]: W0702 10:29:55.873080 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.230.55.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.873246 kubelet[1711]: E0702 10:29:55.873150 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.55.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.873324 kubelet[1711]: E0702 10:29:55.873298 1711 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.55.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ehxin.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.55.230:6443: connect: connection refused" interval="200ms"
Jul 2 10:29:55.886383 sshd[1630]: Failed password for root from 218.92.0.112 port 25811 ssh2
Jul 2 10:29:55.924994 kubelet[1711]: I0702 10:29:55.924898 1711 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 10:29:55.924994 kubelet[1711]: I0702 10:29:55.924928 1711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 10:29:55.924994 kubelet[1711]: I0702 10:29:55.924960 1711 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 10:29:55.928965 kubelet[1711]: I0702 10:29:55.928557 1711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 10:29:55.931590 kubelet[1711]: I0702 10:29:55.931553 1711 policy_none.go:49] "None policy: Start"
Jul 2 10:29:55.934428 kubelet[1711]: I0702 10:29:55.933310 1711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 10:29:55.934428 kubelet[1711]: I0702 10:29:55.933354 1711 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 10:29:55.934428 kubelet[1711]: I0702 10:29:55.933483 1711 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 10:29:55.934428 kubelet[1711]: E0702 10:29:55.933566 1711 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 10:29:55.942727 kubelet[1711]: I0702 10:29:55.942032 1711 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 10:29:55.942727 kubelet[1711]: I0702 10:29:55.942088 1711 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 10:29:55.942727 kubelet[1711]: W0702 10:29:55.942550 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.230.55.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.942727 kubelet[1711]: E0702 10:29:55.942600 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.55.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:55.954926 systemd[1]: Created slice kubepods.slice.
Jul 2 10:29:55.963740 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 10:29:55.968513 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 10:29:55.975816 kubelet[1711]: I0702 10:29:55.975784 1711 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 10:29:55.977589 kubelet[1711]: I0702 10:29:55.977559 1711 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 10:29:55.978004 kubelet[1711]: I0702 10:29:55.975846 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:55.979797 kubelet[1711]: E0702 10:29:55.979771 1711 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.230.55.230:6443/api/v1/nodes\": dial tcp 10.230.55.230:6443: connect: connection refused" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:55.979963 kubelet[1711]: E0702 10:29:55.979938 1711 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ehxin.gb1.brightbox.com\" not found"
Jul 2 10:29:56.039373 kubelet[1711]: I0702 10:29:56.034742 1711 topology_manager.go:215] "Topology Admit Handler" podUID="dc993581a99d2c6f0fb5926bf20d3735" podNamespace="kube-system" podName="kube-apiserver-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.043159 kubelet[1711]: I0702 10:29:56.042407 1711 topology_manager.go:215] "Topology Admit Handler" podUID="3d2476b21c716ffb6a839602cf172c76" podNamespace="kube-system" podName="kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.046211 kubelet[1711]: I0702 10:29:56.046169 1711 topology_manager.go:215] "Topology Admit Handler" podUID="51ccbfbabdb6a4fb5c0935203ed8213e" podNamespace="kube-system" podName="kube-scheduler-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.064394 systemd[1]: Created slice kubepods-burstable-poddc993581a99d2c6f0fb5926bf20d3735.slice.
Jul 2 10:29:56.076074 kubelet[1711]: E0702 10:29:56.075941 1711 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.55.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ehxin.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.55.230:6443: connect: connection refused" interval="400ms"
Jul 2 10:29:56.095721 systemd[1]: Created slice kubepods-burstable-pod3d2476b21c716ffb6a839602cf172c76.slice.
Jul 2 10:29:56.114370 systemd[1]: Created slice kubepods-burstable-pod51ccbfbabdb6a4fb5c0935203ed8213e.slice.
Jul 2 10:29:56.173522 kubelet[1711]: I0702 10:29:56.172801 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-flexvolume-dir\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.173522 kubelet[1711]: I0702 10:29:56.172876 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51ccbfbabdb6a4fb5c0935203ed8213e-kubeconfig\") pod \"kube-scheduler-srv-ehxin.gb1.brightbox.com\" (UID: \"51ccbfbabdb6a4fb5c0935203ed8213e\") " pod="kube-system/kube-scheduler-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.173522 kubelet[1711]: I0702 10:29:56.172916 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-k8s-certs\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.173522 kubelet[1711]: I0702 10:29:56.172957 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-kubeconfig\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.173522 kubelet[1711]: I0702 10:29:56.173042 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.174058 kubelet[1711]: I0702 10:29:56.173087 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-ca-certs\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.174058 kubelet[1711]: I0702 10:29:56.173128 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-k8s-certs\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.174058 kubelet[1711]: I0702 10:29:56.173175 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.174058 kubelet[1711]: I0702 10:29:56.173239 1711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-ca-certs\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.185386 kubelet[1711]: I0702 10:29:56.185337 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.186137 kubelet[1711]: E0702 10:29:56.186066 1711 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.230.55.230:6443/api/v1/nodes\": dial tcp 10.230.55.230:6443: connect: connection refused" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.368318 sshd[1630]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Jul 2 10:29:56.392886 env[1203]: time="2024-07-02T10:29:56.392420735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ehxin.gb1.brightbox.com,Uid:dc993581a99d2c6f0fb5926bf20d3735,Namespace:kube-system,Attempt:0,}"
Jul 2 10:29:56.414355 env[1203]: time="2024-07-02T10:29:56.414127898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ehxin.gb1.brightbox.com,Uid:3d2476b21c716ffb6a839602cf172c76,Namespace:kube-system,Attempt:0,}"
Jul 2 10:29:56.421142 env[1203]: time="2024-07-02T10:29:56.421101542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ehxin.gb1.brightbox.com,Uid:51ccbfbabdb6a4fb5c0935203ed8213e,Namespace:kube-system,Attempt:0,}"
Jul 2 10:29:56.483058 kubelet[1711]: E0702 10:29:56.482868 1711 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.55.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ehxin.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.55.230:6443: connect: connection refused" interval="800ms"
Jul 2 10:29:56.590498 kubelet[1711]: I0702 10:29:56.590452 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.590888 kubelet[1711]: E0702 10:29:56.590851 1711 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.230.55.230:6443/api/v1/nodes\": dial tcp 10.230.55.230:6443: connect: connection refused" node="srv-ehxin.gb1.brightbox.com"
Jul 2 10:29:56.915854 kubelet[1711]: W0702 10:29:56.915730 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.230.55.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:56.915854 kubelet[1711]: E0702 10:29:56.915820 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.55.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:56.925527 kubelet[1711]: W0702 10:29:56.925418 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:56.925527 kubelet[1711]: E0702 10:29:56.925485 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:57.115927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904254021.mount: Deactivated successfully.
Jul 2 10:29:57.131789 env[1203]: time="2024-07-02T10:29:57.131706696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.139247 kubelet[1711]: W0702 10:29:57.139083 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.230.55.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ehxin.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:57.139247 kubelet[1711]: E0702 10:29:57.139169 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.55.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ehxin.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:57.147127 env[1203]: time="2024-07-02T10:29:57.147061783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.150068 env[1203]: time="2024-07-02T10:29:57.150026660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.152359 env[1203]: time="2024-07-02T10:29:57.152317919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.156712 env[1203]: time="2024-07-02T10:29:57.156666546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.160768 env[1203]: time="2024-07-02T10:29:57.160729230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.168065 env[1203]: time="2024-07-02T10:29:57.164765093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.168065 env[1203]: time="2024-07-02T10:29:57.167052383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.170860 env[1203]: time="2024-07-02T10:29:57.170800931Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.177184 env[1203]: time="2024-07-02T10:29:57.177101626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.179592 env[1203]: time="2024-07-02T10:29:57.179543494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.183098 kubelet[1711]: W0702 10:29:57.182664 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.230.55.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:57.183098 kubelet[1711]: E0702 10:29:57.182754 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.55.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused
Jul 2 10:29:57.189184 env[1203]: time="2024-07-02T10:29:57.188090338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 10:29:57.233593 env[1203]: time="2024-07-02T10:29:57.233454671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 10:29:57.233593 env[1203]: time="2024-07-02T10:29:57.233537811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 10:29:57.234072 env[1203]: time="2024-07-02T10:29:57.233994085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 10:29:57.235035 env[1203]: time="2024-07-02T10:29:57.234958591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa3c203af981d165a8dd7c8a6588015ef5759dffc8bd921be6248af7ac91e0e3 pid=1752 runtime=io.containerd.runc.v2
Jul 2 10:29:57.236701 env[1203]: time="2024-07-02T10:29:57.236379300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:29:57.236701 env[1203]: time="2024-07-02T10:29:57.236441090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:29:57.236701 env[1203]: time="2024-07-02T10:29:57.236467942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:29:57.241339 env[1203]: time="2024-07-02T10:29:57.241238126Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88326b67b6122227eeb85ce129084d00f5f605726e92a00fc80195311148ebe0 pid=1763 runtime=io.containerd.runc.v2 Jul 2 10:29:57.267367 systemd[1]: Started cri-containerd-fa3c203af981d165a8dd7c8a6588015ef5759dffc8bd921be6248af7ac91e0e3.scope. Jul 2 10:29:57.287818 kubelet[1711]: E0702 10:29:57.285132 1711 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.55.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ehxin.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.55.230:6443: connect: connection refused" interval="1.6s" Jul 2 10:29:57.310102 systemd[1]: Started cri-containerd-88326b67b6122227eeb85ce129084d00f5f605726e92a00fc80195311148ebe0.scope. Jul 2 10:29:57.369457 env[1203]: time="2024-07-02T10:29:57.369339226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ehxin.gb1.brightbox.com,Uid:dc993581a99d2c6f0fb5926bf20d3735,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa3c203af981d165a8dd7c8a6588015ef5759dffc8bd921be6248af7ac91e0e3\"" Jul 2 10:29:57.376062 env[1203]: time="2024-07-02T10:29:57.375590600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:29:57.376062 env[1203]: time="2024-07-02T10:29:57.375637778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:29:57.376062 env[1203]: time="2024-07-02T10:29:57.375653933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:29:57.376062 env[1203]: time="2024-07-02T10:29:57.375824384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ce66be09d717c886dcc65b86f4e120ffcdb66b53b0603edbca5032d9ddec4d9 pid=1825 runtime=io.containerd.runc.v2 Jul 2 10:29:57.379093 env[1203]: time="2024-07-02T10:29:57.379044129Z" level=info msg="CreateContainer within sandbox \"fa3c203af981d165a8dd7c8a6588015ef5759dffc8bd921be6248af7ac91e0e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 10:29:57.398711 kubelet[1711]: I0702 10:29:57.398089 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:29:57.398711 kubelet[1711]: E0702 10:29:57.398670 1711 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.230.55.230:6443/api/v1/nodes\": dial tcp 10.230.55.230:6443: connect: connection refused" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:29:57.439682 systemd[1]: Started cri-containerd-6ce66be09d717c886dcc65b86f4e120ffcdb66b53b0603edbca5032d9ddec4d9.scope. 
Jul 2 10:29:57.444850 env[1203]: time="2024-07-02T10:29:57.443759059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ehxin.gb1.brightbox.com,Uid:3d2476b21c716ffb6a839602cf172c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"88326b67b6122227eeb85ce129084d00f5f605726e92a00fc80195311148ebe0\"" Jul 2 10:29:57.449597 env[1203]: time="2024-07-02T10:29:57.449550083Z" level=info msg="CreateContainer within sandbox \"88326b67b6122227eeb85ce129084d00f5f605726e92a00fc80195311148ebe0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 10:29:57.501980 env[1203]: time="2024-07-02T10:29:57.501914568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ehxin.gb1.brightbox.com,Uid:51ccbfbabdb6a4fb5c0935203ed8213e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ce66be09d717c886dcc65b86f4e120ffcdb66b53b0603edbca5032d9ddec4d9\"" Jul 2 10:29:57.502706 env[1203]: time="2024-07-02T10:29:57.502654660Z" level=info msg="CreateContainer within sandbox \"fa3c203af981d165a8dd7c8a6588015ef5759dffc8bd921be6248af7ac91e0e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12dba147589dd9c93b0baa99e3d4c11aa566e3731664671aeb38cc36cf64f3eb\"" Jul 2 10:29:57.504815 env[1203]: time="2024-07-02T10:29:57.504780434Z" level=info msg="StartContainer for \"12dba147589dd9c93b0baa99e3d4c11aa566e3731664671aeb38cc36cf64f3eb\"" Jul 2 10:29:57.507545 env[1203]: time="2024-07-02T10:29:57.507509865Z" level=info msg="CreateContainer within sandbox \"6ce66be09d717c886dcc65b86f4e120ffcdb66b53b0603edbca5032d9ddec4d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 10:29:57.530488 systemd[1]: Started cri-containerd-12dba147589dd9c93b0baa99e3d4c11aa566e3731664671aeb38cc36cf64f3eb.scope. 
Jul 2 10:29:57.534761 env[1203]: time="2024-07-02T10:29:57.534684590Z" level=info msg="CreateContainer within sandbox \"88326b67b6122227eeb85ce129084d00f5f605726e92a00fc80195311148ebe0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75d959e0c6330bec49a6f736f8106e1f86cfa50b72a2f1f67cee6a3238cdea91\"" Jul 2 10:29:57.537050 env[1203]: time="2024-07-02T10:29:57.536730527Z" level=info msg="StartContainer for \"75d959e0c6330bec49a6f736f8106e1f86cfa50b72a2f1f67cee6a3238cdea91\"" Jul 2 10:29:57.562288 env[1203]: time="2024-07-02T10:29:57.562225552Z" level=info msg="CreateContainer within sandbox \"6ce66be09d717c886dcc65b86f4e120ffcdb66b53b0603edbca5032d9ddec4d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83b9d6a6f462b81b2d36d8805c792067ba924c7453fe71ba2fac1c584b129446\"" Jul 2 10:29:57.563092 env[1203]: time="2024-07-02T10:29:57.563058254Z" level=info msg="StartContainer for \"83b9d6a6f462b81b2d36d8805c792067ba924c7453fe71ba2fac1c584b129446\"" Jul 2 10:29:57.576542 systemd[1]: Started cri-containerd-75d959e0c6330bec49a6f736f8106e1f86cfa50b72a2f1f67cee6a3238cdea91.scope. Jul 2 10:29:57.608108 systemd[1]: Started cri-containerd-83b9d6a6f462b81b2d36d8805c792067ba924c7453fe71ba2fac1c584b129446.scope. 
Jul 2 10:29:57.705378 env[1203]: time="2024-07-02T10:29:57.705226897Z" level=info msg="StartContainer for \"12dba147589dd9c93b0baa99e3d4c11aa566e3731664671aeb38cc36cf64f3eb\" returns successfully" Jul 2 10:29:57.709127 env[1203]: time="2024-07-02T10:29:57.709079342Z" level=info msg="StartContainer for \"75d959e0c6330bec49a6f736f8106e1f86cfa50b72a2f1f67cee6a3238cdea91\" returns successfully" Jul 2 10:29:57.777602 env[1203]: time="2024-07-02T10:29:57.760418299Z" level=info msg="StartContainer for \"83b9d6a6f462b81b2d36d8805c792067ba924c7453fe71ba2fac1c584b129446\" returns successfully" Jul 2 10:29:57.958745 kubelet[1711]: E0702 10:29:57.958635 1711 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.55.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.55.230:6443: connect: connection refused Jul 2 10:29:57.961739 systemd[1]: Started sshd@9-10.230.55.230:22-218.92.0.56:34488.service. 
Jul 2 10:29:58.575503 kubelet[1711]: W0702 10:29:58.575452 1711 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused Jul 2 10:29:58.575751 kubelet[1711]: E0702 10:29:58.575728 1711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.55.230:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.55.230:6443: connect: connection refused Jul 2 10:29:58.863546 sshd[1630]: Failed password for root from 218.92.0.112 port 25811 ssh2 Jul 2 10:29:58.885808 kubelet[1711]: E0702 10:29:58.885766 1711 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.55.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ehxin.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.55.230:6443: connect: connection refused" interval="3.2s" Jul 2 10:29:59.003189 kubelet[1711]: I0702 10:29:59.003152 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:29:59.004315 kubelet[1711]: E0702 10:29:59.004278 1711 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.230.55.230:6443/api/v1/nodes\": dial tcp 10.230.55.230:6443: connect: connection refused" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:01.707780 kubelet[1711]: E0702 10:30:01.707618 1711 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-ehxin.gb1.brightbox.com.17de5eaae7c79504", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-ehxin.gb1.brightbox.com", UID:"srv-ehxin.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"srv-ehxin.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 828176132, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 828176132, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-ehxin.gb1.brightbox.com"}': 'namespaces "default" not found' (will not retry!) Jul 2 10:30:01.768721 kubelet[1711]: E0702 10:30:01.766299 1711 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"srv-ehxin.gb1.brightbox.com.17de5eaae9a41039", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"srv-ehxin.gb1.brightbox.com", UID:"srv-ehxin.gb1.brightbox.com", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"srv-ehxin.gb1.brightbox.com"}, FirstTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 859402809, time.Local), 
LastTimestamp:time.Date(2024, time.July, 2, 10, 29, 55, 859402809, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"srv-ehxin.gb1.brightbox.com"}': 'namespaces "default" not found' (will not retry!) Jul 2 10:30:01.820092 kubelet[1711]: I0702 10:30:01.818342 1711 apiserver.go:52] "Watching apiserver" Jul 2 10:30:01.872078 kubelet[1711]: I0702 10:30:01.872016 1711 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 10:30:02.036427 kubelet[1711]: E0702 10:30:02.036228 1711 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "srv-ehxin.gb1.brightbox.com" not found Jul 2 10:30:02.096747 kubelet[1711]: E0702 10:30:02.096697 1711 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ehxin.gb1.brightbox.com\" not found" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:02.209217 kubelet[1711]: I0702 10:30:02.209172 1711 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:02.217402 kubelet[1711]: I0702 10:30:02.217353 1711 kubelet_node_status.go:73] "Successfully registered node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:02.245354 sshd[1630]: Failed password for root from 218.92.0.112 port 25811 ssh2 Jul 2 10:30:03.113984 sshd[1978]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:03.319087 kubelet[1711]: W0702 10:30:03.319006 1711 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 10:30:04.025123 sshd[1630]: Received disconnect from 218.92.0.112 port 25811:11: [preauth] Jul 2 
10:30:04.025123 sshd[1630]: Disconnected from authenticating user root 218.92.0.112 port 25811 [preauth] Jul 2 10:30:04.024679 systemd[1]: sshd@8-10.230.55.230:22-218.92.0.112:25811.service: Deactivated successfully. Jul 2 10:30:04.023145 sshd[1630]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Jul 2 10:30:04.255339 systemd[1]: Started sshd@10-10.230.55.230:22-218.92.0.112:39436.service. Jul 2 10:30:04.905376 systemd[1]: Reloading. Jul 2 10:30:05.036445 /usr/lib/systemd/system-generators/torcx-generator[2015]: time="2024-07-02T10:30:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 10:30:05.036997 /usr/lib/systemd/system-generators/torcx-generator[2015]: time="2024-07-02T10:30:05Z" level=info msg="torcx already run" Jul 2 10:30:05.166064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 10:30:05.166091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 10:30:05.194566 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 10:30:05.353058 systemd[1]: Stopping kubelet.service... Jul 2 10:30:05.355765 kubelet[1711]: I0702 10:30:05.354638 1711 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 10:30:05.373229 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 10:30:05.373518 systemd[1]: Stopped kubelet.service. 
Jul 2 10:30:05.373593 systemd[1]: kubelet.service: Consumed 1.065s CPU time. Jul 2 10:30:05.376130 systemd[1]: Starting kubelet.service... Jul 2 10:30:05.568676 sshd[1978]: Failed password for root from 218.92.0.56 port 34488 ssh2 Jul 2 10:30:05.749251 sshd[1990]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Jul 2 10:30:07.014736 systemd[1]: Started kubelet.service. Jul 2 10:30:07.165277 kubelet[2068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:30:07.165277 kubelet[2068]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 10:30:07.165277 kubelet[2068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 10:30:07.166569 kubelet[2068]: I0702 10:30:07.166481 2068 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 10:30:07.177552 kubelet[2068]: I0702 10:30:07.177492 2068 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 10:30:07.177552 kubelet[2068]: I0702 10:30:07.177537 2068 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 10:30:07.179462 kubelet[2068]: I0702 10:30:07.179363 2068 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 10:30:07.185038 kubelet[2068]: I0702 10:30:07.182630 2068 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 10:30:07.189412 kubelet[2068]: I0702 10:30:07.189158 2068 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 10:30:07.199964 sudo[2081]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 10:30:07.200400 sudo[2081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 10:30:07.206935 kubelet[2068]: I0702 10:30:07.206812 2068 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.207431 2068 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.207685 2068 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":100
00000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.207719 2068 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.207734 2068 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.207826 2068 state_mem.go:36] "Initialized new in-memory state store" Jul 2 10:30:07.208315 kubelet[2068]: I0702 10:30:07.208042 2068 kubelet.go:393] "Attempting to sync node with API server" Jul 2 10:30:07.223012 kubelet[2068]: I0702 10:30:07.220646 2068 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 10:30:07.223012 kubelet[2068]: I0702 10:30:07.220740 2068 kubelet.go:309] "Adding apiserver pod source" Jul 2 10:30:07.223012 kubelet[2068]: I0702 10:30:07.221829 2068 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 10:30:07.223012 kubelet[2068]: I0702 10:30:07.222405 2068 apiserver.go:52] "Watching apiserver" Jul 2 10:30:07.234309 kubelet[2068]: I0702 10:30:07.234235 2068 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 10:30:07.242524 kubelet[2068]: I0702 10:30:07.242481 2068 server.go:1232] "Started kubelet" Jul 2 10:30:07.250577 kubelet[2068]: E0702 10:30:07.250217 2068 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 10:30:07.250577 kubelet[2068]: E0702 10:30:07.250278 2068 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 10:30:07.253051 kubelet[2068]: I0702 10:30:07.252286 2068 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 10:30:07.253492 kubelet[2068]: I0702 10:30:07.253356 2068 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 10:30:07.254591 kubelet[2068]: I0702 10:30:07.254561 2068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 10:30:07.255932 kubelet[2068]: I0702 10:30:07.255903 2068 server.go:462] "Adding debug handlers to kubelet server" Jul 2 10:30:07.264584 kubelet[2068]: I0702 10:30:07.264542 2068 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 10:30:07.264910 kubelet[2068]: I0702 10:30:07.264882 2068 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 10:30:07.271676 kubelet[2068]: I0702 10:30:07.270077 2068 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 10:30:07.271676 kubelet[2068]: I0702 10:30:07.270376 2068 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 10:30:07.349402 kubelet[2068]: I0702 10:30:07.347844 2068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 10:30:07.350744 kubelet[2068]: I0702 10:30:07.350715 2068 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 10:30:07.350830 kubelet[2068]: I0702 10:30:07.350747 2068 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 10:30:07.350830 kubelet[2068]: I0702 10:30:07.350777 2068 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 10:30:07.350965 kubelet[2068]: E0702 10:30:07.350859 2068 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 10:30:07.436697 kubelet[2068]: I0702 10:30:07.436657 2068 kubelet_node_status.go:70] "Attempting to register node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.451626 kubelet[2068]: E0702 10:30:07.451580 2068 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 10:30:07.452435 kubelet[2068]: I0702 10:30:07.452408 2068 kubelet_node_status.go:108] "Node was previously registered" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.452536 kubelet[2068]: I0702 10:30:07.452513 2068 kubelet_node_status.go:73] "Successfully registered node" node="srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.477821 kubelet[2068]: I0702 10:30:07.477764 2068 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 10:30:07.478110 kubelet[2068]: I0702 10:30:07.478087 2068 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 10:30:07.478277 kubelet[2068]: I0702 10:30:07.478254 2068 state_mem.go:36] "Initialized new in-memory state store" Jul 2 10:30:07.478780 kubelet[2068]: I0702 10:30:07.478758 2068 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 10:30:07.479058 kubelet[2068]: I0702 10:30:07.479024 2068 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 10:30:07.479209 kubelet[2068]: I0702 10:30:07.479172 2068 policy_none.go:49] "None policy: Start" Jul 2 10:30:07.480368 kubelet[2068]: I0702 10:30:07.480319 2068 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 10:30:07.480598 
kubelet[2068]: I0702 10:30:07.480575 2068 state_mem.go:35] "Initializing new in-memory state store" Jul 2 10:30:07.481048 kubelet[2068]: I0702 10:30:07.481024 2068 state_mem.go:75] "Updated machine memory state" Jul 2 10:30:07.508591 kubelet[2068]: I0702 10:30:07.508556 2068 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 10:30:07.510614 kubelet[2068]: I0702 10:30:07.510589 2068 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 10:30:07.652371 kubelet[2068]: I0702 10:30:07.651821 2068 topology_manager.go:215] "Topology Admit Handler" podUID="dc993581a99d2c6f0fb5926bf20d3735" podNamespace="kube-system" podName="kube-apiserver-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.652371 kubelet[2068]: I0702 10:30:07.652078 2068 topology_manager.go:215] "Topology Admit Handler" podUID="3d2476b21c716ffb6a839602cf172c76" podNamespace="kube-system" podName="kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.652371 kubelet[2068]: I0702 10:30:07.652241 2068 topology_manager.go:215] "Topology Admit Handler" podUID="51ccbfbabdb6a4fb5c0935203ed8213e" podNamespace="kube-system" podName="kube-scheduler-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.667701 kubelet[2068]: W0702 10:30:07.667660 2068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 10:30:07.668115 kubelet[2068]: W0702 10:30:07.667745 2068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 10:30:07.673218 kubelet[2068]: I0702 10:30:07.671027 2068 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 10:30:07.690778 kubelet[2068]: I0702 10:30:07.690733 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51ccbfbabdb6a4fb5c0935203ed8213e-kubeconfig\") pod \"kube-scheduler-srv-ehxin.gb1.brightbox.com\" (UID: \"51ccbfbabdb6a4fb5c0935203ed8213e\") " pod="kube-system/kube-scheduler-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691067 kubelet[2068]: I0702 10:30:07.691043 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-k8s-certs\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691256 kubelet[2068]: I0702 10:30:07.691232 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691424 kubelet[2068]: I0702 10:30:07.691395 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-flexvolume-dir\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691591 kubelet[2068]: I0702 10:30:07.691563 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-kubeconfig\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " 
pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691749 kubelet[2068]: I0702 10:30:07.691727 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.691901 kubelet[2068]: I0702 10:30:07.691878 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc993581a99d2c6f0fb5926bf20d3735-ca-certs\") pod \"kube-apiserver-srv-ehxin.gb1.brightbox.com\" (UID: \"dc993581a99d2c6f0fb5926bf20d3735\") " pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.692125 kubelet[2068]: I0702 10:30:07.692103 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-ca-certs\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.692304 kubelet[2068]: I0702 10:30:07.692282 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d2476b21c716ffb6a839602cf172c76-k8s-certs\") pod \"kube-controller-manager-srv-ehxin.gb1.brightbox.com\" (UID: \"3d2476b21c716ffb6a839602cf172c76\") " pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" Jul 2 10:30:07.751952 kubelet[2068]: I0702 10:30:07.751884 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-srv-ehxin.gb1.brightbox.com" podStartSLOduration=4.745127772 podCreationTimestamp="2024-07-02 10:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:07.716916629 +0000 UTC m=+0.686581938" watchObservedRunningTime="2024-07-02 10:30:07.745127772 +0000 UTC m=+0.714793077" Jul 2 10:30:07.767508 kubelet[2068]: I0702 10:30:07.767435 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ehxin.gb1.brightbox.com" podStartSLOduration=0.767388287 podCreationTimestamp="2024-07-02 10:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:07.76631712 +0000 UTC m=+0.735982426" watchObservedRunningTime="2024-07-02 10:30:07.767388287 +0000 UTC m=+0.737053591" Jul 2 10:30:07.767760 kubelet[2068]: I0702 10:30:07.767576 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ehxin.gb1.brightbox.com" podStartSLOduration=0.767551158 podCreationTimestamp="2024-07-02 10:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:07.752124593 +0000 UTC m=+0.721789884" watchObservedRunningTime="2024-07-02 10:30:07.767551158 +0000 UTC m=+0.737216456" Jul 2 10:30:07.817471 sshd[1990]: Failed password for root from 218.92.0.112 port 39436 ssh2 Jul 2 10:30:08.470637 sshd[1978]: Failed password for root from 218.92.0.56 port 34488 ssh2 Jul 2 10:30:08.520342 sudo[2081]: pam_unix(sudo:session): session closed for user root Jul 2 10:30:11.110058 sshd[1978]: Failed password for root from 218.92.0.56 port 34488 ssh2 Jul 2 10:30:11.217847 sudo[1336]: pam_unix(sudo:session): session closed for user root Jul 2 10:30:11.368543 sshd[1333]: 
pam_unix(sshd:session): session closed for user core Jul 2 10:30:11.376087 systemd[1]: sshd@6-10.230.55.230:22-147.75.109.163:56210.service: Deactivated successfully. Jul 2 10:30:11.380963 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 10:30:11.381276 systemd[1]: session-7.scope: Consumed 6.153s CPU time. Jul 2 10:30:11.382600 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit. Jul 2 10:30:11.386957 systemd-logind[1191]: Removed session 7. Jul 2 10:30:11.842002 sshd[1990]: Failed password for root from 218.92.0.112 port 39436 ssh2 Jul 2 10:30:13.286144 sshd[1978]: Received disconnect from 218.92.0.56 port 34488:11: [preauth] Jul 2 10:30:13.286144 sshd[1978]: Disconnected from authenticating user root 218.92.0.56 port 34488 [preauth] Jul 2 10:30:13.285400 sshd[1978]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:13.287681 systemd[1]: sshd@9-10.230.55.230:22-218.92.0.56:34488.service: Deactivated successfully. Jul 2 10:30:13.533963 systemd[1]: Started sshd@11-10.230.55.230:22-218.92.0.56:32213.service. Jul 2 10:30:15.215010 sshd[1990]: Failed password for root from 218.92.0.112 port 39436 ssh2 Jul 2 10:30:15.920074 sshd[2144]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:17.230001 sshd[1990]: Received disconnect from 218.92.0.112 port 39436:11: [preauth] Jul 2 10:30:17.230001 sshd[1990]: Disconnected from authenticating user root 218.92.0.112 port 39436 [preauth] Jul 2 10:30:17.230570 sshd[1990]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.112 user=root Jul 2 10:30:17.232481 systemd[1]: sshd@10-10.230.55.230:22-218.92.0.112:39436.service: Deactivated successfully. 
Jul 2 10:30:18.353810 sshd[2144]: Failed password for root from 218.92.0.56 port 32213 ssh2 Jul 2 10:30:19.675018 kubelet[2068]: I0702 10:30:19.674644 2068 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 10:30:19.676757 env[1203]: time="2024-07-02T10:30:19.676686975Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 10:30:19.681256 kubelet[2068]: I0702 10:30:19.678272 2068 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 10:30:20.340478 sshd[2144]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Jul 2 10:30:20.722426 kubelet[2068]: I0702 10:30:20.722360 2068 topology_manager.go:215] "Topology Admit Handler" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" podNamespace="kube-system" podName="cilium-qwc5c" Jul 2 10:30:20.724241 kubelet[2068]: I0702 10:30:20.724210 2068 topology_manager.go:215] "Topology Admit Handler" podUID="e4ee3048-ce5a-42f7-8333-5edbed4245e9" podNamespace="kube-system" podName="kube-proxy-j28w8" Jul 2 10:30:20.733331 systemd[1]: Created slice kubepods-burstable-pod45dc5c0d_44eb_40ff_bde9_0241ecb69730.slice. 
Jul 2 10:30:20.742971 kubelet[2068]: I0702 10:30:20.742907 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-run\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.742971 kubelet[2068]: I0702 10:30:20.742962 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-kernel\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743176 kubelet[2068]: I0702 10:30:20.742998 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5szd9\" (UniqueName: \"kubernetes.io/projected/e4ee3048-ce5a-42f7-8333-5edbed4245e9-kube-api-access-5szd9\") pod \"kube-proxy-j28w8\" (UID: \"e4ee3048-ce5a-42f7-8333-5edbed4245e9\") " pod="kube-system/kube-proxy-j28w8" Jul 2 10:30:20.743176 kubelet[2068]: I0702 10:30:20.743035 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-bpf-maps\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743176 kubelet[2068]: I0702 10:30:20.743067 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-net\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743176 kubelet[2068]: I0702 10:30:20.743097 2068 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4ee3048-ce5a-42f7-8333-5edbed4245e9-xtables-lock\") pod \"kube-proxy-j28w8\" (UID: \"e4ee3048-ce5a-42f7-8333-5edbed4245e9\") " pod="kube-system/kube-proxy-j28w8" Jul 2 10:30:20.743176 kubelet[2068]: I0702 10:30:20.743147 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-config-path\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743188 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-cgroup\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743241 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4ee3048-ce5a-42f7-8333-5edbed4245e9-kube-proxy\") pod \"kube-proxy-j28w8\" (UID: \"e4ee3048-ce5a-42f7-8333-5edbed4245e9\") " pod="kube-system/kube-proxy-j28w8" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743275 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hostproc\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743308 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hubble-tls\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743346 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4nl\" (UniqueName: \"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-kube-api-access-6h4nl\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743459 kubelet[2068]: I0702 10:30:20.743387 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4ee3048-ce5a-42f7-8333-5edbed4245e9-lib-modules\") pod \"kube-proxy-j28w8\" (UID: \"e4ee3048-ce5a-42f7-8333-5edbed4245e9\") " pod="kube-system/kube-proxy-j28w8" Jul 2 10:30:20.743760 kubelet[2068]: I0702 10:30:20.743418 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-xtables-lock\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743760 kubelet[2068]: I0702 10:30:20.743449 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45dc5c0d-44eb-40ff-bde9-0241ecb69730-clustermesh-secrets\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743760 kubelet[2068]: I0702 10:30:20.743496 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cni-path\") pod \"cilium-qwc5c\" (UID: 
\"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743760 kubelet[2068]: I0702 10:30:20.743534 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-etc-cni-netd\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.743760 kubelet[2068]: I0702 10:30:20.743569 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-lib-modules\") pod \"cilium-qwc5c\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") " pod="kube-system/cilium-qwc5c" Jul 2 10:30:20.746303 systemd[1]: Created slice kubepods-besteffort-pode4ee3048_ce5a_42f7_8333_5edbed4245e9.slice. Jul 2 10:30:20.841146 kubelet[2068]: I0702 10:30:20.841093 2068 topology_manager.go:215] "Topology Admit Handler" podUID="61beabe7-eaa6-467c-b430-59ed80f1e6e0" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-djdj7" Jul 2 10:30:20.844340 kubelet[2068]: I0702 10:30:20.844281 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgvv\" (UniqueName: \"kubernetes.io/projected/61beabe7-eaa6-467c-b430-59ed80f1e6e0-kube-api-access-ccgvv\") pod \"cilium-operator-6bc8ccdb58-djdj7\" (UID: \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\") " pod="kube-system/cilium-operator-6bc8ccdb58-djdj7" Jul 2 10:30:20.845602 kubelet[2068]: I0702 10:30:20.845577 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61beabe7-eaa6-467c-b430-59ed80f1e6e0-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-djdj7\" (UID: \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\") " 
pod="kube-system/cilium-operator-6bc8ccdb58-djdj7" Jul 2 10:30:20.852427 systemd[1]: Created slice kubepods-besteffort-pod61beabe7_eaa6_467c_b430_59ed80f1e6e0.slice. Jul 2 10:30:21.045046 env[1203]: time="2024-07-02T10:30:21.043609191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwc5c,Uid:45dc5c0d-44eb-40ff-bde9-0241ecb69730,Namespace:kube-system,Attempt:0,}" Jul 2 10:30:21.062470 env[1203]: time="2024-07-02T10:30:21.062033772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j28w8,Uid:e4ee3048-ce5a-42f7-8333-5edbed4245e9,Namespace:kube-system,Attempt:0,}" Jul 2 10:30:21.114917 env[1203]: time="2024-07-02T10:30:21.114728089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:30:21.115270 env[1203]: time="2024-07-02T10:30:21.115181548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:30:21.115505 env[1203]: time="2024-07-02T10:30:21.115444419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:30:21.116037 env[1203]: time="2024-07-02T10:30:21.115981249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9 pid=2163 runtime=io.containerd.runc.v2 Jul 2 10:30:21.143379 systemd[1]: Started cri-containerd-545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9.scope. Jul 2 10:30:21.159441 env[1203]: time="2024-07-02T10:30:21.159327555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:30:21.159848 env[1203]: time="2024-07-02T10:30:21.159801761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:30:21.160056 env[1203]: time="2024-07-02T10:30:21.160006828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:30:21.161913 env[1203]: time="2024-07-02T10:30:21.161868246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-djdj7,Uid:61beabe7-eaa6-467c-b430-59ed80f1e6e0,Namespace:kube-system,Attempt:0,}" Jul 2 10:30:21.162541 env[1203]: time="2024-07-02T10:30:21.161821687Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbfeea7351961ba7865b9e733171ca1ffe65699ce6885d6776eab050d7fd3605 pid=2187 runtime=io.containerd.runc.v2 Jul 2 10:30:21.204394 systemd[1]: Started cri-containerd-dbfeea7351961ba7865b9e733171ca1ffe65699ce6885d6776eab050d7fd3605.scope. Jul 2 10:30:21.212942 env[1203]: time="2024-07-02T10:30:21.212889692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qwc5c,Uid:45dc5c0d-44eb-40ff-bde9-0241ecb69730,Namespace:kube-system,Attempt:0,} returns sandbox id \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\"" Jul 2 10:30:21.225468 env[1203]: time="2024-07-02T10:30:21.225415766Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 10:30:21.240000 env[1203]: time="2024-07-02T10:30:21.239846393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:30:21.240312 env[1203]: time="2024-07-02T10:30:21.240257719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:30:21.240592 env[1203]: time="2024-07-02T10:30:21.240530702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:30:21.241632 env[1203]: time="2024-07-02T10:30:21.241579750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd pid=2241 runtime=io.containerd.runc.v2 Jul 2 10:30:21.269723 systemd[1]: Started cri-containerd-c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd.scope. Jul 2 10:30:21.287482 env[1203]: time="2024-07-02T10:30:21.287414775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j28w8,Uid:e4ee3048-ce5a-42f7-8333-5edbed4245e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbfeea7351961ba7865b9e733171ca1ffe65699ce6885d6776eab050d7fd3605\"" Jul 2 10:30:21.300761 env[1203]: time="2024-07-02T10:30:21.300034975Z" level=info msg="CreateContainer within sandbox \"dbfeea7351961ba7865b9e733171ca1ffe65699ce6885d6776eab050d7fd3605\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 10:30:21.372990 env[1203]: time="2024-07-02T10:30:21.372935762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-djdj7,Uid:61beabe7-eaa6-467c-b430-59ed80f1e6e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\"" Jul 2 10:30:21.388853 env[1203]: time="2024-07-02T10:30:21.388773527Z" level=info msg="CreateContainer within sandbox \"dbfeea7351961ba7865b9e733171ca1ffe65699ce6885d6776eab050d7fd3605\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2620aa4f8dc9b592d30b37bb785727f0c783116a22f3496bab3c87a5d467472c\"" Jul 2 10:30:21.395041 env[1203]: time="2024-07-02T10:30:21.390011170Z" level=info msg="StartContainer for \"2620aa4f8dc9b592d30b37bb785727f0c783116a22f3496bab3c87a5d467472c\"" Jul 2 10:30:21.445972 systemd[1]: Started cri-containerd-2620aa4f8dc9b592d30b37bb785727f0c783116a22f3496bab3c87a5d467472c.scope. 
Jul 2 10:30:21.484760 env[1203]: time="2024-07-02T10:30:21.484705443Z" level=info msg="StartContainer for \"2620aa4f8dc9b592d30b37bb785727f0c783116a22f3496bab3c87a5d467472c\" returns successfully" Jul 2 10:30:22.463863 sshd[2144]: Failed password for root from 218.92.0.56 port 32213 ssh2 Jul 2 10:30:24.762909 kubelet[2068]: I0702 10:30:24.762841 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j28w8" podStartSLOduration=4.76275881 podCreationTimestamp="2024-07-02 10:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:21.546707049 +0000 UTC m=+14.516372368" watchObservedRunningTime="2024-07-02 10:30:24.76275881 +0000 UTC m=+17.732424133" Jul 2 10:30:25.837697 sshd[2144]: Failed password for root from 218.92.0.56 port 32213 ssh2 Jul 2 10:30:26.193066 sshd[2144]: Received disconnect from 218.92.0.56 port 32213:11: [preauth] Jul 2 10:30:26.193066 sshd[2144]: Disconnected from authenticating user root 218.92.0.56 port 32213 [preauth] Jul 2 10:30:26.193635 sshd[2144]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:26.195276 systemd[1]: sshd@11-10.230.55.230:22-218.92.0.56:32213.service: Deactivated successfully. Jul 2 10:30:26.425019 systemd[1]: Started sshd@12-10.230.55.230:22-218.92.0.56:10068.service. Jul 2 10:30:30.232723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115282318.mount: Deactivated successfully. 
Jul 2 10:30:31.017183 sshd[2434]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:33.315700 sshd[2434]: Failed password for root from 218.92.0.56 port 10068 ssh2 Jul 2 10:30:34.922792 env[1203]: time="2024-07-02T10:30:34.922729449Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:34.926864 env[1203]: time="2024-07-02T10:30:34.926810227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:34.930506 env[1203]: time="2024-07-02T10:30:34.930335450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:34.932133 env[1203]: time="2024-07-02T10:30:34.931487151Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 10:30:34.934411 env[1203]: time="2024-07-02T10:30:34.934229287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 10:30:34.936039 env[1203]: time="2024-07-02T10:30:34.935873598Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:30:34.957895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938983613.mount: Deactivated 
successfully. Jul 2 10:30:34.968159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327428279.mount: Deactivated successfully. Jul 2 10:30:34.974646 env[1203]: time="2024-07-02T10:30:34.974309930Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\"" Jul 2 10:30:34.979123 env[1203]: time="2024-07-02T10:30:34.978727294Z" level=info msg="StartContainer for \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\"" Jul 2 10:30:35.042593 systemd[1]: Started cri-containerd-cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f.scope. Jul 2 10:30:35.190286 systemd[1]: cri-containerd-cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f.scope: Deactivated successfully. Jul 2 10:30:35.204140 env[1203]: time="2024-07-02T10:30:35.199614945Z" level=info msg="StartContainer for \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\" returns successfully" Jul 2 10:30:35.403619 env[1203]: time="2024-07-02T10:30:35.403495772Z" level=info msg="shim disconnected" id=cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f Jul 2 10:30:35.404479 env[1203]: time="2024-07-02T10:30:35.404353960Z" level=warning msg="cleaning up after shim disconnected" id=cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f namespace=k8s.io Jul 2 10:30:35.404661 env[1203]: time="2024-07-02T10:30:35.404631832Z" level=info msg="cleaning up dead shim" Jul 2 10:30:35.470154 env[1203]: time="2024-07-02T10:30:35.469495732Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:30:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2488 runtime=io.containerd.runc.v2\n" Jul 2 10:30:35.709316 env[1203]: time="2024-07-02T10:30:35.709242034Z" level=info msg="CreateContainer within sandbox 
\"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 10:30:35.777236 env[1203]: time="2024-07-02T10:30:35.776786935Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\"" Jul 2 10:30:35.779125 env[1203]: time="2024-07-02T10:30:35.777744358Z" level=info msg="StartContainer for \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\"" Jul 2 10:30:35.811630 systemd[1]: Started cri-containerd-34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906.scope. Jul 2 10:30:35.867923 env[1203]: time="2024-07-02T10:30:35.867861881Z" level=info msg="StartContainer for \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\" returns successfully" Jul 2 10:30:35.886065 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 10:30:35.886475 systemd[1]: Stopped systemd-sysctl.service. Jul 2 10:30:35.887582 systemd[1]: Stopping systemd-sysctl.service... Jul 2 10:30:35.892390 systemd[1]: Starting systemd-sysctl.service... Jul 2 10:30:35.901211 systemd[1]: cri-containerd-34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906.scope: Deactivated successfully. Jul 2 10:30:35.956371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f-rootfs.mount: Deactivated successfully. Jul 2 10:30:35.971408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906-rootfs.mount: Deactivated successfully. Jul 2 10:30:35.981778 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 10:30:36.004682 env[1203]: time="2024-07-02T10:30:36.004459364Z" level=info msg="shim disconnected" id=34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906 Jul 2 10:30:36.004682 env[1203]: time="2024-07-02T10:30:36.004514322Z" level=warning msg="cleaning up after shim disconnected" id=34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906 namespace=k8s.io Jul 2 10:30:36.004682 env[1203]: time="2024-07-02T10:30:36.004530373Z" level=info msg="cleaning up dead shim" Jul 2 10:30:36.030883 env[1203]: time="2024-07-02T10:30:36.030297699Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:30:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2555 runtime=io.containerd.runc.v2\n" Jul 2 10:30:36.717231 env[1203]: time="2024-07-02T10:30:36.716376259Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 10:30:36.820921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716865413.mount: Deactivated successfully. Jul 2 10:30:36.883615 env[1203]: time="2024-07-02T10:30:36.883476237Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\"" Jul 2 10:30:36.888698 env[1203]: time="2024-07-02T10:30:36.888656777Z" level=info msg="StartContainer for \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\"" Jul 2 10:30:36.946073 systemd[1]: Started cri-containerd-1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad.scope. Jul 2 10:30:36.955500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509065128.mount: Deactivated successfully. 
Jul 2 10:30:37.063669 systemd[1]: cri-containerd-1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad.scope: Deactivated successfully. Jul 2 10:30:37.088378 env[1203]: time="2024-07-02T10:30:37.088295384Z" level=info msg="StartContainer for \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\" returns successfully" Jul 2 10:30:37.200980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad-rootfs.mount: Deactivated successfully. Jul 2 10:30:37.257972 env[1203]: time="2024-07-02T10:30:37.257868939Z" level=info msg="shim disconnected" id=1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad Jul 2 10:30:37.257972 env[1203]: time="2024-07-02T10:30:37.257962276Z" level=warning msg="cleaning up after shim disconnected" id=1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad namespace=k8s.io Jul 2 10:30:37.258442 env[1203]: time="2024-07-02T10:30:37.257980300Z" level=info msg="cleaning up dead shim" Jul 2 10:30:37.281429 env[1203]: time="2024-07-02T10:30:37.281328827Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2612 runtime=io.containerd.runc.v2\n" Jul 2 10:30:37.555418 sshd[2434]: Failed password for root from 218.92.0.56 port 10068 ssh2 Jul 2 10:30:37.734533 env[1203]: time="2024-07-02T10:30:37.734476730Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 10:30:37.794211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514344541.mount: Deactivated successfully. 
Jul 2 10:30:37.830982 env[1203]: time="2024-07-02T10:30:37.830446484Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\"" Jul 2 10:30:37.831843 env[1203]: time="2024-07-02T10:30:37.831777341Z" level=info msg="StartContainer for \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\"" Jul 2 10:30:37.879408 systemd[1]: Started cri-containerd-ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb.scope. Jul 2 10:30:37.955676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712117181.mount: Deactivated successfully. Jul 2 10:30:37.986687 systemd[1]: cri-containerd-ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb.scope: Deactivated successfully. Jul 2 10:30:37.999382 env[1203]: time="2024-07-02T10:30:37.988766421Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45dc5c0d_44eb_40ff_bde9_0241ecb69730.slice/cri-containerd-ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb.scope/memory.events\": no such file or directory" Jul 2 10:30:38.057425 env[1203]: time="2024-07-02T10:30:38.057272576Z" level=info msg="StartContainer for \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\" returns successfully" Jul 2 10:30:38.100244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb-rootfs.mount: Deactivated successfully. 
Jul 2 10:30:38.249849 env[1203]: time="2024-07-02T10:30:38.249779787Z" level=info msg="shim disconnected" id=ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb Jul 2 10:30:38.249849 env[1203]: time="2024-07-02T10:30:38.249847209Z" level=warning msg="cleaning up after shim disconnected" id=ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb namespace=k8s.io Jul 2 10:30:38.249849 env[1203]: time="2024-07-02T10:30:38.249863461Z" level=info msg="cleaning up dead shim" Jul 2 10:30:38.278240 env[1203]: time="2024-07-02T10:30:38.278171623Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2671 runtime=io.containerd.runc.v2\n" Jul 2 10:30:38.587624 env[1203]: time="2024-07-02T10:30:38.587562996Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:38.590807 env[1203]: time="2024-07-02T10:30:38.590772384Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:38.593968 env[1203]: time="2024-07-02T10:30:38.593930581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 10:30:38.594449 env[1203]: time="2024-07-02T10:30:38.594409346Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 10:30:38.601091 env[1203]: 
time="2024-07-02T10:30:38.600978746Z" level=info msg="CreateContainer within sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 10:30:38.629667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376193102.mount: Deactivated successfully. Jul 2 10:30:38.647304 env[1203]: time="2024-07-02T10:30:38.647252888Z" level=info msg="CreateContainer within sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\"" Jul 2 10:30:38.650423 env[1203]: time="2024-07-02T10:30:38.650388470Z" level=info msg="StartContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\"" Jul 2 10:30:38.671920 systemd[1]: Started cri-containerd-273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b.scope. Jul 2 10:30:38.730438 env[1203]: time="2024-07-02T10:30:38.730064852Z" level=info msg="StartContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" returns successfully" Jul 2 10:30:38.749443 env[1203]: time="2024-07-02T10:30:38.749376445Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 10:30:38.777097 env[1203]: time="2024-07-02T10:30:38.777031855Z" level=info msg="CreateContainer within sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\"" Jul 2 10:30:38.778113 env[1203]: time="2024-07-02T10:30:38.778077526Z" level=info msg="StartContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\"" Jul 2 10:30:38.806011 systemd[1]: Started 
cri-containerd-6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228.scope. Jul 2 10:30:38.876349 env[1203]: time="2024-07-02T10:30:38.876177296Z" level=info msg="StartContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" returns successfully" Jul 2 10:30:39.254655 kubelet[2068]: I0702 10:30:39.253988 2068 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 10:30:39.807611 kubelet[2068]: I0702 10:30:39.807543 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-djdj7" podStartSLOduration=2.591870037 podCreationTimestamp="2024-07-02 10:30:20 +0000 UTC" firstStartedPulling="2024-07-02 10:30:21.38008625 +0000 UTC m=+14.349751541" lastFinishedPulling="2024-07-02 10:30:38.595637976 +0000 UTC m=+31.565303280" observedRunningTime="2024-07-02 10:30:38.800424531 +0000 UTC m=+31.770089842" watchObservedRunningTime="2024-07-02 10:30:39.807421776 +0000 UTC m=+32.777087080" Jul 2 10:30:39.821169 kubelet[2068]: I0702 10:30:39.817477 2068 topology_manager.go:215] "Topology Admit Handler" podUID="f3884a5e-2f2a-4e96-91f7-d4c14d91e297" podNamespace="kube-system" podName="coredns-5dd5756b68-fn4qf" Jul 2 10:30:39.821169 kubelet[2068]: I0702 10:30:39.817866 2068 topology_manager.go:215] "Topology Admit Handler" podUID="6d65a374-4912-498b-928b-2cfd9744b6a8" podNamespace="kube-system" podName="coredns-5dd5756b68-z66mq" Jul 2 10:30:39.828795 systemd[1]: Created slice kubepods-burstable-pod6d65a374_4912_498b_928b_2cfd9744b6a8.slice. Jul 2 10:30:39.840920 systemd[1]: Created slice kubepods-burstable-podf3884a5e_2f2a_4e96_91f7_d4c14d91e297.slice. 
Jul 2 10:30:39.872888 kubelet[2068]: I0702 10:30:39.872837 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzpb\" (UniqueName: \"kubernetes.io/projected/f3884a5e-2f2a-4e96-91f7-d4c14d91e297-kube-api-access-tnzpb\") pod \"coredns-5dd5756b68-fn4qf\" (UID: \"f3884a5e-2f2a-4e96-91f7-d4c14d91e297\") " pod="kube-system/coredns-5dd5756b68-fn4qf" Jul 2 10:30:39.873891 kubelet[2068]: I0702 10:30:39.873847 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d65a374-4912-498b-928b-2cfd9744b6a8-config-volume\") pod \"coredns-5dd5756b68-z66mq\" (UID: \"6d65a374-4912-498b-928b-2cfd9744b6a8\") " pod="kube-system/coredns-5dd5756b68-z66mq" Jul 2 10:30:39.885691 kubelet[2068]: I0702 10:30:39.874190 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3884a5e-2f2a-4e96-91f7-d4c14d91e297-config-volume\") pod \"coredns-5dd5756b68-fn4qf\" (UID: \"f3884a5e-2f2a-4e96-91f7-d4c14d91e297\") " pod="kube-system/coredns-5dd5756b68-fn4qf" Jul 2 10:30:39.885691 kubelet[2068]: I0702 10:30:39.874662 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqn66\" (UniqueName: \"kubernetes.io/projected/6d65a374-4912-498b-928b-2cfd9744b6a8-kube-api-access-nqn66\") pod \"coredns-5dd5756b68-z66mq\" (UID: \"6d65a374-4912-498b-928b-2cfd9744b6a8\") " pod="kube-system/coredns-5dd5756b68-z66mq" Jul 2 10:30:40.133939 env[1203]: time="2024-07-02T10:30:40.133340409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z66mq,Uid:6d65a374-4912-498b-928b-2cfd9744b6a8,Namespace:kube-system,Attempt:0,}" Jul 2 10:30:40.145577 env[1203]: time="2024-07-02T10:30:40.145515098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-fn4qf,Uid:f3884a5e-2f2a-4e96-91f7-d4c14d91e297,Namespace:kube-system,Attempt:0,}" Jul 2 10:30:40.252739 kubelet[2068]: I0702 10:30:40.252597 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qwc5c" podStartSLOduration=6.544347638 podCreationTimestamp="2024-07-02 10:30:20 +0000 UTC" firstStartedPulling="2024-07-02 10:30:21.224679163 +0000 UTC m=+14.194344461" lastFinishedPulling="2024-07-02 10:30:34.932855248 +0000 UTC m=+27.902520558" observedRunningTime="2024-07-02 10:30:40.15792301 +0000 UTC m=+33.127588312" watchObservedRunningTime="2024-07-02 10:30:40.252523735 +0000 UTC m=+33.222189038" Jul 2 10:30:40.934514 sshd[2434]: Failed password for root from 218.92.0.56 port 10068 ssh2 Jul 2 10:30:42.494611 sshd[2434]: Received disconnect from 218.92.0.56 port 10068:11: [preauth] Jul 2 10:30:42.494611 sshd[2434]: Disconnected from authenticating user root 218.92.0.56 port 10068 [preauth] Jul 2 10:30:42.493511 sshd[2434]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.56 user=root Jul 2 10:30:42.497598 systemd[1]: sshd@12-10.230.55.230:22-218.92.0.56:10068.service: Deactivated successfully. 
Jul 2 10:30:43.240837 systemd-networkd[1028]: cilium_host: Link UP Jul 2 10:30:43.251473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 10:30:43.256710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 10:30:43.250779 systemd-networkd[1028]: cilium_net: Link UP Jul 2 10:30:43.251151 systemd-networkd[1028]: cilium_net: Gained carrier Jul 2 10:30:43.252133 systemd-networkd[1028]: cilium_host: Gained carrier Jul 2 10:30:43.620823 systemd-networkd[1028]: cilium_vxlan: Link UP Jul 2 10:30:43.620846 systemd-networkd[1028]: cilium_vxlan: Gained carrier Jul 2 10:30:43.644745 systemd-networkd[1028]: cilium_net: Gained IPv6LL Jul 2 10:30:43.843511 systemd-networkd[1028]: cilium_host: Gained IPv6LL Jul 2 10:30:44.451238 kernel: NET: Registered PF_ALG protocol family Jul 2 10:30:45.190425 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Jul 2 10:30:45.701091 systemd-networkd[1028]: lxc_health: Link UP Jul 2 10:30:45.722713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 10:30:45.722004 systemd-networkd[1028]: lxc_health: Gained carrier Jul 2 10:30:45.943240 systemd-networkd[1028]: lxca4947f04e9eb: Link UP Jul 2 10:30:45.973262 kernel: eth0: renamed from tmpa15c4 Jul 2 10:30:45.969793 systemd-networkd[1028]: lxcbf084fb46c4f: Link UP Jul 2 10:30:45.998147 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca4947f04e9eb: link becomes ready Jul 2 10:30:46.009419 kernel: eth0: renamed from tmpe2b56 Jul 2 10:30:46.009521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbf084fb46c4f: link becomes ready Jul 2 10:30:45.994565 systemd-networkd[1028]: lxca4947f04e9eb: Gained carrier Jul 2 10:30:46.010625 systemd-networkd[1028]: lxcbf084fb46c4f: Gained carrier Jul 2 10:30:46.857420 systemd-networkd[1028]: lxc_health: Gained IPv6LL Jul 2 10:30:47.299535 systemd-networkd[1028]: lxca4947f04e9eb: Gained IPv6LL Jul 2 10:30:47.683630 systemd-networkd[1028]: lxcbf084fb46c4f: Gained IPv6LL Jul 2 10:30:48.261497 systemd[1]: 
Started sshd@13-10.230.55.230:22-218.92.0.118:42556.service. Jul 2 10:30:50.865915 sshd[3239]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:30:51.766367 env[1203]: time="2024-07-02T10:30:51.766182166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:30:51.767623 env[1203]: time="2024-07-02T10:30:51.766300464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:30:51.767623 env[1203]: time="2024-07-02T10:30:51.766319307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:30:51.768270 env[1203]: time="2024-07-02T10:30:51.768148368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035 pid=3260 runtime=io.containerd.runc.v2 Jul 2 10:30:51.815069 systemd[1]: Started cri-containerd-a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035.scope. Jul 2 10:30:51.839966 env[1203]: time="2024-07-02T10:30:51.815052666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:30:51.839966 env[1203]: time="2024-07-02T10:30:51.815543331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:30:51.839966 env[1203]: time="2024-07-02T10:30:51.815566696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:30:51.839966 env[1203]: time="2024-07-02T10:30:51.816130134Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2b562b0365c5bf60c810e10229068c062e9e68b173a6398bf6a00369675ac21 pid=3281 runtime=io.containerd.runc.v2 Jul 2 10:30:51.838783 systemd[1]: run-containerd-runc-k8s.io-a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035-runc.rxtT5I.mount: Deactivated successfully. Jul 2 10:30:51.872090 systemd[1]: Started cri-containerd-e2b562b0365c5bf60c810e10229068c062e9e68b173a6398bf6a00369675ac21.scope. Jul 2 10:30:51.983606 env[1203]: time="2024-07-02T10:30:51.981927146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fn4qf,Uid:f3884a5e-2f2a-4e96-91f7-d4c14d91e297,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2b562b0365c5bf60c810e10229068c062e9e68b173a6398bf6a00369675ac21\"" Jul 2 10:30:51.995396 env[1203]: time="2024-07-02T10:30:51.995338766Z" level=info msg="CreateContainer within sandbox \"e2b562b0365c5bf60c810e10229068c062e9e68b173a6398bf6a00369675ac21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 10:30:52.054106 env[1203]: time="2024-07-02T10:30:52.053272101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z66mq,Uid:6d65a374-4912-498b-928b-2cfd9744b6a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035\"" Jul 2 10:30:52.058068 env[1203]: time="2024-07-02T10:30:52.058008374Z" level=info msg="CreateContainer within sandbox \"e2b562b0365c5bf60c810e10229068c062e9e68b173a6398bf6a00369675ac21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baf68762bb9540e684b4546e6344606f583f8c50b64d3d0038be687934812399\"" Jul 2 10:30:52.058734 env[1203]: time="2024-07-02T10:30:52.058699656Z" level=info msg="StartContainer for 
\"baf68762bb9540e684b4546e6344606f583f8c50b64d3d0038be687934812399\"" Jul 2 10:30:52.061444 env[1203]: time="2024-07-02T10:30:52.060572133Z" level=info msg="CreateContainer within sandbox \"a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 10:30:52.084037 env[1203]: time="2024-07-02T10:30:52.083969759Z" level=info msg="CreateContainer within sandbox \"a15c465cafeb7b2df9bddca0cc4a1488b16f7155aafd214418c3589cf79d3035\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70eef4c7d137021902c1b394fd591767728b993c800b477359309804d28ae2d8\"" Jul 2 10:30:52.085284 env[1203]: time="2024-07-02T10:30:52.085190661Z" level=info msg="StartContainer for \"70eef4c7d137021902c1b394fd591767728b993c800b477359309804d28ae2d8\"" Jul 2 10:30:52.112626 systemd[1]: Started cri-containerd-baf68762bb9540e684b4546e6344606f583f8c50b64d3d0038be687934812399.scope. Jul 2 10:30:52.145416 systemd[1]: Started cri-containerd-70eef4c7d137021902c1b394fd591767728b993c800b477359309804d28ae2d8.scope. Jul 2 10:30:52.205588 env[1203]: time="2024-07-02T10:30:52.205536392Z" level=info msg="StartContainer for \"baf68762bb9540e684b4546e6344606f583f8c50b64d3d0038be687934812399\" returns successfully" Jul 2 10:30:52.224800 env[1203]: time="2024-07-02T10:30:52.224742833Z" level=info msg="StartContainer for \"70eef4c7d137021902c1b394fd591767728b993c800b477359309804d28ae2d8\" returns successfully" Jul 2 10:30:52.787275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905514528.mount: Deactivated successfully. 
Jul 2 10:30:52.920794 kubelet[2068]: I0702 10:30:52.920740 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fn4qf" podStartSLOduration=32.92063563 podCreationTimestamp="2024-07-02 10:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:52.896718833 +0000 UTC m=+45.866384151" watchObservedRunningTime="2024-07-02 10:30:52.92063563 +0000 UTC m=+45.890300928" Jul 2 10:30:52.954480 kubelet[2068]: I0702 10:30:52.954413 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-z66mq" podStartSLOduration=32.954344009 podCreationTimestamp="2024-07-02 10:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:30:52.921862939 +0000 UTC m=+45.891528237" watchObservedRunningTime="2024-07-02 10:30:52.954344009 +0000 UTC m=+45.924009314" Jul 2 10:30:53.105938 sshd[3239]: Failed password for root from 218.92.0.118 port 42556 ssh2 Jul 2 10:30:57.741450 sshd[3239]: Failed password for root from 218.92.0.118 port 42556 ssh2 Jul 2 10:31:01.880533 sshd[3239]: Failed password for root from 218.92.0.118 port 42556 ssh2 Jul 2 10:31:03.651857 sshd[3239]: Received disconnect from 218.92.0.118 port 42556:11: [preauth] Jul 2 10:31:03.651857 sshd[3239]: Disconnected from authenticating user root 218.92.0.118 port 42556 [preauth] Jul 2 10:31:03.651879 sshd[3239]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:31:03.653403 systemd[1]: sshd@13-10.230.55.230:22-218.92.0.118:42556.service: Deactivated successfully. Jul 2 10:31:04.057081 systemd[1]: Started sshd@14-10.230.55.230:22-218.92.0.118:50509.service. 
Jul 2 10:31:06.961491 sshd[3420]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:31:09.063638 sshd[3420]: Failed password for root from 218.92.0.118 port 50509 ssh2 Jul 2 10:31:13.316421 sshd[3420]: Failed password for root from 218.92.0.118 port 50509 ssh2 Jul 2 10:31:16.539189 sshd[3420]: Failed password for root from 218.92.0.118 port 50509 ssh2 Jul 2 10:31:17.278083 sshd[3420]: Received disconnect from 218.92.0.118 port 50509:11: [preauth] Jul 2 10:31:17.278083 sshd[3420]: Disconnected from authenticating user root 218.92.0.118 port 50509 [preauth] Jul 2 10:31:17.278489 sshd[3420]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:31:17.280134 systemd[1]: sshd@14-10.230.55.230:22-218.92.0.118:50509.service: Deactivated successfully. Jul 2 10:31:23.434688 systemd[1]: Started sshd@15-10.230.55.230:22-218.92.0.118:16493.service. Jul 2 10:31:30.016641 sshd[3432]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:31:30.140476 systemd[1]: Started sshd@16-10.230.55.230:22-147.75.109.163:37696.service. Jul 2 10:31:31.022010 sshd[3435]: Accepted publickey for core from 147.75.109.163 port 37696 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:31.025459 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:31.034003 systemd[1]: Started session-8.scope. Jul 2 10:31:31.035657 systemd-logind[1191]: New session 8 of user core. Jul 2 10:31:31.749104 sshd[3432]: Failed password for root from 218.92.0.118 port 16493 ssh2 Jul 2 10:31:32.106326 sshd[3435]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:32.118037 systemd[1]: sshd@16-10.230.55.230:22-147.75.109.163:37696.service: Deactivated successfully. Jul 2 10:31:32.119022 systemd[1]: session-8.scope: Deactivated successfully. 
Jul 2 10:31:32.122515 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit. Jul 2 10:31:32.124162 systemd-logind[1191]: Removed session 8. Jul 2 10:31:34.431473 sshd[3432]: Failed password for root from 218.92.0.118 port 16493 ssh2 Jul 2 10:31:38.262734 systemd[1]: Started sshd@17-10.230.55.230:22-147.75.109.163:46430.service. Jul 2 10:31:38.304621 sshd[3432]: Failed password for root from 218.92.0.118 port 16493 ssh2 Jul 2 10:31:38.875309 sshd[3432]: Received disconnect from 218.92.0.118 port 16493:11: [preauth] Jul 2 10:31:38.875575 sshd[3432]: Disconnected from authenticating user root 218.92.0.118 port 16493 [preauth] Jul 2 10:31:38.875721 sshd[3432]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.118 user=root Jul 2 10:31:38.877493 systemd[1]: sshd@15-10.230.55.230:22-218.92.0.118:16493.service: Deactivated successfully. Jul 2 10:31:39.145595 sshd[3447]: Accepted publickey for core from 147.75.109.163 port 46430 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:39.151898 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:39.162643 systemd-logind[1191]: New session 9 of user core. Jul 2 10:31:39.163695 systemd[1]: Started session-9.scope. Jul 2 10:31:39.970688 sshd[3447]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:39.975930 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit. Jul 2 10:31:39.978519 systemd[1]: sshd@17-10.230.55.230:22-147.75.109.163:46430.service: Deactivated successfully. Jul 2 10:31:39.979614 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 10:31:39.981969 systemd-logind[1191]: Removed session 9. Jul 2 10:31:45.130423 systemd[1]: Started sshd@18-10.230.55.230:22-147.75.109.163:41338.service. 
Jul 2 10:31:46.012869 sshd[3461]: Accepted publickey for core from 147.75.109.163 port 41338 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:46.015437 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:46.023437 systemd-logind[1191]: New session 10 of user core. Jul 2 10:31:46.024542 systemd[1]: Started session-10.scope. Jul 2 10:31:46.750749 sshd[3461]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:46.755608 systemd[1]: sshd@18-10.230.55.230:22-147.75.109.163:41338.service: Deactivated successfully. Jul 2 10:31:46.756624 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 10:31:46.759470 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit. Jul 2 10:31:46.761303 systemd-logind[1191]: Removed session 10. Jul 2 10:31:51.896819 systemd[1]: Started sshd@19-10.230.55.230:22-147.75.109.163:41354.service. Jul 2 10:31:52.789773 sshd[3475]: Accepted publickey for core from 147.75.109.163 port 41354 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:52.793000 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:52.800918 systemd-logind[1191]: New session 11 of user core. Jul 2 10:31:52.801043 systemd[1]: Started session-11.scope. Jul 2 10:31:53.544149 sshd[3475]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:53.564633 systemd[1]: sshd@19-10.230.55.230:22-147.75.109.163:41354.service: Deactivated successfully. Jul 2 10:31:53.565862 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 10:31:53.575065 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit. Jul 2 10:31:53.577800 systemd-logind[1191]: Removed session 11. Jul 2 10:31:53.696935 systemd[1]: Started sshd@20-10.230.55.230:22-147.75.109.163:36650.service. 
Jul 2 10:31:54.572751 sshd[3490]: Accepted publickey for core from 147.75.109.163 port 36650 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:54.575544 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:54.584169 systemd[1]: Started session-12.scope. Jul 2 10:31:54.584172 systemd-logind[1191]: New session 12 of user core. Jul 2 10:31:56.499094 sshd[3490]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:56.504489 systemd[1]: sshd@20-10.230.55.230:22-147.75.109.163:36650.service: Deactivated successfully. Jul 2 10:31:56.505596 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 10:31:56.506390 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit. Jul 2 10:31:56.508079 systemd-logind[1191]: Removed session 12. Jul 2 10:31:56.644679 systemd[1]: Started sshd@21-10.230.55.230:22-147.75.109.163:36658.service. Jul 2 10:31:57.541819 sshd[3500]: Accepted publickey for core from 147.75.109.163 port 36658 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:31:57.543696 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:31:57.551572 systemd[1]: Started session-13.scope. Jul 2 10:31:57.552094 systemd-logind[1191]: New session 13 of user core. Jul 2 10:31:58.309078 sshd[3500]: pam_unix(sshd:session): session closed for user core Jul 2 10:31:58.312865 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit. Jul 2 10:31:58.314656 systemd[1]: sshd@21-10.230.55.230:22-147.75.109.163:36658.service: Deactivated successfully. Jul 2 10:31:58.315589 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 10:31:58.316739 systemd-logind[1191]: Removed session 13. Jul 2 10:32:03.471780 systemd[1]: Started sshd@22-10.230.55.230:22-147.75.109.163:37840.service. 
Jul 2 10:32:04.339046 sshd[3512]: Accepted publickey for core from 147.75.109.163 port 37840 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:04.341485 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:04.347513 systemd-logind[1191]: New session 14 of user core. Jul 2 10:32:04.348662 systemd[1]: Started session-14.scope. Jul 2 10:32:05.032525 sshd[3512]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:05.036943 systemd[1]: sshd@22-10.230.55.230:22-147.75.109.163:37840.service: Deactivated successfully. Jul 2 10:32:05.038578 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 10:32:05.040139 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit. Jul 2 10:32:05.042008 systemd-logind[1191]: Removed session 14. Jul 2 10:32:10.181141 systemd[1]: Started sshd@23-10.230.55.230:22-147.75.109.163:37856.service. Jul 2 10:32:11.072782 sshd[3526]: Accepted publickey for core from 147.75.109.163 port 37856 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:11.075108 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:11.082777 systemd[1]: Started session-15.scope. Jul 2 10:32:11.083295 systemd-logind[1191]: New session 15 of user core. Jul 2 10:32:11.765463 sshd[3526]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:11.770396 systemd[1]: sshd@23-10.230.55.230:22-147.75.109.163:37856.service: Deactivated successfully. Jul 2 10:32:11.771369 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 10:32:11.772972 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit. Jul 2 10:32:11.774397 systemd-logind[1191]: Removed session 15. Jul 2 10:32:11.921643 systemd[1]: Started sshd@24-10.230.55.230:22-147.75.109.163:37864.service. 
Jul 2 10:32:12.796372 sshd[3538]: Accepted publickey for core from 147.75.109.163 port 37864 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:12.798569 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:12.807535 systemd[1]: Started session-16.scope. Jul 2 10:32:12.810647 systemd-logind[1191]: New session 16 of user core. Jul 2 10:32:14.043749 sshd[3538]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:14.048604 systemd[1]: sshd@24-10.230.55.230:22-147.75.109.163:37864.service: Deactivated successfully. Jul 2 10:32:14.049609 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 10:32:14.051274 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit. Jul 2 10:32:14.053298 systemd-logind[1191]: Removed session 16. Jul 2 10:32:14.197015 systemd[1]: Started sshd@25-10.230.55.230:22-147.75.109.163:56910.service. Jul 2 10:32:15.096251 sshd[3548]: Accepted publickey for core from 147.75.109.163 port 56910 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:15.098892 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:15.107992 systemd-logind[1191]: New session 17 of user core. Jul 2 10:32:15.108398 systemd[1]: Started session-17.scope. Jul 2 10:32:17.127068 sshd[3548]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:17.133974 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit. Jul 2 10:32:17.136031 systemd[1]: sshd@25-10.230.55.230:22-147.75.109.163:56910.service: Deactivated successfully. Jul 2 10:32:17.138787 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 10:32:17.142366 systemd-logind[1191]: Removed session 17. Jul 2 10:32:17.266983 systemd[1]: Started sshd@26-10.230.55.230:22-147.75.109.163:56920.service. 
Jul 2 10:32:18.169351 sshd[3565]: Accepted publickey for core from 147.75.109.163 port 56920 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:18.171014 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:18.181108 systemd-logind[1191]: New session 18 of user core. Jul 2 10:32:18.182031 systemd[1]: Started session-18.scope. Jul 2 10:32:19.300709 sshd[3565]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:19.310168 systemd[1]: sshd@26-10.230.55.230:22-147.75.109.163:56920.service: Deactivated successfully. Jul 2 10:32:19.311786 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 10:32:19.318013 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit. Jul 2 10:32:19.320419 systemd-logind[1191]: Removed session 18. Jul 2 10:32:19.464561 systemd[1]: Started sshd@27-10.230.55.230:22-147.75.109.163:56926.service. Jul 2 10:32:20.342016 sshd[3575]: Accepted publickey for core from 147.75.109.163 port 56926 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:20.346074 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:20.365269 systemd[1]: Started session-19.scope. Jul 2 10:32:20.370954 systemd-logind[1191]: New session 19 of user core. Jul 2 10:32:21.086377 sshd[3575]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:21.092384 systemd[1]: sshd@27-10.230.55.230:22-147.75.109.163:56926.service: Deactivated successfully. Jul 2 10:32:21.093841 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 10:32:21.096847 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit. Jul 2 10:32:21.098983 systemd-logind[1191]: Removed session 19. Jul 2 10:32:26.238741 systemd[1]: Started sshd@28-10.230.55.230:22-147.75.109.163:59552.service. 
Jul 2 10:32:27.122810 sshd[3590]: Accepted publickey for core from 147.75.109.163 port 59552 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:27.124697 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:27.133472 systemd[1]: Started session-20.scope. Jul 2 10:32:27.134121 systemd-logind[1191]: New session 20 of user core. Jul 2 10:32:27.825809 sshd[3590]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:27.829177 systemd[1]: sshd@28-10.230.55.230:22-147.75.109.163:59552.service: Deactivated successfully. Jul 2 10:32:27.830710 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 10:32:27.830758 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit. Jul 2 10:32:27.832230 systemd-logind[1191]: Removed session 20. Jul 2 10:32:32.967676 systemd[1]: Started sshd@29-10.230.55.230:22-147.75.109.163:42794.service. Jul 2 10:32:33.843747 sshd[3604]: Accepted publickey for core from 147.75.109.163 port 42794 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:33.846661 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:33.858739 systemd-logind[1191]: New session 21 of user core. Jul 2 10:32:33.859665 systemd[1]: Started session-21.scope. Jul 2 10:32:34.607805 sshd[3604]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:34.611645 systemd[1]: sshd@29-10.230.55.230:22-147.75.109.163:42794.service: Deactivated successfully. Jul 2 10:32:34.612872 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 10:32:34.614511 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit. Jul 2 10:32:34.617747 systemd-logind[1191]: Removed session 21. Jul 2 10:32:39.732583 systemd[1]: Started sshd@30-10.230.55.230:22-147.75.109.163:42808.service. 
Jul 2 10:32:40.616771 sshd[3616]: Accepted publickey for core from 147.75.109.163 port 42808 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:32:40.618954 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:32:40.625990 systemd-logind[1191]: New session 22 of user core.
Jul 2 10:32:40.626163 systemd[1]: Started session-22.scope.
Jul 2 10:32:41.301346 sshd[3616]: pam_unix(sshd:session): session closed for user core
Jul 2 10:32:41.305028 systemd[1]: sshd@30-10.230.55.230:22-147.75.109.163:42808.service: Deactivated successfully.
Jul 2 10:32:41.306037 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 10:32:41.308131 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit.
Jul 2 10:32:41.310696 systemd-logind[1191]: Removed session 22.
Jul 2 10:32:41.447991 systemd[1]: Started sshd@31-10.230.55.230:22-147.75.109.163:42810.service.
Jul 2 10:32:42.330486 sshd[3627]: Accepted publickey for core from 147.75.109.163 port 42810 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg
Jul 2 10:32:42.332869 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 10:32:42.340192 systemd-logind[1191]: New session 23 of user core.
Jul 2 10:32:42.341555 systemd[1]: Started session-23.scope.
Jul 2 10:32:44.857064 env[1203]: time="2024-07-02T10:32:44.852061803Z" level=info msg="StopContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" with timeout 30 (s)"
Jul 2 10:32:44.859078 env[1203]: time="2024-07-02T10:32:44.858149771Z" level=info msg="Stop container \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" with signal terminated"
Jul 2 10:32:44.911815 systemd[1]: cri-containerd-273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b.scope: Deactivated successfully.
Jul 2 10:32:44.934056 systemd[1]: run-containerd-runc-k8s.io-6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228-runc.GBqhDe.mount: Deactivated successfully.
Jul 2 10:32:44.970655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b-rootfs.mount: Deactivated successfully.
Jul 2 10:32:44.985645 env[1203]: time="2024-07-02T10:32:44.985559094Z" level=info msg="shim disconnected" id=273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b
Jul 2 10:32:44.986068 env[1203]: time="2024-07-02T10:32:44.986026552Z" level=warning msg="cleaning up after shim disconnected" id=273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b namespace=k8s.io
Jul 2 10:32:44.986212 env[1203]: time="2024-07-02T10:32:44.986167624Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:44.986398 env[1203]: time="2024-07-02T10:32:44.986257830Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 10:32:44.996223 env[1203]: time="2024-07-02T10:32:44.996156592Z" level=info msg="StopContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" with timeout 2 (s)"
Jul 2 10:32:44.996901 env[1203]: time="2024-07-02T10:32:44.996868288Z" level=info msg="Stop container \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" with signal terminated"
Jul 2 10:32:45.010018 systemd-networkd[1028]: lxc_health: Link DOWN
Jul 2 10:32:45.010031 systemd-networkd[1028]: lxc_health: Lost carrier
Jul 2 10:32:45.015799 env[1203]: time="2024-07-02T10:32:45.015732895Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3671 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:45.019559 env[1203]: time="2024-07-02T10:32:45.019046079Z" level=info msg="StopContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" returns successfully"
Jul 2 10:32:45.020570 env[1203]: time="2024-07-02T10:32:45.020361256Z" level=info msg="StopPodSandbox for \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\""
Jul 2 10:32:45.020570 env[1203]: time="2024-07-02T10:32:45.020450947Z" level=info msg="Container to stop \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.026685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd-shm.mount: Deactivated successfully.
Jul 2 10:32:45.064375 systemd[1]: cri-containerd-c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd.scope: Deactivated successfully.
Jul 2 10:32:45.083388 systemd[1]: cri-containerd-6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228.scope: Deactivated successfully.
Jul 2 10:32:45.083781 systemd[1]: cri-containerd-6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228.scope: Consumed 9.910s CPU time.
Jul 2 10:32:45.113130 env[1203]: time="2024-07-02T10:32:45.111890622Z" level=info msg="shim disconnected" id=c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd
Jul 2 10:32:45.113130 env[1203]: time="2024-07-02T10:32:45.111974052Z" level=warning msg="cleaning up after shim disconnected" id=c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd namespace=k8s.io
Jul 2 10:32:45.113130 env[1203]: time="2024-07-02T10:32:45.111992077Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:45.120903 env[1203]: time="2024-07-02T10:32:45.120844069Z" level=info msg="shim disconnected" id=6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228
Jul 2 10:32:45.120903 env[1203]: time="2024-07-02T10:32:45.120901867Z" level=warning msg="cleaning up after shim disconnected" id=6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228 namespace=k8s.io
Jul 2 10:32:45.121805 env[1203]: time="2024-07-02T10:32:45.120919037Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:45.129649 env[1203]: time="2024-07-02T10:32:45.129593900Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3728 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:45.130395 env[1203]: time="2024-07-02T10:32:45.130356992Z" level=info msg="TearDown network for sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" successfully"
Jul 2 10:32:45.130553 env[1203]: time="2024-07-02T10:32:45.130520051Z" level=info msg="StopPodSandbox for \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" returns successfully"
Jul 2 10:32:45.150717 env[1203]: time="2024-07-02T10:32:45.150652003Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:45.152910 env[1203]: time="2024-07-02T10:32:45.152864106Z" level=info msg="StopContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" returns successfully"
Jul 2 10:32:45.153968 env[1203]: time="2024-07-02T10:32:45.153932581Z" level=info msg="StopPodSandbox for \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\""
Jul 2 10:32:45.154091 env[1203]: time="2024-07-02T10:32:45.154007820Z" level=info msg="Container to stop \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.154091 env[1203]: time="2024-07-02T10:32:45.154033351Z" level=info msg="Container to stop \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.154091 env[1203]: time="2024-07-02T10:32:45.154052851Z" level=info msg="Container to stop \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.154091 env[1203]: time="2024-07-02T10:32:45.154071612Z" level=info msg="Container to stop \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.154406 env[1203]: time="2024-07-02T10:32:45.154089176Z" level=info msg="Container to stop \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 10:32:45.162765 systemd[1]: cri-containerd-545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9.scope: Deactivated successfully.
Jul 2 10:32:45.196586 env[1203]: time="2024-07-02T10:32:45.196378567Z" level=info msg="shim disconnected" id=545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9
Jul 2 10:32:45.198186 env[1203]: time="2024-07-02T10:32:45.196550753Z" level=warning msg="cleaning up after shim disconnected" id=545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9 namespace=k8s.io
Jul 2 10:32:45.198186 env[1203]: time="2024-07-02T10:32:45.198186452Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:45.219881 env[1203]: time="2024-07-02T10:32:45.219807775Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3773 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:45.220364 env[1203]: time="2024-07-02T10:32:45.220326712Z" level=info msg="TearDown network for sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" successfully"
Jul 2 10:32:45.220453 env[1203]: time="2024-07-02T10:32:45.220364916Z" level=info msg="StopPodSandbox for \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" returns successfully"
Jul 2 10:32:45.269079 kubelet[2068]: I0702 10:32:45.269025 2068 scope.go:117] "RemoveContainer" containerID="273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b"
Jul 2 10:32:45.272351 env[1203]: time="2024-07-02T10:32:45.272288994Z" level=info msg="RemoveContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\""
Jul 2 10:32:45.283571 env[1203]: time="2024-07-02T10:32:45.283515690Z" level=info msg="RemoveContainer for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" returns successfully"
Jul 2 10:32:45.285104 kubelet[2068]: I0702 10:32:45.284309 2068 scope.go:117] "RemoveContainer" containerID="273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b"
Jul 2 10:32:45.285229 env[1203]: time="2024-07-02T10:32:45.284630650Z" level=error msg="ContainerStatus for \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\": not found"
Jul 2 10:32:45.287415 kubelet[2068]: E0702 10:32:45.287385 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\": not found" containerID="273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b"
Jul 2 10:32:45.289657 kubelet[2068]: I0702 10:32:45.289624 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b"} err="failed to get container status \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"273f49819e7a42518cdc2f0d3b8d205dee79e24eae770c739d33d9ba1838cc3b\": not found"
Jul 2 10:32:45.289759 kubelet[2068]: I0702 10:32:45.289667 2068 scope.go:117] "RemoveContainer" containerID="6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228"
Jul 2 10:32:45.294590 env[1203]: time="2024-07-02T10:32:45.294547825Z" level=info msg="RemoveContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\""
Jul 2 10:32:45.299674 env[1203]: time="2024-07-02T10:32:45.299597973Z" level=info msg="RemoveContainer for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" returns successfully"
Jul 2 10:32:45.300177 kubelet[2068]: I0702 10:32:45.300141 2068 scope.go:117] "RemoveContainer" containerID="ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb"
Jul 2 10:32:45.302741 env[1203]: time="2024-07-02T10:32:45.302189110Z" level=info msg="RemoveContainer for \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\""
Jul 2 10:32:45.311746 env[1203]: time="2024-07-02T10:32:45.311502481Z" level=info msg="RemoveContainer for \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\" returns successfully"
Jul 2 10:32:45.312150 kubelet[2068]: I0702 10:32:45.312108 2068 scope.go:117] "RemoveContainer" containerID="1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad"
Jul 2 10:32:45.314327 env[1203]: time="2024-07-02T10:32:45.314269708Z" level=info msg="RemoveContainer for \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\""
Jul 2 10:32:45.320164 env[1203]: time="2024-07-02T10:32:45.319455962Z" level=info msg="RemoveContainer for \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\" returns successfully"
Jul 2 10:32:45.320287 kubelet[2068]: I0702 10:32:45.319725 2068 scope.go:117] "RemoveContainer" containerID="34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906"
Jul 2 10:32:45.324233 env[1203]: time="2024-07-02T10:32:45.324178231Z" level=info msg="RemoveContainer for \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\""
Jul 2 10:32:45.329779 env[1203]: time="2024-07-02T10:32:45.329736054Z" level=info msg="RemoveContainer for \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\" returns successfully"
Jul 2 10:32:45.330290 kubelet[2068]: I0702 10:32:45.330245 2068 scope.go:117] "RemoveContainer" containerID="cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f"
Jul 2 10:32:45.330707 kubelet[2068]: I0702 10:32:45.330681 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cni-path\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.330873 kubelet[2068]: I0702 10:32:45.330849 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-run\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.331043 kubelet[2068]: I0702 10:32:45.331020 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-xtables-lock\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.331244 kubelet[2068]: I0702 10:32:45.331192 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45dc5c0d-44eb-40ff-bde9-0241ecb69730-clustermesh-secrets\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.331433 kubelet[2068]: I0702 10:32:45.331402 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgvv\" (UniqueName: \"kubernetes.io/projected/61beabe7-eaa6-467c-b430-59ed80f1e6e0-kube-api-access-ccgvv\") pod \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\" (UID: \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\") "
Jul 2 10:32:45.331595 kubelet[2068]: I0702 10:32:45.331572 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-lib-modules\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.331810 kubelet[2068]: I0702 10:32:45.331788 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-bpf-maps\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.331976 kubelet[2068]: I0702 10:32:45.331954 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hubble-tls\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.332138 kubelet[2068]: I0702 10:32:45.332116 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61beabe7-eaa6-467c-b430-59ed80f1e6e0-cilium-config-path\") pod \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\" (UID: \"61beabe7-eaa6-467c-b430-59ed80f1e6e0\") "
Jul 2 10:32:45.332317 kubelet[2068]: I0702 10:32:45.332295 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hostproc\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.332488 kubelet[2068]: I0702 10:32:45.332466 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-kernel\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.332673 kubelet[2068]: I0702 10:32:45.332642 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-cgroup\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.332829 kubelet[2068]: I0702 10:32:45.332807 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-config-path\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.332987 kubelet[2068]: I0702 10:32:45.332964 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-etc-cni-netd\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.333343 kubelet[2068]: I0702 10:32:45.333323 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-net\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.333718 kubelet[2068]: I0702 10:32:45.333540 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h4nl\" (UniqueName: \"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-kube-api-access-6h4nl\") pod \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\" (UID: \"45dc5c0d-44eb-40ff-bde9-0241ecb69730\") "
Jul 2 10:32:45.335996 kubelet[2068]: I0702 10:32:45.333613 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cni-path" (OuterVolumeSpecName: "cni-path") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.336226 kubelet[2068]: I0702 10:32:45.336182 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.337023 kubelet[2068]: I0702 10:32:45.336986 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hostproc" (OuterVolumeSpecName: "hostproc") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.337262 kubelet[2068]: I0702 10:32:45.337231 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.339682 kubelet[2068]: I0702 10:32:45.337411 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.339837 kubelet[2068]: I0702 10:32:45.337429 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.339989 kubelet[2068]: I0702 10:32:45.337480 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.340139 kubelet[2068]: I0702 10:32:45.337438 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.340298 kubelet[2068]: I0702 10:32:45.337775 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.340425 kubelet[2068]: I0702 10:32:45.338023 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 10:32:45.346363 kubelet[2068]: I0702 10:32:45.346317 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61beabe7-eaa6-467c-b430-59ed80f1e6e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61beabe7-eaa6-467c-b430-59ed80f1e6e0" (UID: "61beabe7-eaa6-467c-b430-59ed80f1e6e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 10:32:45.348079 kubelet[2068]: I0702 10:32:45.348044 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 10:32:45.348660 env[1203]: time="2024-07-02T10:32:45.348612033Z" level=info msg="RemoveContainer for \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\""
Jul 2 10:32:45.349931 kubelet[2068]: I0702 10:32:45.349867 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 10:32:45.352300 kubelet[2068]: I0702 10:32:45.352265 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61beabe7-eaa6-467c-b430-59ed80f1e6e0-kube-api-access-ccgvv" (OuterVolumeSpecName: "kube-api-access-ccgvv") pod "61beabe7-eaa6-467c-b430-59ed80f1e6e0" (UID: "61beabe7-eaa6-467c-b430-59ed80f1e6e0"). InnerVolumeSpecName "kube-api-access-ccgvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 10:32:45.356813 env[1203]: time="2024-07-02T10:32:45.356753240Z" level=info msg="RemoveContainer for \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\" returns successfully"
Jul 2 10:32:45.357396 kubelet[2068]: I0702 10:32:45.357370 2068 scope.go:117] "RemoveContainer" containerID="6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228"
Jul 2 10:32:45.358182 env[1203]: time="2024-07-02T10:32:45.358097066Z" level=error msg="ContainerStatus for \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\": not found"
Jul 2 10:32:45.358425 kubelet[2068]: E0702 10:32:45.358394 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\": not found" containerID="6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228"
Jul 2 10:32:45.358660 kubelet[2068]: I0702 10:32:45.358632 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228"} err="failed to get container status \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228\": not found"
Jul 2 10:32:45.358890 kubelet[2068]: I0702 10:32:45.358807 2068 scope.go:117] "RemoveContainer" containerID="ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb"
Jul 2 10:32:45.359296 env[1203]: time="2024-07-02T10:32:45.359235581Z" level=error msg="ContainerStatus for \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\": not found"
Jul 2 10:32:45.359537 kubelet[2068]: E0702 10:32:45.359489 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\": not found" containerID="ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb"
Jul 2 10:32:45.359722 kubelet[2068]: I0702 10:32:45.359699 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb"} err="failed to get container status \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec4d9008016e5ea4cbe200a804e7f135cc7346634befa9f525632aee2c92aecb\": not found"
Jul 2 10:32:45.359987 kubelet[2068]: I0702 10:32:45.359882 2068 scope.go:117] "RemoveContainer" containerID="1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad"
Jul 2 10:32:45.360396 env[1203]: time="2024-07-02T10:32:45.360287329Z" level=error msg="ContainerStatus for \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\": not found"
Jul 2 10:32:45.360612 kubelet[2068]: E0702 10:32:45.360588 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\": not found" containerID="1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad"
Jul 2 10:32:45.360793 kubelet[2068]: I0702 10:32:45.360769 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad"} err="failed to get container status \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ef2e90f0931cfed13d3c0b20b13275027384d81866c0c21b7b145ea47a7c6ad\": not found"
Jul 2 10:32:45.360914 kubelet[2068]: I0702 10:32:45.360891 2068 scope.go:117] "RemoveContainer" containerID="34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906"
Jul 2 10:32:45.361721 kubelet[2068]: I0702 10:32:45.361338 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dc5c0d-44eb-40ff-bde9-0241ecb69730-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 10:32:45.362010 kubelet[2068]: I0702 10:32:45.361241 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-kube-api-access-6h4nl" (OuterVolumeSpecName: "kube-api-access-6h4nl") pod "45dc5c0d-44eb-40ff-bde9-0241ecb69730" (UID: "45dc5c0d-44eb-40ff-bde9-0241ecb69730"). InnerVolumeSpecName "kube-api-access-6h4nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 10:32:45.366010 env[1203]: time="2024-07-02T10:32:45.363640860Z" level=error msg="ContainerStatus for \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\": not found"
Jul 2 10:32:45.366390 kubelet[2068]: E0702 10:32:45.366365 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\": not found" containerID="34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906"
Jul 2 10:32:45.367144 kubelet[2068]: I0702 10:32:45.366548 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906"} err="failed to get container status \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\": rpc error: code = NotFound desc = an error occurred when try to find container \"34249fdaa05ad28305b2491b7f4c3ec45b3dbd3dc2b21bef466c972cf001e906\": not found"
Jul 2 10:32:45.367144 kubelet[2068]: I0702 10:32:45.366576 2068 scope.go:117] "RemoveContainer" containerID="cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f"
Jul 2 10:32:45.367364 env[1203]: time="2024-07-02T10:32:45.366972106Z" level=error msg="ContainerStatus for \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\": not found"
Jul 2 10:32:45.367557 kubelet[2068]: E0702 10:32:45.367532 2068 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\": not found" containerID="cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f"
Jul 2 10:32:45.367694 kubelet[2068]: I0702 10:32:45.367671 2068 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f"} err="failed to get container status \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdc71612238ecb86f1b4547487c843c4c90b9ff1b1b4fe23b643531ea6b3076f\": not found"
Jul 2 10:32:45.373359 systemd[1]: Removed slice kubepods-besteffort-pod61beabe7_eaa6_467c_b430_59ed80f1e6e0.slice.
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.434943 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-config-path\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435009 2068 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-etc-cni-netd\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435035 2068 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-net\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435061 2068 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6h4nl\" (UniqueName: \"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-kube-api-access-6h4nl\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435081 2068 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cni-path\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435099 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-run\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435117 2068 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-xtables-lock\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.435140 kubelet[2068]: I0702 10:32:45.435134 2068 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45dc5c0d-44eb-40ff-bde9-0241ecb69730-clustermesh-secrets\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435154 2068 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ccgvv\" (UniqueName: \"kubernetes.io/projected/61beabe7-eaa6-467c-b430-59ed80f1e6e0-kube-api-access-ccgvv\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435172 2068 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-lib-modules\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435189 2068 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-bpf-maps\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\""
Jul 2 10:32:45.436673
kubelet[2068]: I0702 10:32:45.435233 2068 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hubble-tls\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435253 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61beabe7-eaa6-467c-b430-59ed80f1e6e0-cilium-config-path\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435269 2068 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-hostproc\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435287 2068 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-host-proc-sys-kernel\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:45.436673 kubelet[2068]: I0702 10:32:45.435304 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45dc5c0d-44eb-40ff-bde9-0241ecb69730-cilium-cgroup\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:45.578004 systemd[1]: Removed slice kubepods-burstable-pod45dc5c0d_44eb_40ff_bde9_0241ecb69730.slice. Jul 2 10:32:45.578136 systemd[1]: kubepods-burstable-pod45dc5c0d_44eb_40ff_bde9_0241ecb69730.slice: Consumed 10.058s CPU time. Jul 2 10:32:45.921008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ef2043be7381e6dc98b230ea608054f4bf3a3f0370b8cfecf20e856fc050228-rootfs.mount: Deactivated successfully. 
Jul 2 10:32:45.921154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd-rootfs.mount: Deactivated successfully. Jul 2 10:32:45.921284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9-rootfs.mount: Deactivated successfully. Jul 2 10:32:45.921412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9-shm.mount: Deactivated successfully. Jul 2 10:32:45.921538 systemd[1]: var-lib-kubelet-pods-61beabe7\x2deaa6\x2d467c\x2db430\x2d59ed80f1e6e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccgvv.mount: Deactivated successfully. Jul 2 10:32:45.921649 systemd[1]: var-lib-kubelet-pods-45dc5c0d\x2d44eb\x2d40ff\x2dbde9\x2d0241ecb69730-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6h4nl.mount: Deactivated successfully. Jul 2 10:32:45.921756 systemd[1]: var-lib-kubelet-pods-45dc5c0d\x2d44eb\x2d40ff\x2dbde9\x2d0241ecb69730-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:32:45.921873 systemd[1]: var-lib-kubelet-pods-45dc5c0d\x2d44eb\x2d40ff\x2dbde9\x2d0241ecb69730-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 10:32:46.875404 sshd[3627]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:46.885167 systemd[1]: sshd@31-10.230.55.230:22-147.75.109.163:42810.service: Deactivated successfully. Jul 2 10:32:46.886151 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 10:32:46.886409 systemd[1]: session-23.scope: Consumed 1.042s CPU time. Jul 2 10:32:46.888841 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit. Jul 2 10:32:46.890135 systemd-logind[1191]: Removed session 23. Jul 2 10:32:47.022487 systemd[1]: Started sshd@32-10.230.55.230:22-147.75.109.163:48348.service. 
Jul 2 10:32:47.354722 kubelet[2068]: I0702 10:32:47.354680 2068 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" path="/var/lib/kubelet/pods/45dc5c0d-44eb-40ff-bde9-0241ecb69730/volumes" Jul 2 10:32:47.359352 kubelet[2068]: I0702 10:32:47.359154 2068 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="61beabe7-eaa6-467c-b430-59ed80f1e6e0" path="/var/lib/kubelet/pods/61beabe7-eaa6-467c-b430-59ed80f1e6e0/volumes" Jul 2 10:32:47.599707 kubelet[2068]: E0702 10:32:47.599668 2068 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 10:32:47.936860 sshd[3793]: Accepted publickey for core from 147.75.109.163 port 48348 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:47.938772 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:47.948777 systemd-logind[1191]: New session 24 of user core. Jul 2 10:32:47.950007 systemd[1]: Started session-24.scope. 
Jul 2 10:32:49.397092 kubelet[2068]: I0702 10:32:49.397050 2068 topology_manager.go:215] "Topology Admit Handler" podUID="bdf98e0b-39e6-4808-9384-5690d79b4da7" podNamespace="kube-system" podName="cilium-g6b7n" Jul 2 10:32:49.400772 kubelet[2068]: E0702 10:32:49.400739 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="mount-bpf-fs" Jul 2 10:32:49.401083 kubelet[2068]: E0702 10:32:49.401057 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61beabe7-eaa6-467c-b430-59ed80f1e6e0" containerName="cilium-operator" Jul 2 10:32:49.401275 kubelet[2068]: E0702 10:32:49.401251 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="mount-cgroup" Jul 2 10:32:49.401409 kubelet[2068]: E0702 10:32:49.401386 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="apply-sysctl-overwrites" Jul 2 10:32:49.401523 kubelet[2068]: E0702 10:32:49.401501 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="clean-cilium-state" Jul 2 10:32:49.401652 kubelet[2068]: E0702 10:32:49.401629 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="cilium-agent" Jul 2 10:32:49.404434 kubelet[2068]: I0702 10:32:49.404404 2068 memory_manager.go:346] "RemoveStaleState removing state" podUID="61beabe7-eaa6-467c-b430-59ed80f1e6e0" containerName="cilium-operator" Jul 2 10:32:49.404761 kubelet[2068]: I0702 10:32:49.404650 2068 memory_manager.go:346] "RemoveStaleState removing state" podUID="45dc5c0d-44eb-40ff-bde9-0241ecb69730" containerName="cilium-agent" Jul 2 10:32:49.416096 systemd[1]: Created slice kubepods-burstable-podbdf98e0b_39e6_4808_9384_5690d79b4da7.slice. 
Jul 2 10:32:49.429155 kubelet[2068]: W0702 10:32:49.429110 2068 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.429477 kubelet[2068]: E0702 10:32:49.429446 2068 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.429736 kubelet[2068]: W0702 10:32:49.429709 2068 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.429942 kubelet[2068]: E0702 10:32:49.429916 2068 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.430136 kubelet[2068]: W0702 10:32:49.430111 2068 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no 
relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.430280 kubelet[2068]: E0702 10:32:49.430258 2068 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.430418 kubelet[2068]: W0702 10:32:49.430329 2068 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.430559 kubelet[2068]: E0702 10:32:49.430538 2068 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-ehxin.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-ehxin.gb1.brightbox.com' and this object Jul 2 10:32:49.465935 kubelet[2068]: I0702 10:32:49.465892 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-xtables-lock\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.469970 kubelet[2068]: I0702 10:32:49.469935 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-run\") pod \"cilium-g6b7n\" (UID: 
\"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.470315 kubelet[2068]: I0702 10:32:49.470291 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-ipsec-secrets\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.470512 kubelet[2068]: I0702 10:32:49.470489 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-net\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.470746 kubelet[2068]: I0702 10:32:49.470715 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.470956 kubelet[2068]: I0702 10:32:49.470934 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzxl\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-kube-api-access-hzzxl\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.471166 kubelet[2068]: I0702 10:32:49.471143 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-etc-cni-netd\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.472283 kubelet[2068]: 
I0702 10:32:49.472260 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.472532 kubelet[2068]: I0702 10:32:49.472509 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-cgroup\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.472750 kubelet[2068]: I0702 10:32:49.472719 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cni-path\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.473026 kubelet[2068]: I0702 10:32:49.472987 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-hostproc\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.473215 kubelet[2068]: I0702 10:32:49.473175 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-lib-modules\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.473426 kubelet[2068]: I0702 10:32:49.473385 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-bpf-maps\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.473614 kubelet[2068]: I0702 10:32:49.473593 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.473815 kubelet[2068]: I0702 10:32:49.473793 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-kernel\") pod \"cilium-g6b7n\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " pod="kube-system/cilium-g6b7n" Jul 2 10:32:49.548007 sshd[3793]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:49.552019 systemd[1]: sshd@32-10.230.55.230:22-147.75.109.163:48348.service: Deactivated successfully. Jul 2 10:32:49.553039 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 10:32:49.553950 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit. Jul 2 10:32:49.559046 systemd-logind[1191]: Removed session 24. Jul 2 10:32:49.692557 systemd[1]: Started sshd@33-10.230.55.230:22-147.75.109.163:48358.service. 
Jul 2 10:32:50.570788 sshd[3805]: Accepted publickey for core from 147.75.109.163 port 48358 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:50.574009 sshd[3805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:50.576695 kubelet[2068]: E0702 10:32:50.576660 2068 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 10:32:50.577263 kubelet[2068]: E0702 10:32:50.577236 2068 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path podName:bdf98e0b-39e6-4808-9384-5690d79b4da7 nodeName:}" failed. No retries permitted until 2024-07-02 10:32:51.077154239 +0000 UTC m=+164.046819531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path") pod "cilium-g6b7n" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7") : failed to sync configmap cache: timed out waiting for the condition Jul 2 10:32:50.578048 kubelet[2068]: E0702 10:32:50.578010 2068 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 10:32:50.580673 kubelet[2068]: E0702 10:32:50.580449 2068 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-g6b7n: failed to sync secret cache: timed out waiting for the condition Jul 2 10:32:50.580875 kubelet[2068]: E0702 10:32:50.580389 2068 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 2 10:32:50.581029 kubelet[2068]: E0702 10:32:50.581004 2068 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls 
podName:bdf98e0b-39e6-4808-9384-5690d79b4da7 nodeName:}" failed. No retries permitted until 2024-07-02 10:32:51.080830086 +0000 UTC m=+164.050495377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls") pod "cilium-g6b7n" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7") : failed to sync secret cache: timed out waiting for the condition Jul 2 10:32:50.581249 kubelet[2068]: E0702 10:32:50.581226 2068 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets podName:bdf98e0b-39e6-4808-9384-5690d79b4da7 nodeName:}" failed. No retries permitted until 2024-07-02 10:32:51.081188222 +0000 UTC m=+164.050853526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets") pod "cilium-g6b7n" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7") : failed to sync secret cache: timed out waiting for the condition Jul 2 10:32:50.590545 systemd[1]: Started session-25.scope. Jul 2 10:32:50.591262 systemd-logind[1191]: New session 25 of user core. Jul 2 10:32:51.222356 env[1203]: time="2024-07-02T10:32:51.222255100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6b7n,Uid:bdf98e0b-39e6-4808-9384-5690d79b4da7,Namespace:kube-system,Attempt:0,}" Jul 2 10:32:51.263919 env[1203]: time="2024-07-02T10:32:51.262883038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:32:51.263919 env[1203]: time="2024-07-02T10:32:51.262972535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:32:51.263919 env[1203]: time="2024-07-02T10:32:51.263003777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:32:51.263919 env[1203]: time="2024-07-02T10:32:51.263287939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd pid=3825 runtime=io.containerd.runc.v2 Jul 2 10:32:51.296648 systemd[1]: Started cri-containerd-a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd.scope. Jul 2 10:32:51.362926 env[1203]: time="2024-07-02T10:32:51.362769898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6b7n,Uid:bdf98e0b-39e6-4808-9384-5690d79b4da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\"" Jul 2 10:32:51.373404 env[1203]: time="2024-07-02T10:32:51.373328191Z" level=info msg="CreateContainer within sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:32:51.392126 env[1203]: time="2024-07-02T10:32:51.392042453Z" level=info msg="CreateContainer within sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\"" Jul 2 10:32:51.394642 env[1203]: time="2024-07-02T10:32:51.394603203Z" level=info msg="StartContainer for \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\"" Jul 2 10:32:51.402997 sshd[3805]: pam_unix(sshd:session): session closed for user core Jul 2 10:32:51.407159 systemd[1]: sshd@33-10.230.55.230:22-147.75.109.163:48358.service: Deactivated successfully. 
Jul 2 10:32:51.408232 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 10:32:51.409583 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit. Jul 2 10:32:51.410796 systemd-logind[1191]: Removed session 25. Jul 2 10:32:51.434254 systemd[1]: Started cri-containerd-f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35.scope. Jul 2 10:32:51.457382 systemd[1]: cri-containerd-f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35.scope: Deactivated successfully. Jul 2 10:32:51.477975 env[1203]: time="2024-07-02T10:32:51.477172148Z" level=info msg="shim disconnected" id=f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35 Jul 2 10:32:51.477975 env[1203]: time="2024-07-02T10:32:51.477268478Z" level=warning msg="cleaning up after shim disconnected" id=f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35 namespace=k8s.io Jul 2 10:32:51.477975 env[1203]: time="2024-07-02T10:32:51.477285760Z" level=info msg="cleaning up dead shim" Jul 2 10:32:51.493266 env[1203]: time="2024-07-02T10:32:51.493176780Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3884 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T10:32:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 10:32:51.493698 env[1203]: time="2024-07-02T10:32:51.493527381Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Jul 2 10:32:51.494040 env[1203]: time="2024-07-02T10:32:51.493975683Z" level=error msg="Failed to pipe stdout of container \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\"" error="reading from a closed fifo" Jul 2 10:32:51.494350 env[1203]: time="2024-07-02T10:32:51.494293946Z" level=error 
msg="Failed to pipe stderr of container \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\"" error="reading from a closed fifo" Jul 2 10:32:51.495814 env[1203]: time="2024-07-02T10:32:51.495761108Z" level=error msg="StartContainer for \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 10:32:51.497347 kubelet[2068]: E0702 10:32:51.496092 2068 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35" Jul 2 10:32:51.501385 kubelet[2068]: E0702 10:32:51.501244 2068 kuberuntime_manager.go:1261] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 10:32:51.501385 kubelet[2068]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 10:32:51.501385 kubelet[2068]: rm /hostbin/cilium-mount Jul 2 10:32:51.501618 kubelet[2068]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hzzxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g6b7n_kube-system(bdf98e0b-39e6-4808-9384-5690d79b4da7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 10:32:51.501618 kubelet[2068]: E0702 10:32:51.501327 2068 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g6b7n" podUID="bdf98e0b-39e6-4808-9384-5690d79b4da7" Jul 2 10:32:51.548605 systemd[1]: Started sshd@34-10.230.55.230:22-147.75.109.163:48364.service. Jul 2 10:32:51.711656 kubelet[2068]: I0702 10:32:51.711591 2068 setters.go:552] "Node became not ready" node="srv-ehxin.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T10:32:51Z","lastTransitionTime":"2024-07-02T10:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 10:32:52.098598 systemd[1]: run-containerd-runc-k8s.io-a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd-runc.OTI842.mount: Deactivated successfully. Jul 2 10:32:52.289175 env[1203]: time="2024-07-02T10:32:52.289108223Z" level=info msg="StopPodSandbox for \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\"" Jul 2 10:32:52.289718 env[1203]: time="2024-07-02T10:32:52.289217160Z" level=info msg="Container to stop \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 10:32:52.292140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd-shm.mount: Deactivated successfully. Jul 2 10:32:52.311075 systemd[1]: cri-containerd-a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd.scope: Deactivated successfully. Jul 2 10:32:52.343659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd-rootfs.mount: Deactivated successfully. 
Jul 2 10:32:52.351536 env[1203]: time="2024-07-02T10:32:52.351011942Z" level=info msg="shim disconnected" id=a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd Jul 2 10:32:52.351536 env[1203]: time="2024-07-02T10:32:52.351166260Z" level=warning msg="cleaning up after shim disconnected" id=a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd namespace=k8s.io Jul 2 10:32:52.351536 env[1203]: time="2024-07-02T10:32:52.351189097Z" level=info msg="cleaning up dead shim" Jul 2 10:32:52.362652 env[1203]: time="2024-07-02T10:32:52.362594607Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Jul 2 10:32:52.363300 env[1203]: time="2024-07-02T10:32:52.363262277Z" level=info msg="TearDown network for sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" successfully" Jul 2 10:32:52.363455 env[1203]: time="2024-07-02T10:32:52.363420881Z" level=info msg="StopPodSandbox for \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" returns successfully" Jul 2 10:32:52.401165 kubelet[2068]: I0702 10:32:52.401112 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-lib-modules\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.401499 kubelet[2068]: I0702 10:32:52.401355 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.402307 kubelet[2068]: I0702 10:32:52.401673 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.402816 kubelet[2068]: I0702 10:32:52.402463 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403033 kubelet[2068]: I0702 10:32:52.402992 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-kernel\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403186 kubelet[2068]: I0702 10:32:52.403163 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-ipsec-secrets\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403359 kubelet[2068]: I0702 10:32:52.403336 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzzxl\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-kube-api-access-hzzxl\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403487 kubelet[2068]: I0702 10:32:52.403465 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-cgroup\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403702 kubelet[2068]: I0702 10:32:52.403677 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-run\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.403849 kubelet[2068]: I0702 10:32:52.403827 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-net\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404230 kubelet[2068]: I0702 10:32:52.403995 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404230 kubelet[2068]: I0702 10:32:52.404032 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-xtables-lock\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404230 kubelet[2068]: I0702 10:32:52.404063 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-hostproc\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404230 kubelet[2068]: I0702 10:32:52.404098 2068 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-bpf-maps\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404230 kubelet[2068]: I0702 10:32:52.404133 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-etc-cni-netd\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.404844 kubelet[2068]: I0702 10:32:52.404820 2068 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cni-path\") pod \"bdf98e0b-39e6-4808-9384-5690d79b4da7\" (UID: \"bdf98e0b-39e6-4808-9384-5690d79b4da7\") " Jul 2 10:32:52.405191 kubelet[2068]: I0702 10:32:52.405167 2068 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-lib-modules\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.405357 kubelet[2068]: I0702 10:32:52.404965 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cni-path" (OuterVolumeSpecName: "cni-path") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.405485 kubelet[2068]: I0702 10:32:52.404273 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.405611 kubelet[2068]: I0702 10:32:52.404784 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.405742 kubelet[2068]: I0702 10:32:52.404249 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.409430 systemd[1]: var-lib-kubelet-pods-bdf98e0b\x2d39e6\x2d4808\x2d9384\x2d5690d79b4da7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 10:32:52.411302 kubelet[2068]: I0702 10:32:52.410614 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.411302 kubelet[2068]: I0702 10:32:52.410680 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-hostproc" (OuterVolumeSpecName: "hostproc") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.411302 kubelet[2068]: I0702 10:32:52.410713 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.411582 kubelet[2068]: I0702 10:32:52.411554 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.411737 kubelet[2068]: I0702 10:32:52.411710 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 10:32:52.414018 kubelet[2068]: I0702 10:32:52.413753 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 10:32:52.414296 kubelet[2068]: I0702 10:32:52.414266 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:32:52.416934 systemd[1]: var-lib-kubelet-pods-bdf98e0b\x2d39e6\x2d4808\x2d9384\x2d5690d79b4da7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 10:32:52.419348 kubelet[2068]: I0702 10:32:52.419282 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:32:52.420549 kubelet[2068]: I0702 10:32:52.420511 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 10:32:52.422059 kubelet[2068]: I0702 10:32:52.422024 2068 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-kube-api-access-hzzxl" (OuterVolumeSpecName: "kube-api-access-hzzxl") pod "bdf98e0b-39e6-4808-9384-5690d79b4da7" (UID: "bdf98e0b-39e6-4808-9384-5690d79b4da7"). InnerVolumeSpecName "kube-api-access-hzzxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 10:32:52.433147 sshd[3899]: Accepted publickey for core from 147.75.109.163 port 48364 ssh2: RSA SHA256:tplVoPuf7nNE4yvFHu+9Y9e9LG8fTMx2zzRxkTkSEBg Jul 2 10:32:52.436041 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 10:32:52.446007 systemd-logind[1191]: New session 26 of user core. Jul 2 10:32:52.447090 systemd[1]: Started session-26.scope. Jul 2 10:32:52.506391 kubelet[2068]: I0702 10:32:52.506348 2068 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cni-path\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.506635 kubelet[2068]: I0702 10:32:52.506612 2068 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-hubble-tls\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.506798 kubelet[2068]: I0702 10:32:52.506776 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-cgroup\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.506944 kubelet[2068]: I0702 10:32:52.506921 2068 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-clustermesh-secrets\") on node 
\"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507136 kubelet[2068]: I0702 10:32:52.507113 2068 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-kernel\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507287 kubelet[2068]: I0702 10:32:52.507264 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-ipsec-secrets\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507429 kubelet[2068]: I0702 10:32:52.507407 2068 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hzzxl\" (UniqueName: \"kubernetes.io/projected/bdf98e0b-39e6-4808-9384-5690d79b4da7-kube-api-access-hzzxl\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507580 kubelet[2068]: I0702 10:32:52.507558 2068 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-xtables-lock\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507714 kubelet[2068]: I0702 10:32:52.507693 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-run\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.507859 kubelet[2068]: I0702 10:32:52.507837 2068 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-host-proc-sys-net\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.508000 kubelet[2068]: I0702 10:32:52.507979 2068 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/bdf98e0b-39e6-4808-9384-5690d79b4da7-cilium-config-path\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.508899 kubelet[2068]: I0702 10:32:52.508866 2068 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-hostproc\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.509019 kubelet[2068]: I0702 10:32:52.508999 2068 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-bpf-maps\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.509155 kubelet[2068]: I0702 10:32:52.509134 2068 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bdf98e0b-39e6-4808-9384-5690d79b4da7-etc-cni-netd\") on node \"srv-ehxin.gb1.brightbox.com\" DevicePath \"\"" Jul 2 10:32:52.601450 kubelet[2068]: E0702 10:32:52.601409 2068 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 10:32:53.098298 systemd[1]: var-lib-kubelet-pods-bdf98e0b\x2d39e6\x2d4808\x2d9384\x2d5690d79b4da7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 10:32:53.098432 systemd[1]: var-lib-kubelet-pods-bdf98e0b\x2d39e6\x2d4808\x2d9384\x2d5690d79b4da7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzzxl.mount: Deactivated successfully. 
Jul 2 10:32:53.293256 kubelet[2068]: I0702 10:32:53.293189 2068 scope.go:117] "RemoveContainer" containerID="f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35" Jul 2 10:32:53.298170 env[1203]: time="2024-07-02T10:32:53.297754857Z" level=info msg="RemoveContainer for \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\"" Jul 2 10:32:53.300762 systemd[1]: Removed slice kubepods-burstable-podbdf98e0b_39e6_4808_9384_5690d79b4da7.slice. Jul 2 10:32:53.302795 env[1203]: time="2024-07-02T10:32:53.302758022Z" level=info msg="RemoveContainer for \"f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35\" returns successfully" Jul 2 10:32:53.467423 kubelet[2068]: I0702 10:32:53.467342 2068 topology_manager.go:215] "Topology Admit Handler" podUID="73b6c6c3-f420-4beb-800c-9e6467827442" podNamespace="kube-system" podName="cilium-pxk45" Jul 2 10:32:53.467642 kubelet[2068]: E0702 10:32:53.467475 2068 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bdf98e0b-39e6-4808-9384-5690d79b4da7" containerName="mount-cgroup" Jul 2 10:32:53.467642 kubelet[2068]: I0702 10:32:53.467519 2068 memory_manager.go:346] "RemoveStaleState removing state" podUID="bdf98e0b-39e6-4808-9384-5690d79b4da7" containerName="mount-cgroup" Jul 2 10:32:53.475637 systemd[1]: Created slice kubepods-burstable-pod73b6c6c3_f420_4beb_800c_9e6467827442.slice. 
Jul 2 10:32:53.515266 kubelet[2068]: I0702 10:32:53.515221 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-hostproc\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515482 kubelet[2068]: I0702 10:32:53.515299 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-bpf-maps\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515482 kubelet[2068]: I0702 10:32:53.515354 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-host-proc-sys-kernel\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515482 kubelet[2068]: I0702 10:32:53.515393 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-cilium-cgroup\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515482 kubelet[2068]: I0702 10:32:53.515446 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-etc-cni-netd\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515743 kubelet[2068]: I0702 10:32:53.515482 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-host-proc-sys-net\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515743 kubelet[2068]: I0702 10:32:53.515546 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-cilium-run\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515743 kubelet[2068]: I0702 10:32:53.515597 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-cni-path\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515743 kubelet[2068]: I0702 10:32:53.515639 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-lib-modules\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.515743 kubelet[2068]: I0702 10:32:53.515693 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73b6c6c3-f420-4beb-800c-9e6467827442-cilium-config-path\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.516210 kubelet[2068]: I0702 10:32:53.515756 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73b6c6c3-f420-4beb-800c-9e6467827442-hubble-tls\") pod \"cilium-pxk45\" 
(UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.516210 kubelet[2068]: I0702 10:32:53.515795 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgcj\" (UniqueName: \"kubernetes.io/projected/73b6c6c3-f420-4beb-800c-9e6467827442-kube-api-access-lpgcj\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.516210 kubelet[2068]: I0702 10:32:53.515846 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73b6c6c3-f420-4beb-800c-9e6467827442-xtables-lock\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.516210 kubelet[2068]: I0702 10:32:53.515883 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73b6c6c3-f420-4beb-800c-9e6467827442-clustermesh-secrets\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.516210 kubelet[2068]: I0702 10:32:53.515976 2068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73b6c6c3-f420-4beb-800c-9e6467827442-cilium-ipsec-secrets\") pod \"cilium-pxk45\" (UID: \"73b6c6c3-f420-4beb-800c-9e6467827442\") " pod="kube-system/cilium-pxk45" Jul 2 10:32:53.780906 env[1203]: time="2024-07-02T10:32:53.780755830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxk45,Uid:73b6c6c3-f420-4beb-800c-9e6467827442,Namespace:kube-system,Attempt:0,}" Jul 2 10:32:53.806837 env[1203]: time="2024-07-02T10:32:53.806747849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 10:32:53.807094 env[1203]: time="2024-07-02T10:32:53.806803972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 10:32:53.807094 env[1203]: time="2024-07-02T10:32:53.806822127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 10:32:53.807094 env[1203]: time="2024-07-02T10:32:53.807028620Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13 pid=3955 runtime=io.containerd.runc.v2 Jul 2 10:32:53.842516 systemd[1]: Started cri-containerd-c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13.scope. Jul 2 10:32:53.880336 env[1203]: time="2024-07-02T10:32:53.880279319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxk45,Uid:73b6c6c3-f420-4beb-800c-9e6467827442,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\"" Jul 2 10:32:53.885986 env[1203]: time="2024-07-02T10:32:53.885061639Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 10:32:53.898775 env[1203]: time="2024-07-02T10:32:53.898713649Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77\"" Jul 2 10:32:53.902399 env[1203]: time="2024-07-02T10:32:53.902350387Z" level=info msg="StartContainer for \"217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77\"" Jul 2 10:32:53.930918 systemd[1]: Started 
cri-containerd-217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77.scope. Jul 2 10:32:53.975689 env[1203]: time="2024-07-02T10:32:53.975625427Z" level=info msg="StartContainer for \"217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77\" returns successfully" Jul 2 10:32:53.999762 systemd[1]: cri-containerd-217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77.scope: Deactivated successfully. Jul 2 10:32:54.036317 env[1203]: time="2024-07-02T10:32:54.035799031Z" level=info msg="shim disconnected" id=217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77 Jul 2 10:32:54.036690 env[1203]: time="2024-07-02T10:32:54.036659379Z" level=warning msg="cleaning up after shim disconnected" id=217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77 namespace=k8s.io Jul 2 10:32:54.036838 env[1203]: time="2024-07-02T10:32:54.036810896Z" level=info msg="cleaning up dead shim" Jul 2 10:32:54.049490 env[1203]: time="2024-07-02T10:32:54.049435357Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4037 runtime=io.containerd.runc.v2\n" Jul 2 10:32:54.303065 env[1203]: time="2024-07-02T10:32:54.302720869Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 10:32:54.334462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293042406.mount: Deactivated successfully. Jul 2 10:32:54.342844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620537676.mount: Deactivated successfully. 
Jul 2 10:32:54.347997 env[1203]: time="2024-07-02T10:32:54.347945984Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f\""
Jul 2 10:32:54.350527 env[1203]: time="2024-07-02T10:32:54.349149512Z" level=info msg="StartContainer for \"617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f\""
Jul 2 10:32:54.369765 systemd[1]: Started cri-containerd-617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f.scope.
Jul 2 10:32:54.411756 env[1203]: time="2024-07-02T10:32:54.411700786Z" level=info msg="StartContainer for \"617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f\" returns successfully"
Jul 2 10:32:54.425792 systemd[1]: cri-containerd-617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f.scope: Deactivated successfully.
Jul 2 10:32:54.456064 env[1203]: time="2024-07-02T10:32:54.455992909Z" level=info msg="shim disconnected" id=617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f
Jul 2 10:32:54.456467 env[1203]: time="2024-07-02T10:32:54.456433668Z" level=warning msg="cleaning up after shim disconnected" id=617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f namespace=k8s.io
Jul 2 10:32:54.456668 env[1203]: time="2024-07-02T10:32:54.456639196Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:54.468654 env[1203]: time="2024-07-02T10:32:54.468592609Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4100 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:54.595791 kubelet[2068]: W0702 10:32:54.595074 2068 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf98e0b_39e6_4808_9384_5690d79b4da7.slice/cri-containerd-f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35.scope WatchSource:0}: container "f3d665cdd0bb58231cbd8c629d892816e9c59c1e35e290c8bb84516900c7de35" in namespace "k8s.io": not found
Jul 2 10:32:55.316628 env[1203]: time="2024-07-02T10:32:55.316565513Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 10:32:55.349862 env[1203]: time="2024-07-02T10:32:55.349807015Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951\""
Jul 2 10:32:55.350967 env[1203]: time="2024-07-02T10:32:55.350931231Z" level=info msg="StartContainer for \"7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951\""
Jul 2 10:32:55.358578 kubelet[2068]: I0702 10:32:55.358543 2068 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bdf98e0b-39e6-4808-9384-5690d79b4da7" path="/var/lib/kubelet/pods/bdf98e0b-39e6-4808-9384-5690d79b4da7/volumes"
Jul 2 10:32:55.395227 systemd[1]: Started cri-containerd-7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951.scope.
Jul 2 10:32:55.460778 env[1203]: time="2024-07-02T10:32:55.460725041Z" level=info msg="StartContainer for \"7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951\" returns successfully"
Jul 2 10:32:55.472772 systemd[1]: cri-containerd-7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951.scope: Deactivated successfully.
Jul 2 10:32:55.503032 env[1203]: time="2024-07-02T10:32:55.502965444Z" level=info msg="shim disconnected" id=7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951
Jul 2 10:32:55.503377 env[1203]: time="2024-07-02T10:32:55.503344901Z" level=warning msg="cleaning up after shim disconnected" id=7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951 namespace=k8s.io
Jul 2 10:32:55.503569 env[1203]: time="2024-07-02T10:32:55.503540399Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:55.519314 env[1203]: time="2024-07-02T10:32:55.519259327Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4155 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:56.098733 systemd[1]: run-containerd-runc-k8s.io-7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951-runc.fZScpS.mount: Deactivated successfully.
Jul 2 10:32:56.098866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951-rootfs.mount: Deactivated successfully.
Jul 2 10:32:56.322739 env[1203]: time="2024-07-02T10:32:56.322675555Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 10:32:56.346783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002111722.mount: Deactivated successfully.
Jul 2 10:32:56.361189 env[1203]: time="2024-07-02T10:32:56.361120433Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3\""
Jul 2 10:32:56.363465 env[1203]: time="2024-07-02T10:32:56.363432960Z" level=info msg="StartContainer for \"88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3\""
Jul 2 10:32:56.368490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124781914.mount: Deactivated successfully.
Jul 2 10:32:56.389242 systemd[1]: Started cri-containerd-88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3.scope.
Jul 2 10:32:56.428763 systemd[1]: cri-containerd-88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3.scope: Deactivated successfully.
Jul 2 10:32:56.430427 env[1203]: time="2024-07-02T10:32:56.430369022Z" level=info msg="StartContainer for \"88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3\" returns successfully"
Jul 2 10:32:56.473149 env[1203]: time="2024-07-02T10:32:56.473077400Z" level=info msg="shim disconnected" id=88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3
Jul 2 10:32:56.473149 env[1203]: time="2024-07-02T10:32:56.473147885Z" level=warning msg="cleaning up after shim disconnected" id=88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3 namespace=k8s.io
Jul 2 10:32:56.473550 env[1203]: time="2024-07-02T10:32:56.473165502Z" level=info msg="cleaning up dead shim"
Jul 2 10:32:56.483494 env[1203]: time="2024-07-02T10:32:56.483442677Z" level=warning msg="cleanup warnings time=\"2024-07-02T10:32:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4211 runtime=io.containerd.runc.v2\n"
Jul 2 10:32:57.328436 env[1203]: time="2024-07-02T10:32:57.328354845Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 10:32:57.366844 env[1203]: time="2024-07-02T10:32:57.366769889Z" level=info msg="CreateContainer within sandbox \"c9b4191d3b48cf2fd26327af97f2e53f2fd3dc2bcb589d1765b8c62dcccb1f13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4\""
Jul 2 10:32:57.367845 env[1203]: time="2024-07-02T10:32:57.367812302Z" level=info msg="StartContainer for \"379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4\""
Jul 2 10:32:57.397920 systemd[1]: Started cri-containerd-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4.scope.
Jul 2 10:32:57.445079 env[1203]: time="2024-07-02T10:32:57.444143815Z" level=info msg="StartContainer for \"379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4\" returns successfully"
Jul 2 10:32:57.712273 kubelet[2068]: W0702 10:32:57.712175 2068 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b6c6c3_f420_4beb_800c_9e6467827442.slice/cri-containerd-217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77.scope WatchSource:0}: task 217e8d2622ca7fcf8a6287d3357d85b9ed4afb910c9b61612f0cd510800f0d77 not found: not found
Jul 2 10:32:58.327283 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 10:32:58.365066 kubelet[2068]: I0702 10:32:58.364983 2068 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pxk45" podStartSLOduration=5.364830513 podCreationTimestamp="2024-07-02 10:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 10:32:58.362236672 +0000 UTC m=+171.331901977" watchObservedRunningTime="2024-07-02 10:32:58.364830513 +0000 UTC m=+171.334495806"
Jul 2 10:32:59.397414 systemd[1]: run-containerd-runc-k8s.io-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4-runc.gVzg3c.mount: Deactivated successfully.
Jul 2 10:33:00.823661 kubelet[2068]: W0702 10:33:00.823588 2068 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b6c6c3_f420_4beb_800c_9e6467827442.slice/cri-containerd-617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f.scope WatchSource:0}: task 617a2d0c7862d0c44e0ce87fd8b79e01f810ce58bd6fe8010bce23c6fdcf8d8f not found: not found
Jul 2 10:33:01.606446 systemd[1]: run-containerd-runc-k8s.io-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4-runc.YnIkCT.mount: Deactivated successfully.
Jul 2 10:33:01.726704 kubelet[2068]: E0702 10:33:01.726650 2068 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:45362->127.0.0.1:37721: read: connection reset by peer
Jul 2 10:33:01.773901 systemd-networkd[1028]: lxc_health: Link UP
Jul 2 10:33:01.797261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 10:33:01.802334 systemd-networkd[1028]: lxc_health: Gained carrier
Jul 2 10:33:03.683452 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Jul 2 10:33:03.906619 systemd[1]: run-containerd-runc-k8s.io-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4-runc.bQUqfI.mount: Deactivated successfully.
Jul 2 10:33:03.936482 kubelet[2068]: W0702 10:33:03.936022 2068 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b6c6c3_f420_4beb_800c_9e6467827442.slice/cri-containerd-7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951.scope WatchSource:0}: task 7eafabbdd87185902e8ffd535556fad969ec780e0e08fe2c3feae01c394df951 not found: not found
Jul 2 10:33:06.274098 systemd[1]: run-containerd-runc-k8s.io-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4-runc.XaIg3p.mount: Deactivated successfully.
Jul 2 10:33:07.050775 kubelet[2068]: W0702 10:33:07.050679 2068 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b6c6c3_f420_4beb_800c_9e6467827442.slice/cri-containerd-88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3.scope WatchSource:0}: task 88bd7aa7295d87bc3aac150f8528325af588f8e77ba53d7fdaf764e5f787f0f3 not found: not found
Jul 2 10:33:07.306027 env[1203]: time="2024-07-02T10:33:07.305588136Z" level=info msg="StopPodSandbox for \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\""
Jul 2 10:33:07.306766 env[1203]: time="2024-07-02T10:33:07.306703481Z" level=info msg="TearDown network for sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" successfully"
Jul 2 10:33:07.306944 env[1203]: time="2024-07-02T10:33:07.306910634Z" level=info msg="StopPodSandbox for \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" returns successfully"
Jul 2 10:33:07.307669 env[1203]: time="2024-07-02T10:33:07.307633484Z" level=info msg="RemovePodSandbox for \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\""
Jul 2 10:33:07.307868 env[1203]: time="2024-07-02T10:33:07.307792060Z" level=info msg="Forcibly stopping sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\""
Jul 2 10:33:07.308079 env[1203]: time="2024-07-02T10:33:07.308037034Z" level=info msg="TearDown network for sandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" successfully"
Jul 2 10:33:07.314863 env[1203]: time="2024-07-02T10:33:07.314822329Z" level=info msg="RemovePodSandbox \"a0f6c748b0b0389a97c5d36c53f2c17e13c74827e7d924a0b1e7c00e4f3d3ddd\" returns successfully"
Jul 2 10:33:07.315650 env[1203]: time="2024-07-02T10:33:07.315617328Z" level=info msg="StopPodSandbox for \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\""
Jul 2 10:33:07.315921 env[1203]: time="2024-07-02T10:33:07.315866774Z" level=info msg="TearDown network for sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" successfully"
Jul 2 10:33:07.316084 env[1203]: time="2024-07-02T10:33:07.316055850Z" level=info msg="StopPodSandbox for \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" returns successfully"
Jul 2 10:33:07.316675 env[1203]: time="2024-07-02T10:33:07.316643700Z" level=info msg="RemovePodSandbox for \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\""
Jul 2 10:33:07.316889 env[1203]: time="2024-07-02T10:33:07.316844018Z" level=info msg="Forcibly stopping sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\""
Jul 2 10:33:07.317117 env[1203]: time="2024-07-02T10:33:07.317067847Z" level=info msg="TearDown network for sandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" successfully"
Jul 2 10:33:07.320980 env[1203]: time="2024-07-02T10:33:07.320947794Z" level=info msg="RemovePodSandbox \"545535414e0b7db68c4747b18b95213be286b893a0e4ce3e08c4b7b2685fcfb9\" returns successfully"
Jul 2 10:33:07.321913 env[1203]: time="2024-07-02T10:33:07.321880872Z" level=info msg="StopPodSandbox for \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\""
Jul 2 10:33:07.322173 env[1203]: time="2024-07-02T10:33:07.322121809Z" level=info msg="TearDown network for sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" successfully"
Jul 2 10:33:07.322353 env[1203]: time="2024-07-02T10:33:07.322319430Z" level=info msg="StopPodSandbox for \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" returns successfully"
Jul 2 10:33:07.322954 env[1203]: time="2024-07-02T10:33:07.322897262Z" level=info msg="RemovePodSandbox for \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\""
Jul 2 10:33:07.323165 env[1203]: time="2024-07-02T10:33:07.323111129Z" level=info msg="Forcibly stopping sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\""
Jul 2 10:33:07.323398 env[1203]: time="2024-07-02T10:33:07.323348272Z" level=info msg="TearDown network for sandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" successfully"
Jul 2 10:33:07.327174 env[1203]: time="2024-07-02T10:33:07.327131582Z" level=info msg="RemovePodSandbox \"c34f15cc8d2a1ba692b046ba45a49e3c943eab574cf07ed48b2c3114a7b17afd\" returns successfully"
Jul 2 10:33:08.517419 systemd[1]: run-containerd-runc-k8s.io-379047fd96c11082c3798913d8ac850e0088ea509a3f9c5574d7260821da8da4-runc.sdvfar.mount: Deactivated successfully.
Jul 2 10:33:08.597147 kubelet[2068]: E0702 10:33:08.596978 2068 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34328->127.0.0.1:37721: write tcp 127.0.0.1:34328->127.0.0.1:37721: write: connection reset by peer
Jul 2 10:33:08.816139 sshd[3899]: pam_unix(sshd:session): session closed for user core
Jul 2 10:33:08.820887 systemd-logind[1191]: Session 26 logged out. Waiting for processes to exit.
Jul 2 10:33:08.822475 systemd[1]: sshd@34-10.230.55.230:22-147.75.109.163:48364.service: Deactivated successfully.
Jul 2 10:33:08.823509 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 10:33:08.824139 systemd-logind[1191]: Removed session 26.