Mar 17 21:19:29.106861 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 21:19:29.106898 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 21:19:29.106917 kernel: BIOS-provided physical RAM map:
Mar 17 21:19:29.106927 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 21:19:29.106936 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 21:19:29.106945 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 21:19:29.106955 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 17 21:19:29.106965 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 17 21:19:29.106986 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 21:19:29.106995 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 21:19:29.107009 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 21:19:29.107019 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 21:19:29.107028 kernel: NX (Execute Disable) protection: active
Mar 17 21:19:29.107037 kernel: SMBIOS 2.8 present.
Mar 17 21:19:29.107049 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 17 21:19:29.107059 kernel: Hypervisor detected: KVM
Mar 17 21:19:29.107073 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 21:19:29.107084 kernel: kvm-clock: cpu 0, msr 7c19a001, primary cpu clock
Mar 17 21:19:29.111524 kernel: kvm-clock: using sched offset of 5238370510 cycles
Mar 17 21:19:29.111537 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 21:19:29.111548 kernel: tsc: Detected 2799.998 MHz processor
Mar 17 21:19:29.111559 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 21:19:29.111570 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 21:19:29.111580 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 17 21:19:29.111590 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 21:19:29.111608 kernel: Using GB pages for direct mapping
Mar 17 21:19:29.111618 kernel: ACPI: Early table checksum verification disabled
Mar 17 21:19:29.111628 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 17 21:19:29.111639 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111649 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111668 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111679 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 17 21:19:29.111690 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111700 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111715 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111725 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 21:19:29.111736 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 17 21:19:29.111746 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 17 21:19:29.111756 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 17 21:19:29.111767 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 17 21:19:29.111783 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 17 21:19:29.111797 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 17 21:19:29.111808 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 17 21:19:29.111819 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 21:19:29.111830 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 21:19:29.111841 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 17 21:19:29.111851 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 17 21:19:29.111862 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 17 21:19:29.111877 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 17 21:19:29.111887 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 17 21:19:29.111898 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 17 21:19:29.111909 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 17 21:19:29.111919 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 17 21:19:29.111930 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 17 21:19:29.111941 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 17 21:19:29.111951 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 17 21:19:29.111962 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 17 21:19:29.111983 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 17 21:19:29.111999 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 17 21:19:29.112016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 21:19:29.112027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 21:19:29.112038 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 17 21:19:29.112049 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 17 21:19:29.112060 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 17 21:19:29.112071 kernel: Zone ranges:
Mar 17 21:19:29.112082 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 21:19:29.112116 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 17 21:19:29.112133 kernel: Normal empty
Mar 17 21:19:29.112144 kernel: Movable zone start for each node
Mar 17 21:19:29.112154 kernel: Early memory node ranges
Mar 17 21:19:29.112165 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 21:19:29.112176 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 17 21:19:29.112187 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 17 21:19:29.112198 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 21:19:29.112208 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 21:19:29.112219 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 17 21:19:29.112240 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 21:19:29.112253 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 21:19:29.112264 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 21:19:29.112275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 21:19:29.112286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 21:19:29.112297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 21:19:29.112308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 21:19:29.112319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 21:19:29.112330 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 21:19:29.112345 kernel: TSC deadline timer available
Mar 17 21:19:29.112356 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 17 21:19:29.112367 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 21:19:29.112378 kernel: Booting paravirtualized kernel on KVM
Mar 17 21:19:29.112389 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 21:19:29.112400 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Mar 17 21:19:29.112411 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Mar 17 21:19:29.112422 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Mar 17 21:19:29.112432 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 17 21:19:29.112447 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Mar 17 21:19:29.112458 kernel: kvm-guest: PV spinlocks enabled
Mar 17 21:19:29.112469 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 21:19:29.112480 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 17 21:19:29.112490 kernel: Policy zone: DMA32
Mar 17 21:19:29.112534 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 21:19:29.112548 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 21:19:29.112559 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 21:19:29.112590 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 21:19:29.112604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 21:19:29.112616 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Mar 17 21:19:29.112627 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 17 21:19:29.112638 kernel: Kernel/User page tables isolation: enabled
Mar 17 21:19:29.112648 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 21:19:29.112659 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 21:19:29.112670 kernel: rcu: Hierarchical RCU implementation.
Mar 17 21:19:29.112681 kernel: rcu: RCU event tracing is enabled.
Mar 17 21:19:29.112697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 17 21:19:29.112708 kernel: Rude variant of Tasks RCU enabled.
Mar 17 21:19:29.112726 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 21:19:29.112738 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 21:19:29.112749 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 17 21:19:29.112760 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 17 21:19:29.112771 kernel: random: crng init done
Mar 17 21:19:29.112795 kernel: Console: colour VGA+ 80x25
Mar 17 21:19:29.112807 kernel: printk: console [tty0] enabled
Mar 17 21:19:29.112818 kernel: printk: console [ttyS0] enabled
Mar 17 21:19:29.112830 kernel: ACPI: Core revision 20210730
Mar 17 21:19:29.112841 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 21:19:29.112860 kernel: x2apic enabled
Mar 17 21:19:29.112873 kernel: Switched APIC routing to physical x2apic.
Mar 17 21:19:29.112885 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Mar 17 21:19:29.112896 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Mar 17 21:19:29.112908 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 21:19:29.112924 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 21:19:29.112935 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 21:19:29.112947 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 21:19:29.112958 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 21:19:29.112979 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 21:19:29.112992 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 21:19:29.113004 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 21:19:29.113015 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 21:19:29.113026 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 21:19:29.113037 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 21:19:29.113053 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 17 21:19:29.113064 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 17 21:19:29.113076 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 21:19:29.113097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 21:19:29.113110 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 21:19:29.113121 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 21:19:29.113133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 21:19:29.113144 kernel: Freeing SMP alternatives memory: 32K
Mar 17 21:19:29.113155 kernel: pid_max: default: 32768 minimum: 301
Mar 17 21:19:29.113167 kernel: LSM: Security Framework initializing
Mar 17 21:19:29.113178 kernel: SELinux: Initializing.
Mar 17 21:19:29.113194 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 21:19:29.113206 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 21:19:29.113217 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 17 21:19:29.113229 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 17 21:19:29.113240 kernel: signal: max sigframe size: 1776
Mar 17 21:19:29.113252 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 21:19:29.113263 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 21:19:29.113275 kernel: smp: Bringing up secondary CPUs ...
Mar 17 21:19:29.113286 kernel: x86: Booting SMP configuration:
Mar 17 21:19:29.113297 kernel: .... node #0, CPUs: #1
Mar 17 21:19:29.113313 kernel: kvm-clock: cpu 1, msr 7c19a041, secondary cpu clock
Mar 17 21:19:29.113324 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 17 21:19:29.113336 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Mar 17 21:19:29.113347 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 21:19:29.113358 kernel: smpboot: Max logical packages: 16
Mar 17 21:19:29.113370 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Mar 17 21:19:29.113381 kernel: devtmpfs: initialized
Mar 17 21:19:29.113393 kernel: x86/mm: Memory block size: 128MB
Mar 17 21:19:29.113404 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 21:19:29.113419 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 17 21:19:29.113431 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 21:19:29.113443 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 21:19:29.113454 kernel: audit: initializing netlink subsys (disabled)
Mar 17 21:19:29.113466 kernel: audit: type=2000 audit(1742246367.091:1): state=initialized audit_enabled=0 res=1
Mar 17 21:19:29.113477 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 21:19:29.113489 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 21:19:29.113500 kernel: cpuidle: using governor menu
Mar 17 21:19:29.113511 kernel: ACPI: bus type PCI registered
Mar 17 21:19:29.113526 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 21:19:29.113538 kernel: dca service started, version 1.12.1
Mar 17 21:19:29.113549 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 21:19:29.113561 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 21:19:29.113572 kernel: PCI: Using configuration type 1 for base access
Mar 17 21:19:29.113584 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 21:19:29.113595 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 21:19:29.113607 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 21:19:29.113618 kernel: ACPI: Added _OSI(Module Device)
Mar 17 21:19:29.113634 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 21:19:29.113645 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 21:19:29.113656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 21:19:29.113668 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 21:19:29.113679 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 21:19:29.113691 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 21:19:29.113702 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 21:19:29.113713 kernel: ACPI: Interpreter enabled
Mar 17 21:19:29.113725 kernel: ACPI: PM: (supports S0 S5)
Mar 17 21:19:29.113740 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 21:19:29.113752 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 21:19:29.113763 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 21:19:29.113775 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 21:19:29.114077 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 21:19:29.118323 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 21:19:29.118484 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 21:19:29.118536 kernel: PCI host bridge to bus 0000:00
Mar 17 21:19:29.118706 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 21:19:29.118853 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 21:19:29.119011 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 21:19:29.119172 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 17 21:19:29.119314 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 21:19:29.119454 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 17 21:19:29.119602 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 21:19:29.119792 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 21:19:29.119995 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 17 21:19:29.120170 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 17 21:19:29.120329 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 17 21:19:29.120483 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 17 21:19:29.120637 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 21:19:29.120817 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.120988 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 17 21:19:29.121213 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.121406 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 17 21:19:29.121597 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.121788 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 17 21:19:29.121986 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.122219 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 17 21:19:29.122413 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.122570 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 17 21:19:29.122731 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.122883 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 17 21:19:29.132159 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.132396 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 17 21:19:29.132576 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 21:19:29.132736 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 17 21:19:29.132922 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 21:19:29.133143 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 21:19:29.133315 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 17 21:19:29.133470 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 17 21:19:29.133624 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 17 21:19:29.133789 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 21:19:29.133944 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 21:19:29.134127 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 17 21:19:29.134282 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 17 21:19:29.134483 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 21:19:29.134649 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 21:19:29.134823 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 21:19:29.135031 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 17 21:19:29.135237 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 17 21:19:29.135437 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 21:19:29.135638 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 21:19:29.135976 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 17 21:19:29.136168 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 17 21:19:29.136329 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 17 21:19:29.136482 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 17 21:19:29.136637 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 21:19:29.136827 kernel: pci_bus 0000:02: extended config space not accessible
Mar 17 21:19:29.137059 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 17 21:19:29.137282 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 17 21:19:29.137447 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 17 21:19:29.137606 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 21:19:29.137788 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 21:19:29.137951 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 17 21:19:29.138134 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 17 21:19:29.138332 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 21:19:29.138490 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 21:19:29.138678 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 21:19:29.138845 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 17 21:19:29.139016 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 17 21:19:29.139188 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 21:19:29.139343 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 21:19:29.139507 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 17 21:19:29.139662 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 21:19:29.139814 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 21:19:29.139995 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 17 21:19:29.140203 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 21:19:29.140362 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 21:19:29.140522 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 17 21:19:29.140711 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 21:19:29.140876 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 21:19:29.141048 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 17 21:19:29.141214 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 21:19:29.141372 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 21:19:29.141529 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 17 21:19:29.141685 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 21:19:29.141877 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 21:19:29.141896 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 21:19:29.141909 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 21:19:29.141927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 21:19:29.141939 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 21:19:29.141951 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 21:19:29.141963 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 21:19:29.142028 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 21:19:29.142042 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 21:19:29.142054 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 21:19:29.142065 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 21:19:29.142077 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 21:19:29.150153 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 21:19:29.150169 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 21:19:29.150182 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 21:19:29.150194 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 21:19:29.150206 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 21:19:29.150218 kernel: iommu: Default domain type: Translated
Mar 17 21:19:29.150230 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 21:19:29.150456 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 21:19:29.150645 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 21:19:29.150806 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 21:19:29.150824 kernel: vgaarb: loaded
Mar 17 21:19:29.150836 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 21:19:29.150849 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 21:19:29.150860 kernel: PTP clock support registered
Mar 17 21:19:29.150872 kernel: PCI: Using ACPI for IRQ routing
Mar 17 21:19:29.150884 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 21:19:29.150895 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 21:19:29.150913 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 17 21:19:29.150924 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 21:19:29.150936 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 21:19:29.150948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 21:19:29.150959 kernel: pnp: PnP ACPI init
Mar 17 21:19:29.151207 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 21:19:29.151228 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 21:19:29.151241 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 21:19:29.151260 kernel: NET: Registered PF_INET protocol family
Mar 17 21:19:29.151272 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 21:19:29.151284 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 21:19:29.151296 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 21:19:29.151308 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 21:19:29.151319 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 21:19:29.151331 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 21:19:29.151343 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 21:19:29.151355 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 21:19:29.151370 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 21:19:29.151382 kernel: NET: Registered PF_XDP protocol family
Mar 17 21:19:29.151551 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 17 21:19:29.151726 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 21:19:29.151883 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 21:19:29.152053 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 21:19:29.152225 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 21:19:29.152387 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 21:19:29.152544 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 21:19:29.152698 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 21:19:29.152852 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 21:19:29.153017 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 21:19:29.153220 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 21:19:29.153424 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 21:19:29.153581 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 21:19:29.153733 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 21:19:29.153891 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 21:19:29.154106 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 17 21:19:29.154288 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 17 21:19:29.154452 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 21:19:29.154642 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 17 21:19:29.154806 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 17 21:19:29.154963 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 17 21:19:29.155160 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 21:19:29.155339 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 17 21:19:29.155496 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 17 21:19:29.155652 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 21:19:29.155844 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 21:19:29.156015 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 17 21:19:29.156224 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 17 21:19:29.156389 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 21:19:29.156541 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 21:19:29.156693 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 17 21:19:29.156848 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 17 21:19:29.157017 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 21:19:29.163524 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 21:19:29.163691 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 17 21:19:29.163854 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 17 21:19:29.164034 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 21:19:29.164203 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 21:19:29.164359 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 17 21:19:29.164510 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 17 21:19:29.164696 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 21:19:29.164852 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 21:19:29.165022 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 17 21:19:29.165204 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 17 21:19:29.165359 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 21:19:29.165512 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 21:19:29.165664 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 17 21:19:29.165819 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 17 21:19:29.166017 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 21:19:29.166189 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 21:19:29.166339 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 21:19:29.166481 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 21:19:29.166620 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 21:19:29.166762 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 17 21:19:29.166903 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 21:19:29.167065 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 17 21:19:29.167283 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 17 21:19:29.167468 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 17 21:19:29.167618 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 21:19:29.167780 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 17 21:19:29.167978 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 17 21:19:29.173961 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 17 21:19:29.174196 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 21:19:29.174381 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 17 21:19:29.174532 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 17 21:19:29.174679 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 21:19:29.174846 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 17 21:19:29.175032 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 17 21:19:29.175198 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 21:19:29.175416 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 17 21:19:29.175566 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 17 21:19:29.175715 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 21:19:29.175873 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 17 21:19:29.176037 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 17 21:19:29.176214 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 21:19:29.176424 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 17 21:19:29.176585 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 17 21:19:29.176768 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 21:19:29.176946 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 17 21:19:29.177122 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 17 21:19:29.177273 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 21:19:29.177293 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 21:19:29.177306 kernel: PCI: CLS 0 bytes,
default 64 Mar 17 21:19:29.177335 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 21:19:29.177348 kernel: software IO TLB: mapped [mem 0x0000000073000000-0x0000000077000000] (64MB) Mar 17 21:19:29.177361 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 21:19:29.177373 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Mar 17 21:19:29.177385 kernel: Initialise system trusted keyrings Mar 17 21:19:29.177398 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 21:19:29.177410 kernel: Key type asymmetric registered Mar 17 21:19:29.177422 kernel: Asymmetric key parser 'x509' registered Mar 17 21:19:29.177434 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 21:19:29.177451 kernel: io scheduler mq-deadline registered Mar 17 21:19:29.177463 kernel: io scheduler kyber registered Mar 17 21:19:29.177476 kernel: io scheduler bfq registered Mar 17 21:19:29.177650 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 17 21:19:29.177812 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 17 21:19:29.177978 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.178191 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 17 21:19:29.178351 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 17 21:19:29.178512 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.178669 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 17 21:19:29.178824 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 17 21:19:29.178989 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Mar 17 21:19:29.180233 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 17 21:19:29.180396 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 17 21:19:29.180560 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.180717 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 17 21:19:29.180871 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 17 21:19:29.181041 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.181212 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 17 21:19:29.181403 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 17 21:19:29.181568 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.181741 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 17 21:19:29.181896 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 17 21:19:29.182153 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.182328 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 17 21:19:29.182485 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 17 21:19:29.182648 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 21:19:29.182668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 21:19:29.182681 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 21:19:29.182694 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 21:19:29.182706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 21:19:29.182719 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 21:19:29.182731 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 21:19:29.182759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 21:19:29.182772 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 21:19:29.182929 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 21:19:29.182949 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 21:19:29.183120 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 21:19:29.183303 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T21:19:28 UTC (1742246368) Mar 17 21:19:29.183451 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 17 21:19:29.183469 kernel: intel_pstate: CPU model not supported Mar 17 21:19:29.183496 kernel: NET: Registered PF_INET6 protocol family Mar 17 21:19:29.183510 kernel: Segment Routing with IPv6 Mar 17 21:19:29.183522 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 21:19:29.183535 kernel: NET: Registered PF_PACKET protocol family Mar 17 21:19:29.183547 kernel: Key type dns_resolver registered Mar 17 21:19:29.183559 kernel: IPI shorthand broadcast: enabled Mar 17 21:19:29.183572 kernel: sched_clock: Marking stable (1105186547, 212769803)->(1612617968, -294661618) Mar 17 21:19:29.183584 kernel: registered taskstats version 1 Mar 17 21:19:29.183596 kernel: Loading compiled-in X.509 certificates Mar 17 21:19:29.183618 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220' Mar 17 21:19:29.183631 kernel: Key type .fscrypt registered Mar 17 21:19:29.183643 kernel: Key type fscrypt-provisioning registered Mar 17 21:19:29.183655 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 21:19:29.183667 kernel: ima: Allocated hash algorithm: sha1 Mar 17 21:19:29.183679 kernel: ima: No architecture policies found Mar 17 21:19:29.183691 kernel: clk: Disabling unused clocks Mar 17 21:19:29.183704 kernel: Freeing unused kernel image (initmem) memory: 47472K Mar 17 21:19:29.183716 kernel: Write protecting the kernel read-only data: 28672k Mar 17 21:19:29.183742 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Mar 17 21:19:29.183754 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Mar 17 21:19:29.183766 kernel: Run /init as init process Mar 17 21:19:29.183779 kernel: with arguments: Mar 17 21:19:29.183791 kernel: /init Mar 17 21:19:29.183802 kernel: with environment: Mar 17 21:19:29.183814 kernel: HOME=/ Mar 17 21:19:29.183826 kernel: TERM=linux Mar 17 21:19:29.183837 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 21:19:29.183863 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 21:19:29.183878 systemd[1]: Detected virtualization kvm. Mar 17 21:19:29.183896 systemd[1]: Detected architecture x86-64. Mar 17 21:19:29.183909 systemd[1]: Running in initrd. Mar 17 21:19:29.183922 systemd[1]: No hostname configured, using default hostname. Mar 17 21:19:29.183934 systemd[1]: Hostname set to <localhost>. Mar 17 21:19:29.183948 systemd[1]: Initializing machine ID from VM UUID. Mar 17 21:19:29.183980 systemd[1]: Queued start job for default target initrd.target. Mar 17 21:19:29.183995 systemd[1]: Started systemd-ask-password-console.path. Mar 17 21:19:29.184007 systemd[1]: Reached target cryptsetup.target. Mar 17 21:19:29.184020 systemd[1]: Reached target paths.target. Mar 17 21:19:29.184033 systemd[1]: Reached target slices.target. 
Mar 17 21:19:29.184045 systemd[1]: Reached target swap.target. Mar 17 21:19:29.184058 systemd[1]: Reached target timers.target. Mar 17 21:19:29.184071 systemd[1]: Listening on iscsid.socket. Mar 17 21:19:29.184112 systemd[1]: Listening on iscsiuio.socket. Mar 17 21:19:29.184126 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 21:19:29.184139 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 21:19:29.184152 systemd[1]: Listening on systemd-journald.socket. Mar 17 21:19:29.184164 systemd[1]: Listening on systemd-networkd.socket. Mar 17 21:19:29.184177 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 21:19:29.184190 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 21:19:29.184203 systemd[1]: Reached target sockets.target. Mar 17 21:19:29.184217 systemd[1]: Starting kmod-static-nodes.service... Mar 17 21:19:29.184241 systemd[1]: Finished network-cleanup.service. Mar 17 21:19:29.184254 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 21:19:29.184267 systemd[1]: Starting systemd-journald.service... Mar 17 21:19:29.184280 systemd[1]: Starting systemd-modules-load.service... Mar 17 21:19:29.184293 systemd[1]: Starting systemd-resolved.service... Mar 17 21:19:29.184306 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 21:19:29.184318 systemd[1]: Finished kmod-static-nodes.service. Mar 17 21:19:29.184331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 21:19:29.184364 systemd-journald[202]: Journal started Mar 17 21:19:29.184442 systemd-journald[202]: Runtime Journal (/run/log/journal/b7df7fb1a06c41ad903e58a7026df6be) is 4.7M, max 38.1M, 33.3M free. Mar 17 21:19:29.108367 systemd-modules-load[203]: Inserted module 'overlay' Mar 17 21:19:29.200582 kernel: Bridge firewalling registered Mar 17 21:19:29.200614 systemd[1]: Started systemd-resolved.service. 
Mar 17 21:19:29.200636 kernel: audit: type=1130 audit(1742246369.193:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.151405 systemd-resolved[204]: Positive Trust Anchors: Mar 17 21:19:29.207460 systemd[1]: Started systemd-journald.service. Mar 17 21:19:29.207489 kernel: audit: type=1130 audit(1742246369.201:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.151417 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 21:19:29.213599 kernel: audit: type=1130 audit(1742246369.208:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:29.151456 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 21:19:29.229023 kernel: audit: type=1130 audit(1742246369.214:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.229055 kernel: audit: type=1130 audit(1742246369.221:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.229074 kernel: SCSI subsystem initialized Mar 17 21:19:29.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.154827 systemd-resolved[204]: Defaulting to hostname 'linux'. Mar 17 21:19:29.193386 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 17 21:19:29.208490 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 21:19:29.214457 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 21:19:29.221889 systemd[1]: Reached target nss-lookup.target. Mar 17 21:19:29.223649 systemd[1]: Starting dracut-cmdline-ask.service... 
Mar 17 21:19:29.233371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 21:19:29.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.247035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 21:19:29.253399 kernel: audit: type=1130 audit(1742246369.247:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.258124 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 21:19:29.258159 kernel: device-mapper: uevent: version 1.0.3 Mar 17 21:19:29.260365 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 21:19:29.260765 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 21:19:29.262559 systemd[1]: Starting dracut-cmdline.service... Mar 17 21:19:29.268360 kernel: audit: type=1130 audit(1742246369.261:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.271391 systemd-modules-load[203]: Inserted module 'dm_multipath' Mar 17 21:19:29.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.273081 systemd[1]: Finished systemd-modules-load.service. 
Mar 17 21:19:29.281808 kernel: audit: type=1130 audit(1742246369.273:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.274718 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:19:29.295823 systemd[1]: Finished systemd-sysctl.service. Mar 17 21:19:29.297252 dracut-cmdline[219]: dracut-dracut-053 Mar 17 21:19:29.297252 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 21:19:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.306140 kernel: audit: type=1130 audit(1742246369.301:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.403152 kernel: Loading iSCSI transport class v2.0-870. Mar 17 21:19:29.431439 kernel: iscsi: registered transport (tcp) Mar 17 21:19:29.456455 kernel: iscsi: registered transport (qla4xxx) Mar 17 21:19:29.456529 kernel: QLogic iSCSI HBA Driver Mar 17 21:19:29.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.508181 systemd[1]: Finished dracut-cmdline.service. Mar 17 21:19:29.510956 systemd[1]: Starting dracut-pre-udev.service... 
Mar 17 21:19:29.574006 kernel: raid6: sse2x4 gen() 14463 MB/s Mar 17 21:19:29.591137 kernel: raid6: sse2x4 xor() 8153 MB/s Mar 17 21:19:29.609132 kernel: raid6: sse2x2 gen() 10239 MB/s Mar 17 21:19:29.627999 kernel: raid6: sse2x2 xor() 8583 MB/s Mar 17 21:19:29.645138 kernel: raid6: sse2x1 gen() 10148 MB/s Mar 17 21:19:29.663734 kernel: raid6: sse2x1 xor() 7661 MB/s Mar 17 21:19:29.663808 kernel: raid6: using algorithm sse2x4 gen() 14463 MB/s Mar 17 21:19:29.663827 kernel: raid6: .... xor() 8153 MB/s, rmw enabled Mar 17 21:19:29.664965 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 21:19:29.681121 kernel: xor: automatically using best checksumming function avx Mar 17 21:19:29.793136 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 21:19:29.805978 systemd[1]: Finished dracut-pre-udev.service. Mar 17 21:19:29.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.807000 audit: BPF prog-id=7 op=LOAD Mar 17 21:19:29.807000 audit: BPF prog-id=8 op=LOAD Mar 17 21:19:29.808291 systemd[1]: Starting systemd-udevd.service... Mar 17 21:19:29.824518 systemd-udevd[401]: Using default interface naming scheme 'v252'. Mar 17 21:19:29.832005 systemd[1]: Started systemd-udevd.service. Mar 17 21:19:29.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.836463 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 21:19:29.856102 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Mar 17 21:19:29.900276 systemd[1]: Finished dracut-pre-trigger.service. 
Mar 17 21:19:29.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:29.906119 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 21:19:30.002914 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 21:19:30.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:30.107246 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 21:19:30.149326 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 21:19:30.149360 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 21:19:30.149378 kernel: GPT:17805311 != 125829119 Mar 17 21:19:30.149409 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 21:19:30.149427 kernel: GPT:17805311 != 125829119 Mar 17 21:19:30.149442 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 21:19:30.149457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 21:19:30.149473 kernel: ACPI: bus type USB registered Mar 17 21:19:30.149488 kernel: usbcore: registered new interface driver usbfs Mar 17 21:19:30.153116 kernel: usbcore: registered new interface driver hub Mar 17 21:19:30.155757 kernel: AVX version of gcm_enc/dec engaged. Mar 17 21:19:30.155790 kernel: AES CTR mode by8 optimization enabled Mar 17 21:19:30.155824 kernel: usbcore: registered new device driver usb Mar 17 21:19:30.190123 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458) Mar 17 21:19:30.194114 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 21:19:30.318972 kernel: libata version 3.00 loaded. 
Mar 17 21:19:30.319025 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 21:19:30.319348 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 21:19:30.319371 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 21:19:30.319568 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 21:19:30.319750 kernel: scsi host0: ahci Mar 17 21:19:30.320007 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 21:19:30.320236 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 17 21:19:30.320420 kernel: scsi host1: ahci Mar 17 21:19:30.320627 kernel: scsi host2: ahci Mar 17 21:19:30.320904 kernel: scsi host3: ahci Mar 17 21:19:30.321132 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 21:19:30.321326 kernel: scsi host4: ahci Mar 17 21:19:30.321541 kernel: scsi host5: ahci Mar 17 21:19:30.321740 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 17 21:19:30.321759 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 17 21:19:30.321776 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 17 21:19:30.321792 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 17 21:19:30.321823 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 17 21:19:30.321841 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 17 21:19:30.321868 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 21:19:30.322069 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 17 21:19:30.322260 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 17 21:19:30.322433 kernel: hub 1-0:1.0: USB hub found Mar 17 21:19:30.322657 kernel: hub 1-0:1.0: 4 ports detected Mar 17 21:19:30.322863 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 17 21:19:30.323173 kernel: hub 2-0:1.0: USB hub found Mar 17 21:19:30.323395 kernel: hub 2-0:1.0: 4 ports detected Mar 17 21:19:30.318201 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 21:19:30.324209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 21:19:30.331394 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 21:19:30.336451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 21:19:30.338323 systemd[1]: Starting disk-uuid.service... Mar 17 21:19:30.350118 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 21:19:30.353340 disk-uuid[528]: Primary Header is updated. Mar 17 21:19:30.353340 disk-uuid[528]: Secondary Entries is updated. Mar 17 21:19:30.353340 disk-uuid[528]: Secondary Header is updated. Mar 17 21:19:30.484136 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 21:19:30.553213 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.553334 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.554115 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.559920 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.559971 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.559992 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 21:19:30.628120 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 21:19:30.634545 kernel: usbcore: registered new interface driver usbhid Mar 17 21:19:30.634601 kernel: usbhid: USB HID core driver Mar 17 21:19:30.643320 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 17 21:19:30.643380 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 17 21:19:31.364125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 
21:19:31.365144 disk-uuid[529]: The operation has completed successfully. Mar 17 21:19:31.370111 kernel: block device autoloading is deprecated. It will be removed in Linux 5.19 Mar 17 21:19:31.426711 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 21:19:31.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.426877 systemd[1]: Finished disk-uuid.service. Mar 17 21:19:31.433185 systemd[1]: Starting verity-setup.service... Mar 17 21:19:31.452134 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 17 21:19:31.508781 systemd[1]: Found device dev-mapper-usr.device. Mar 17 21:19:31.511665 systemd[1]: Mounting sysusr-usr.mount... Mar 17 21:19:31.513352 systemd[1]: Finished verity-setup.service. Mar 17 21:19:31.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.612110 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 21:19:31.612633 systemd[1]: Mounted sysusr-usr.mount. Mar 17 21:19:31.613504 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 21:19:31.614694 systemd[1]: Starting ignition-setup.service... Mar 17 21:19:31.616402 systemd[1]: Starting parse-ip-for-networkd.service... 
Mar 17 21:19:31.634947 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 21:19:31.634997 kernel: BTRFS info (device vda6): using free space tree Mar 17 21:19:31.635016 kernel: BTRFS info (device vda6): has skinny extents Mar 17 21:19:31.657226 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 21:19:31.665489 systemd[1]: Finished ignition-setup.service. Mar 17 21:19:31.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.667376 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 21:19:31.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.773000 audit: BPF prog-id=9 op=LOAD Mar 17 21:19:31.770995 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 21:19:31.779724 systemd[1]: Starting systemd-networkd.service... Mar 17 21:19:31.840298 systemd-networkd[714]: lo: Link UP Mar 17 21:19:31.841389 systemd-networkd[714]: lo: Gained carrier Mar 17 21:19:31.843697 systemd-networkd[714]: Enumeration completed Mar 17 21:19:31.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.844534 systemd[1]: Started systemd-networkd.service. Mar 17 21:19:31.845346 systemd[1]: Reached target network.target. Mar 17 21:19:31.850868 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 21:19:31.854034 systemd-networkd[714]: eth0: Link UP Mar 17 21:19:31.854040 systemd-networkd[714]: eth0: Gained carrier Mar 17 21:19:31.854982 systemd[1]: Starting iscsiuio.service... 
Mar 17 21:19:31.875317 systemd-networkd[714]: eth0: DHCPv4 address 10.230.48.190/30, gateway 10.230.48.189 acquired from 10.230.48.189 Mar 17 21:19:31.891756 systemd[1]: Started iscsiuio.service. Mar 17 21:19:31.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.893989 systemd[1]: Starting iscsid.service... Mar 17 21:19:31.900237 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 21:19:31.900237 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 21:19:31.900237 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 21:19:31.900237 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 21:19:31.900237 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 21:19:31.900237 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 21:19:31.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.907991 ignition[640]: Ignition 2.14.0 Mar 17 21:19:31.903049 systemd[1]: Started iscsid.service. 
Mar 17 21:19:31.908012 ignition[640]: Stage: fetch-offline Mar 17 21:19:31.908163 ignition[640]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:31.911927 systemd[1]: Starting dracut-initqueue.service... Mar 17 21:19:31.908230 ignition[640]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:31.910044 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:31.910231 ignition[640]: parsed url from cmdline: "" Mar 17 21:19:31.910238 ignition[640]: no config URL provided Mar 17 21:19:31.910248 ignition[640]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 21:19:31.910264 ignition[640]: no config at "/usr/lib/ignition/user.ign" Mar 17 21:19:31.910296 ignition[640]: failed to fetch config: resource requires networking Mar 17 21:19:31.911066 ignition[640]: Ignition finished successfully Mar 17 21:19:31.920923 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 21:19:31.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.949428 systemd[1]: Starting ignition-fetch.service... Mar 17 21:19:31.965106 systemd[1]: Finished dracut-initqueue.service. Mar 17 21:19:31.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:31.965965 systemd[1]: Reached target remote-fs-pre.target. Mar 17 21:19:31.966581 ignition[721]: Ignition 2.14.0 Mar 17 21:19:31.966591 ignition[721]: Stage: fetch Mar 17 21:19:31.967725 systemd[1]: Reached target remote-cryptsetup.target. 
Mar 17 21:19:31.966743 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:31.968385 systemd[1]: Reached target remote-fs.target. Mar 17 21:19:31.966776 ignition[721]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:31.971472 systemd[1]: Starting dracut-pre-mount.service... Mar 17 21:19:31.970936 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:31.971045 ignition[721]: parsed url from cmdline: "" Mar 17 21:19:31.971052 ignition[721]: no config URL provided Mar 17 21:19:31.971062 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 21:19:31.971076 ignition[721]: no config at "/usr/lib/ignition/user.ign" Mar 17 21:19:31.974666 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 21:19:31.974697 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 21:19:31.974889 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 17 21:19:31.991234 systemd[1]: Finished dracut-pre-mount.service. Mar 17 21:19:31.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:31.992275 ignition[721]: GET result: OK Mar 17 21:19:31.993224 ignition[721]: parsing config with SHA512: de2de1e580414f56312cd702f24441e5be567ebb26d4fe3ff742becc049c2a488a6a06106af5447207a786eddbb963edf9c84fea630edd170181d12f00968479 Mar 17 21:19:32.001799 unknown[721]: fetched base config from "system" Mar 17 21:19:32.002634 unknown[721]: fetched base config from "system" Mar 17 21:19:32.003384 unknown[721]: fetched user config from "openstack" Mar 17 21:19:32.004613 ignition[721]: fetch: fetch complete Mar 17 21:19:32.005299 ignition[721]: fetch: fetch passed Mar 17 21:19:32.005364 ignition[721]: Ignition finished successfully Mar 17 21:19:32.008083 systemd[1]: Finished ignition-fetch.service. Mar 17 21:19:32.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.009996 systemd[1]: Starting ignition-kargs.service... Mar 17 21:19:32.021947 ignition[739]: Ignition 2.14.0 Mar 17 21:19:32.021966 ignition[739]: Stage: kargs Mar 17 21:19:32.022144 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:32.022180 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:32.023373 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:32.024883 ignition[739]: kargs: kargs passed Mar 17 21:19:32.025968 systemd[1]: Finished ignition-kargs.service. Mar 17 21:19:32.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.024959 ignition[739]: Ignition finished successfully Mar 17 21:19:32.029170 systemd[1]: Starting ignition-disks.service... 
Mar 17 21:19:32.038548 ignition[744]: Ignition 2.14.0 Mar 17 21:19:32.039460 ignition[744]: Stage: disks Mar 17 21:19:32.040256 ignition[744]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:32.041189 ignition[744]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:32.042612 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:32.045332 ignition[744]: disks: disks passed Mar 17 21:19:32.045526 ignition[744]: Ignition finished successfully Mar 17 21:19:32.046392 systemd[1]: Finished ignition-disks.service. Mar 17 21:19:32.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.047566 systemd[1]: Reached target initrd-root-device.target. Mar 17 21:19:32.048602 systemd[1]: Reached target local-fs-pre.target. Mar 17 21:19:32.049867 systemd[1]: Reached target local-fs.target. Mar 17 21:19:32.051058 systemd[1]: Reached target sysinit.target. Mar 17 21:19:32.052180 systemd[1]: Reached target basic.target. Mar 17 21:19:32.054628 systemd[1]: Starting systemd-fsck-root.service... Mar 17 21:19:32.074421 systemd-fsck[751]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 21:19:32.078815 systemd[1]: Finished systemd-fsck-root.service. Mar 17 21:19:32.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.080641 systemd[1]: Mounting sysroot.mount... Mar 17 21:19:32.094127 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 21:19:32.093951 systemd[1]: Mounted sysroot.mount. 
Mar 17 21:19:32.094664 systemd[1]: Reached target initrd-root-fs.target. Mar 17 21:19:32.097272 systemd[1]: Mounting sysroot-usr.mount... Mar 17 21:19:32.098443 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 21:19:32.099433 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 17 21:19:32.102385 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 21:19:32.102426 systemd[1]: Reached target ignition-diskful.target. Mar 17 21:19:32.104916 systemd[1]: Mounted sysroot-usr.mount. Mar 17 21:19:32.107834 systemd[1]: Starting initrd-setup-root.service... Mar 17 21:19:32.116377 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 21:19:32.134948 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Mar 17 21:19:32.146011 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 21:19:32.154535 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 21:19:32.227751 systemd[1]: Finished initrd-setup-root.service. Mar 17 21:19:32.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.230582 systemd[1]: Starting ignition-mount.service... Mar 17 21:19:32.232389 systemd[1]: Starting sysroot-boot.service... Mar 17 21:19:32.243241 bash[805]: umount: /sysroot/usr/share/oem: not mounted. 
Mar 17 21:19:32.260765 coreos-metadata[757]: Mar 17 21:19:32.260 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 21:19:32.265636 ignition[806]: INFO : Ignition 2.14.0 Mar 17 21:19:32.266624 ignition[806]: INFO : Stage: mount Mar 17 21:19:32.267553 ignition[806]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:32.268504 ignition[806]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:32.271244 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:32.273731 ignition[806]: INFO : mount: mount passed Mar 17 21:19:32.274543 ignition[806]: INFO : Ignition finished successfully Mar 17 21:19:32.276391 systemd[1]: Finished ignition-mount.service. Mar 17 21:19:32.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.278514 coreos-metadata[757]: Mar 17 21:19:32.278 INFO Fetch successful Mar 17 21:19:32.278514 coreos-metadata[757]: Mar 17 21:19:32.278 INFO wrote hostname srv-y0snw.gb1.brightbox.com to /sysroot/etc/hostname Mar 17 21:19:32.281650 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 21:19:32.281825 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 21:19:32.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:32.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:32.299122 systemd[1]: Finished sysroot-boot.service. Mar 17 21:19:32.530934 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 21:19:32.546516 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (815) Mar 17 21:19:32.550288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 21:19:32.550327 kernel: BTRFS info (device vda6): using free space tree Mar 17 21:19:32.550346 kernel: BTRFS info (device vda6): has skinny extents Mar 17 21:19:32.557284 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 21:19:32.559812 systemd[1]: Starting ignition-files.service... Mar 17 21:19:32.582610 ignition[835]: INFO : Ignition 2.14.0 Mar 17 21:19:32.582610 ignition[835]: INFO : Stage: files Mar 17 21:19:32.584441 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:32.584441 ignition[835]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:32.584441 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:32.587720 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Mar 17 21:19:32.588601 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 21:19:32.588601 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 21:19:32.591957 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 21:19:32.593297 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 
21:19:32.595563 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 21:19:32.594533 unknown[835]: wrote ssh authorized keys file for user: core Mar 17 21:19:32.598379 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 21:19:32.598379 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 21:19:32.775215 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 21:19:33.380726 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 21:19:33.382115 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 21:19:33.382115 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 21:19:33.511880 systemd-networkd[714]: eth0: Gained IPv6LL Mar 17 21:19:33.995649 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 21:19:34.332463 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 21:19:34.332463 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 21:19:34.340266 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 21:19:34.941385 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 21:19:35.098248 
systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c2f:24:19ff:fee6:30be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c2f:24:19ff:fee6:30be/64 assigned by NDisc. Mar 17 21:19:35.098263 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 21:19:37.048166 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 21:19:37.048166 ignition[835]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 21:19:37.048166 ignition[835]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 21:19:37.048166 ignition[835]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 21:19:37.054524 ignition[835]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 21:19:37.063719 ignition[835]: INFO : files: createResultFile: createFiles: op(11): [started] writing file 
"/sysroot/etc/.ignition-result.json" Mar 17 21:19:37.063719 ignition[835]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 21:19:37.063719 ignition[835]: INFO : files: files passed Mar 17 21:19:37.063719 ignition[835]: INFO : Ignition finished successfully Mar 17 21:19:37.082504 kernel: kauditd_printk_skb: 28 callbacks suppressed Mar 17 21:19:37.082573 kernel: audit: type=1130 audit(1742246377.065:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.062998 systemd[1]: Finished ignition-files.service. Mar 17 21:19:37.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.069287 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 21:19:37.100021 kernel: audit: type=1130 audit(1742246377.083:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.100054 kernel: audit: type=1131 audit(1742246377.083:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.100115 kernel: audit: type=1130 audit(1742246377.094:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:37.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.075888 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 21:19:37.101783 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 21:19:37.077193 systemd[1]: Starting ignition-quench.service... Mar 17 21:19:37.082317 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 21:19:37.082455 systemd[1]: Finished ignition-quench.service. Mar 17 21:19:37.093655 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 21:19:37.094839 systemd[1]: Reached target ignition-complete.target. Mar 17 21:19:37.101815 systemd[1]: Starting initrd-parse-etc.service... Mar 17 21:19:37.124584 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 21:19:37.136850 kernel: audit: type=1130 audit(1742246377.125:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.136883 kernel: audit: type=1131 audit(1742246377.125:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 21:19:37.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.124739 systemd[1]: Finished initrd-parse-etc.service. Mar 17 21:19:37.125552 systemd[1]: Reached target initrd-fs.target. Mar 17 21:19:37.126196 systemd[1]: Reached target initrd.target. Mar 17 21:19:37.136261 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 21:19:37.137543 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 21:19:37.154979 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 21:19:37.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.158360 systemd[1]: Starting initrd-cleanup.service... Mar 17 21:19:37.162869 kernel: audit: type=1130 audit(1742246377.155:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.171737 systemd[1]: Stopped target nss-lookup.target. Mar 17 21:19:37.172597 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 21:19:37.173909 systemd[1]: Stopped target timers.target. Mar 17 21:19:37.175179 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 21:19:37.181663 kernel: audit: type=1131 audit(1742246377.176:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:37.175343 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 21:19:37.176722 systemd[1]: Stopped target initrd.target. Mar 17 21:19:37.182334 systemd[1]: Stopped target basic.target. Mar 17 21:19:37.183747 systemd[1]: Stopped target ignition-complete.target. Mar 17 21:19:37.184866 systemd[1]: Stopped target ignition-diskful.target. Mar 17 21:19:37.186009 systemd[1]: Stopped target initrd-root-device.target. Mar 17 21:19:37.187237 systemd[1]: Stopped target remote-fs.target. Mar 17 21:19:37.188414 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 21:19:37.189635 systemd[1]: Stopped target sysinit.target. Mar 17 21:19:37.190978 systemd[1]: Stopped target local-fs.target. Mar 17 21:19:37.192192 systemd[1]: Stopped target local-fs-pre.target. Mar 17 21:19:37.193354 systemd[1]: Stopped target swap.target. Mar 17 21:19:37.200672 kernel: audit: type=1131 audit(1742246377.195:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.194452 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 21:19:37.194707 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 21:19:37.207611 kernel: audit: type=1131 audit(1742246377.202:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.195952 systemd[1]: Stopped target cryptsetup.target. 
Mar 17 21:19:37.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.201494 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 21:19:37.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.201736 systemd[1]: Stopped dracut-initqueue.service. Mar 17 21:19:37.202867 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 21:19:37.203099 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 21:19:37.208565 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 21:19:37.208799 systemd[1]: Stopped ignition-files.service. Mar 17 21:19:37.211298 systemd[1]: Stopping ignition-mount.service... Mar 17 21:19:37.217007 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 21:19:37.218313 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 21:19:37.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.221954 systemd[1]: Stopping sysroot-boot.service... Mar 17 21:19:37.222599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Mar 17 21:19:37.222853 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 21:19:37.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.223722 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 21:19:37.224137 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 21:19:37.229184 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 21:19:37.229315 systemd[1]: Finished initrd-cleanup.service. Mar 17 21:19:37.242403 ignition[873]: INFO : Ignition 2.14.0 Mar 17 21:19:37.242403 ignition[873]: INFO : Stage: umount Mar 17 21:19:37.242403 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 21:19:37.242403 ignition[873]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 21:19:37.242403 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 21:19:37.242403 ignition[873]: INFO : umount: umount passed Mar 17 21:19:37.242403 ignition[873]: INFO : Ignition finished successfully Mar 17 21:19:37.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:37.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.242803 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 21:19:37.242953 systemd[1]: Stopped ignition-mount.service. Mar 17 21:19:37.245335 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 21:19:37.245427 systemd[1]: Stopped ignition-disks.service. Mar 17 21:19:37.246378 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 21:19:37.246467 systemd[1]: Stopped ignition-kargs.service. Mar 17 21:19:37.250648 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 21:19:37.250734 systemd[1]: Stopped ignition-fetch.service. Mar 17 21:19:37.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.254208 systemd[1]: Stopped target network.target. Mar 17 21:19:37.255438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 21:19:37.255514 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 21:19:37.256182 systemd[1]: Stopped target paths.target. Mar 17 21:19:37.256733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 21:19:37.259196 systemd[1]: Stopped systemd-ask-password-console.path. 
Mar 17 21:19:37.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.259955 systemd[1]: Stopped target slices.target. Mar 17 21:19:37.261231 systemd[1]: Stopped target sockets.target. Mar 17 21:19:37.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.262499 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 21:19:37.283000 audit: BPF prog-id=6 op=UNLOAD Mar 17 21:19:37.262557 systemd[1]: Closed iscsid.socket. Mar 17 21:19:37.263674 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 21:19:37.263717 systemd[1]: Closed iscsiuio.socket. Mar 17 21:19:37.264738 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 21:19:37.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.264824 systemd[1]: Stopped ignition-setup.service. Mar 17 21:19:37.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.267078 systemd[1]: Stopping systemd-networkd.service... Mar 17 21:19:37.269119 systemd[1]: Stopping systemd-resolved.service... Mar 17 21:19:37.270184 systemd-networkd[714]: eth0: DHCPv6 lease lost Mar 17 21:19:37.275301 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 21:19:37.276103 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 21:19:37.276302 systemd[1]: Stopped systemd-networkd.service. 
Mar 17 21:19:37.291000 audit: BPF prog-id=9 op=UNLOAD Mar 17 21:19:37.280221 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 21:19:37.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.280423 systemd[1]: Stopped systemd-resolved.service. Mar 17 21:19:37.282396 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 21:19:37.282461 systemd[1]: Closed systemd-networkd.socket. Mar 17 21:19:37.284322 systemd[1]: Stopping network-cleanup.service... Mar 17 21:19:37.288585 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 21:19:37.288662 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 21:19:37.290010 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 21:19:37.290104 systemd[1]: Stopped systemd-sysctl.service. Mar 17 21:19:37.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.291598 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 21:19:37.291658 systemd[1]: Stopped systemd-modules-load.service. Mar 17 21:19:37.296955 systemd[1]: Stopping systemd-udevd.service... Mar 17 21:19:37.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.307249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 21:19:37.308138 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 21:19:37.308363 systemd[1]: Stopped systemd-udevd.service. 
Mar 17 21:19:37.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.312023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 21:19:37.312140 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 21:19:37.312915 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 21:19:37.312985 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 21:19:37.314368 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 21:19:37.314471 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 21:19:37.315650 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 21:19:37.315709 systemd[1]: Stopped dracut-cmdline.service. Mar 17 21:19:37.318904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 21:19:37.318988 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 21:19:37.323487 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 21:19:37.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.345268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 21:19:37.345463 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 21:19:37.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.347653 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 17 21:19:37.347840 systemd[1]: Stopped network-cleanup.service. Mar 17 21:19:37.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.349692 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 21:19:37.350341 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 21:19:37.371511 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 21:19:37.371725 systemd[1]: Stopped sysroot-boot.service. Mar 17 21:19:37.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.373471 systemd[1]: Reached target initrd-switch-root.target. Mar 17 21:19:37.374334 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 21:19:37.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:37.374416 systemd[1]: Stopped initrd-setup-root.service. Mar 17 21:19:37.376756 systemd[1]: Starting initrd-switch-root.service... Mar 17 21:19:37.393858 systemd[1]: Switching root. Mar 17 21:19:37.415855 iscsid[719]: iscsid shutting down. Mar 17 21:19:37.416691 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Mar 17 21:19:37.416794 systemd-journald[202]: Journal stopped Mar 17 21:19:42.036247 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 21:19:42.036472 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 21:19:42.036509 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 21:19:42.036572 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 21:19:42.036600 kernel: SELinux: policy capability open_perms=1 Mar 17 21:19:42.036625 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 21:19:42.036687 kernel: SELinux: policy capability always_check_network=0 Mar 17 21:19:42.036717 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 21:19:42.036742 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 21:19:42.036789 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 21:19:42.036819 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 21:19:42.036849 systemd[1]: Successfully loaded SELinux policy in 68.439ms. Mar 17 21:19:42.036942 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.381ms. Mar 17 21:19:42.036975 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 21:19:42.036997 systemd[1]: Detected virtualization kvm. Mar 17 21:19:42.037051 systemd[1]: Detected architecture x86-64. Mar 17 21:19:42.037081 systemd[1]: Detected first boot. Mar 17 21:19:42.037145 systemd[1]: Hostname set to . Mar 17 21:19:42.037179 systemd[1]: Initializing machine ID from VM UUID. Mar 17 21:19:42.037205 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 21:19:42.037234 systemd[1]: Populated /etc with preset unit settings. Mar 17 21:19:42.037262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 21:19:42.037306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 21:19:42.037335 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:19:42.037390 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 21:19:42.037419 systemd[1]: Stopped iscsiuio.service. Mar 17 21:19:42.037446 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 21:19:42.037479 systemd[1]: Stopped iscsid.service. Mar 17 21:19:42.037509 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 21:19:42.037531 systemd[1]: Stopped initrd-switch-root.service. Mar 17 21:19:42.037595 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 21:19:42.037642 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 21:19:42.037686 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 21:19:42.037708 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 17 21:19:42.037739 systemd[1]: Created slice system-getty.slice. Mar 17 21:19:42.037761 systemd[1]: Created slice system-modprobe.slice. Mar 17 21:19:42.037781 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 21:19:42.037801 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 21:19:42.037822 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 21:19:42.037849 systemd[1]: Created slice user.slice. Mar 17 21:19:42.037901 systemd[1]: Started systemd-ask-password-console.path. Mar 17 21:19:42.037936 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 21:19:42.037961 systemd[1]: Set up automount boot.automount. Mar 17 21:19:42.037982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Mar 17 21:19:42.038013 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 21:19:42.038032 systemd[1]: Stopped target initrd-fs.target. Mar 17 21:19:42.038104 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 21:19:42.043817 systemd[1]: Reached target integritysetup.target. Mar 17 21:19:42.043851 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 21:19:42.043873 systemd[1]: Reached target remote-fs.target. Mar 17 21:19:42.043900 systemd[1]: Reached target slices.target. Mar 17 21:19:42.043936 systemd[1]: Reached target swap.target. Mar 17 21:19:42.045301 systemd[1]: Reached target torcx.target. Mar 17 21:19:42.045330 systemd[1]: Reached target veritysetup.target. Mar 17 21:19:42.045376 systemd[1]: Listening on systemd-coredump.socket. Mar 17 21:19:42.045401 systemd[1]: Listening on systemd-initctl.socket. Mar 17 21:19:42.049117 systemd[1]: Listening on systemd-networkd.socket. Mar 17 21:19:42.049166 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 21:19:42.049195 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 21:19:42.049226 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 21:19:42.049246 systemd[1]: Mounting dev-hugepages.mount... Mar 17 21:19:42.049278 systemd[1]: Mounting dev-mqueue.mount... Mar 17 21:19:42.049299 systemd[1]: Mounting media.mount... Mar 17 21:19:42.049319 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:42.049355 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 21:19:42.049406 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 21:19:42.049437 systemd[1]: Mounting tmp.mount... Mar 17 21:19:42.049468 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 21:19:42.049495 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:19:42.049516 systemd[1]: Starting kmod-static-nodes.service... 
Mar 17 21:19:42.049546 systemd[1]: Starting modprobe@configfs.service... Mar 17 21:19:42.049567 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:19:42.049593 systemd[1]: Starting modprobe@drm.service... Mar 17 21:19:42.049614 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:19:42.049674 systemd[1]: Starting modprobe@fuse.service... Mar 17 21:19:42.049699 systemd[1]: Starting modprobe@loop.service... Mar 17 21:19:42.049719 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 21:19:42.049741 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 21:19:42.049768 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 21:19:42.049789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 21:19:42.049815 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 21:19:42.049835 systemd[1]: Stopped systemd-journald.service. Mar 17 21:19:42.049861 systemd[1]: Starting systemd-journald.service... Mar 17 21:19:42.049907 kernel: fuse: init (API version 7.34) Mar 17 21:19:42.049948 systemd[1]: Starting systemd-modules-load.service... Mar 17 21:19:42.049969 systemd[1]: Starting systemd-network-generator.service... Mar 17 21:19:42.050007 systemd[1]: Starting systemd-remount-fs.service... Mar 17 21:19:42.050028 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 21:19:42.050052 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 21:19:42.050078 systemd[1]: Stopped verity-setup.service. Mar 17 21:19:42.050133 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:42.050166 kernel: loop: module loaded Mar 17 21:19:42.050213 systemd[1]: Mounted dev-hugepages.mount. Mar 17 21:19:42.050238 systemd[1]: Mounted dev-mqueue.mount. Mar 17 21:19:42.050266 systemd[1]: Mounted media.mount. Mar 17 21:19:42.050287 systemd[1]: Mounted sys-kernel-debug.mount. 
Mar 17 21:19:42.050309 systemd-journald[975]: Journal started Mar 17 21:19:42.050400 systemd-journald[975]: Runtime Journal (/run/log/journal/b7df7fb1a06c41ad903e58a7026df6be) is 4.7M, max 38.1M, 33.3M free. Mar 17 21:19:37.611000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 21:19:37.682000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 21:19:37.682000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 21:19:37.682000 audit: BPF prog-id=10 op=LOAD Mar 17 21:19:37.682000 audit: BPF prog-id=10 op=UNLOAD Mar 17 21:19:37.682000 audit: BPF prog-id=11 op=LOAD Mar 17 21:19:37.682000 audit: BPF prog-id=11 op=UNLOAD Mar 17 21:19:37.795000 audit[905]: AVC avc: denied { associate } for pid=905 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 21:19:37.795000 audit[905]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:19:37.795000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 21:19:37.798000 audit[905]: AVC avc: denied { associate } for 
pid=905 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 21:19:37.798000 audit[905]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a5 a2=1ed a3=0 items=2 ppid=888 pid=905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:19:37.798000 audit: CWD cwd="/" Mar 17 21:19:37.798000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:37.798000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:37.798000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 21:19:41.778000 audit: BPF prog-id=12 op=LOAD Mar 17 21:19:41.778000 audit: BPF prog-id=3 op=UNLOAD Mar 17 21:19:41.778000 audit: BPF prog-id=13 op=LOAD Mar 17 21:19:41.778000 audit: BPF prog-id=14 op=LOAD Mar 17 21:19:41.779000 audit: BPF prog-id=4 op=UNLOAD Mar 17 21:19:41.779000 audit: BPF prog-id=5 op=UNLOAD Mar 17 21:19:41.782000 audit: BPF prog-id=15 op=LOAD Mar 17 21:19:41.782000 audit: BPF prog-id=12 op=UNLOAD Mar 17 21:19:41.782000 audit: BPF prog-id=16 op=LOAD Mar 17 21:19:41.782000 audit: BPF prog-id=17 op=LOAD Mar 17 21:19:41.782000 audit: BPF prog-id=13 op=UNLOAD Mar 17 21:19:41.782000 audit: BPF prog-id=14 op=UNLOAD Mar 17 21:19:41.783000 audit: BPF 
prog-id=18 op=LOAD Mar 17 21:19:41.783000 audit: BPF prog-id=15 op=UNLOAD Mar 17 21:19:41.784000 audit: BPF prog-id=19 op=LOAD Mar 17 21:19:41.784000 audit: BPF prog-id=20 op=LOAD Mar 17 21:19:41.784000 audit: BPF prog-id=16 op=UNLOAD Mar 17 21:19:41.784000 audit: BPF prog-id=17 op=UNLOAD Mar 17 21:19:41.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.794000 audit: BPF prog-id=18 op=UNLOAD Mar 17 21:19:41.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:42.056175 systemd[1]: Started systemd-journald.service. Mar 17 21:19:41.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.972000 audit: BPF prog-id=21 op=LOAD Mar 17 21:19:41.972000 audit: BPF prog-id=22 op=LOAD Mar 17 21:19:41.972000 audit: BPF prog-id=23 op=LOAD Mar 17 21:19:41.972000 audit: BPF prog-id=19 op=UNLOAD Mar 17 21:19:41.972000 audit: BPF prog-id=20 op=UNLOAD Mar 17 21:19:42.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.033000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 21:19:42.033000 audit[975]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffeceef9a30 a2=4000 a3=7ffeceef9acc items=0 ppid=1 pid=975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:19:42.033000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 21:19:42.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:42.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:41.773828 systemd[1]: Queued start job for default target multi-user.target. Mar 17 21:19:37.792399 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:19:41.773858 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 21:19:37.793115 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 21:19:41.786112 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 21:19:37.793171 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 21:19:42.053875 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 21:19:37.793250 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 21:19:42.055303 systemd[1]: Mounted tmp.mount. Mar 17 21:19:37.793270 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 21:19:42.056166 systemd[1]: Finished kmod-static-nodes.service. 
Mar 17 21:19:37.793322 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 21:19:42.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.073877 kernel: kauditd_printk_skb: 93 callbacks suppressed Mar 17 21:19:42.073913 kernel: audit: type=1130 audit(1742246382.066:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:42.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.057127 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 21:19:37.793345 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 21:19:42.057378 systemd[1]: Finished modprobe@configfs.service. Mar 17 21:19:37.793805 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 21:19:42.061609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 17 21:19:37.793871 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 21:19:42.061862 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:19:37.793896 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 21:19:42.062901 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 21:19:37.794697 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 21:19:42.063083 systemd[1]: Finished modprobe@drm.service. Mar 17 21:19:37.794770 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 21:19:42.064112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:19:37.794805 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 21:19:42.064315 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:19:37.794833 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 21:19:42.065370 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 17 21:19:37.794865 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 21:19:42.065538 systemd[1]: Finished modprobe@fuse.service. Mar 17 21:19:37.794891 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 21:19:42.066663 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:19:41.178275 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:19:42.066845 systemd[1]: Finished modprobe@loop.service. Mar 17 21:19:41.179718 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:19:41.180103 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:19:41.180715 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker 
path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 21:19:41.180884 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 21:19:41.181134 /usr/lib/systemd/system-generators/torcx-generator[905]: time="2025-03-17T21:19:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 21:19:42.084641 kernel: audit: type=1131 audit(1742246382.066:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.084706 kernel: audit: type=1130 audit(1742246382.073:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.085307 systemd[1]: Finished systemd-modules-load.service. Mar 17 21:19:42.088279 systemd[1]: Finished systemd-network-generator.service. Mar 17 21:19:42.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.093153 systemd[1]: Finished systemd-remount-fs.service. Mar 17 21:19:42.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:42.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.097130 kernel: audit: type=1131 audit(1742246382.073:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.097177 kernel: audit: type=1130 audit(1742246382.085:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.097204 kernel: audit: type=1130 audit(1742246382.088:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.116111 kernel: audit: type=1130 audit(1742246382.110:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.111368 systemd[1]: Reached target network-pre.target. Mar 17 21:19:42.118776 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 21:19:42.122178 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 21:19:42.126572 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 17 21:19:42.130444 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 21:19:42.135575 systemd[1]: Starting systemd-journal-flush.service... Mar 17 21:19:42.136408 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:19:42.139805 systemd[1]: Starting systemd-random-seed.service... Mar 17 21:19:42.140773 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:19:42.144070 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:19:42.150869 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 21:19:42.157137 kernel: audit: type=1130 audit(1742246382.151:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.153570 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 21:19:42.157790 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 21:19:42.160389 systemd[1]: Starting systemd-sysusers.service... Mar 17 21:19:42.161175 systemd-journald[975]: Time spent on flushing to /var/log/journal/b7df7fb1a06c41ad903e58a7026df6be is 57.588ms for 1307 entries. Mar 17 21:19:42.161175 systemd-journald[975]: System Journal (/var/log/journal/b7df7fb1a06c41ad903e58a7026df6be) is 8.0M, max 584.8M, 576.8M free. Mar 17 21:19:42.236701 systemd-journald[975]: Received client request to flush runtime journal. Mar 17 21:19:42.236775 kernel: audit: type=1130 audit(1742246382.182:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 21:19:42.236813 kernel: audit: type=1130 audit(1742246382.203:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.181767 systemd[1]: Finished systemd-random-seed.service. Mar 17 21:19:42.182653 systemd[1]: Reached target first-boot-complete.target. Mar 17 21:19:42.203039 systemd[1]: Finished systemd-sysctl.service. Mar 17 21:19:42.226617 systemd[1]: Finished systemd-sysusers.service. Mar 17 21:19:42.238072 systemd[1]: Finished systemd-journal-flush.service. Mar 17 21:19:42.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.320905 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 21:19:42.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:42.323954 systemd[1]: Starting systemd-udev-settle.service... Mar 17 21:19:42.335739 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 21:19:42.977072 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 21:19:42.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:42.983000 audit: BPF prog-id=24 op=LOAD Mar 17 21:19:42.983000 audit: BPF prog-id=25 op=LOAD Mar 17 21:19:42.983000 audit: BPF prog-id=7 op=UNLOAD Mar 17 21:19:42.983000 audit: BPF prog-id=8 op=UNLOAD Mar 17 21:19:42.987020 systemd[1]: Starting systemd-udevd.service... Mar 17 21:19:43.018443 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Mar 17 21:19:43.053475 systemd[1]: Started systemd-udevd.service. Mar 17 21:19:43.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.059000 audit: BPF prog-id=26 op=LOAD Mar 17 21:19:43.061345 systemd[1]: Starting systemd-networkd.service... Mar 17 21:19:43.094000 audit: BPF prog-id=27 op=LOAD Mar 17 21:19:43.095000 audit: BPF prog-id=28 op=LOAD Mar 17 21:19:43.095000 audit: BPF prog-id=29 op=LOAD Mar 17 21:19:43.096552 systemd[1]: Starting systemd-userdbd.service... Mar 17 21:19:43.146949 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 21:19:43.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:43.155452 systemd[1]: Started systemd-userdbd.service. Mar 17 21:19:43.202537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 21:19:43.267129 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 21:19:43.275115 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 21:19:43.290153 kernel: ACPI: button: Power Button [PWRF] Mar 17 21:19:43.301638 systemd-networkd[1026]: lo: Link UP Mar 17 21:19:43.301651 systemd-networkd[1026]: lo: Gained carrier Mar 17 21:19:43.302636 systemd-networkd[1026]: Enumeration completed Mar 17 21:19:43.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.302778 systemd[1]: Started systemd-networkd.service. Mar 17 21:19:43.303637 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 21:19:43.306895 systemd-networkd[1026]: eth0: Link UP Mar 17 21:19:43.306909 systemd-networkd[1026]: eth0: Gained carrier Mar 17 21:19:43.329415 systemd-networkd[1026]: eth0: DHCPv4 address 10.230.48.190/30, gateway 10.230.48.189 acquired from 10.230.48.189 Mar 17 21:19:43.337000 audit[1030]: AVC avc: denied { confidentiality } for pid=1030 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 21:19:43.337000 audit[1030]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558ec0b38c20 a1=338ac a2=7f0a30e46bc5 a3=5 items=110 ppid=1016 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:19:43.337000 audit: CWD cwd="/" Mar 17 21:19:43.337000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=1 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=2 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=3 name=(null) inode=16095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=4 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
21:19:43.337000 audit: PATH item=5 name=(null) inode=16096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=6 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=7 name=(null) inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=8 name=(null) inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=9 name=(null) inode=16098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=10 name=(null) inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=11 name=(null) inode=16099 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=12 name=(null) inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=13 name=(null) inode=16100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=14 name=(null) 
inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=15 name=(null) inode=16101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=16 name=(null) inode=16097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=17 name=(null) inode=16102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=18 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=19 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=20 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=21 name=(null) inode=16104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=22 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=23 name=(null) inode=16105 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=24 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=25 name=(null) inode=16106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=26 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=27 name=(null) inode=16107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=28 name=(null) inode=16103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=29 name=(null) inode=16108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=30 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=31 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=32 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=33 name=(null) inode=16110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=34 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=35 name=(null) inode=16111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=36 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=37 name=(null) inode=16112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=38 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=39 name=(null) inode=16113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=40 name=(null) inode=16109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=41 name=(null) inode=16114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=42 name=(null) inode=16094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=43 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=44 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=45 name=(null) inode=16116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=46 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=47 name=(null) inode=16117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=48 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=49 name=(null) inode=16118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=50 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=51 name=(null) inode=16119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=52 name=(null) inode=16115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=53 name=(null) inode=16120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=55 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=56 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=57 name=(null) inode=16122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=58 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=59 name=(null) inode=16123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
21:19:43.337000 audit: PATH item=60 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=61 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=62 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=63 name=(null) inode=16125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=64 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=65 name=(null) inode=16126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=66 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=67 name=(null) inode=16127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=68 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=69 
name=(null) inode=16128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=70 name=(null) inode=16124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=71 name=(null) inode=16129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=72 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=73 name=(null) inode=16130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=74 name=(null) inode=16130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=75 name=(null) inode=16131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=76 name=(null) inode=16130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=77 name=(null) inode=16132 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=78 name=(null) inode=16130 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=79 name=(null) inode=16133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=80 name=(null) inode=16130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=81 name=(null) inode=16134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=82 name=(null) inode=16130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=83 name=(null) inode=16135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=84 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=85 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=86 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=87 name=(null) inode=16137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=88 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=89 name=(null) inode=16138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=90 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=91 name=(null) inode=16139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=92 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=93 name=(null) inode=16140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=94 name=(null) inode=16136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=95 name=(null) inode=16141 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=96 name=(null) inode=16121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=97 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=98 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=99 name=(null) inode=16143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=100 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=101 name=(null) inode=16144 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=102 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=103 name=(null) inode=16145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=104 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=105 name=(null) inode=16146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=106 name=(null) inode=16142 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=107 name=(null) inode=16147 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PATH item=109 name=(null) inode=16148 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 21:19:43.337000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 21:19:43.379115 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 21:19:43.394118 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 21:19:43.405601 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 21:19:43.405883 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 21:19:43.540801 systemd[1]: Finished systemd-udev-settle.service. Mar 17 21:19:43.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.543513 systemd[1]: Starting lvm2-activation-early.service... Mar 17 21:19:43.567572 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 21:19:43.601750 systemd[1]: Finished lvm2-activation-early.service. 
Mar 17 21:19:43.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.602703 systemd[1]: Reached target cryptsetup.target. Mar 17 21:19:43.605206 systemd[1]: Starting lvm2-activation.service... Mar 17 21:19:43.611285 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 21:19:43.640702 systemd[1]: Finished lvm2-activation.service. Mar 17 21:19:43.641564 systemd[1]: Reached target local-fs-pre.target. Mar 17 21:19:43.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.642220 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 21:19:43.642268 systemd[1]: Reached target local-fs.target. Mar 17 21:19:43.642847 systemd[1]: Reached target machines.target. Mar 17 21:19:43.645282 systemd[1]: Starting ldconfig.service... Mar 17 21:19:43.646666 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:19:43.646746 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:43.649439 systemd[1]: Starting systemd-boot-update.service... Mar 17 21:19:43.652692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 21:19:43.658508 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 21:19:43.662954 systemd[1]: Starting systemd-sysext.service... 
Mar 17 21:19:43.664228 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl) Mar 17 21:19:43.669305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 21:19:43.680043 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 21:19:43.692956 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 21:19:43.693243 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 21:19:43.789161 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 21:19:43.949937 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 21:19:43.950974 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 21:19:43.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:43.970665 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 21:19:43.994121 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 21:19:44.011420 (sd-sysext)[1060]: Using extensions 'kubernetes'. Mar 17 21:19:44.013333 (sd-sysext)[1060]: Merged extensions into '/usr'. Mar 17 21:19:44.028220 systemd-fsck[1057]: fsck.fat 4.2 (2021-01-31) Mar 17 21:19:44.028220 systemd-fsck[1057]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 21:19:44.053285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 21:19:44.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.054764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Mar 17 21:19:44.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.066422 systemd[1]: Mounting boot.mount... Mar 17 21:19:44.067017 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:44.071271 systemd[1]: Mounting usr-share-oem.mount... Mar 17 21:19:44.072147 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.074038 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:19:44.076383 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:19:44.079523 systemd[1]: Starting modprobe@loop.service... Mar 17 21:19:44.081235 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.081415 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:44.081567 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:44.087884 systemd[1]: Mounted usr-share-oem.mount. Mar 17 21:19:44.089585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:19:44.089824 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:19:44.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:44.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.091649 systemd[1]: Finished systemd-sysext.service. Mar 17 21:19:44.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.093053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:19:44.093274 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:19:44.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.095845 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:19:44.096531 systemd[1]: Finished modprobe@loop.service. Mar 17 21:19:44.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.100614 systemd[1]: Mounted boot.mount. Mar 17 21:19:44.109788 systemd[1]: Starting ensure-sysext.service... 
Mar 17 21:19:44.113357 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:19:44.113764 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.117935 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 21:19:44.130788 systemd[1]: Reloading. Mar 17 21:19:44.153040 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 21:19:44.160584 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 21:19:44.172362 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 21:19:44.391539 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2025-03-17T21:19:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:19:44.392358 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2025-03-17T21:19:44Z" level=info msg="torcx already run" Mar 17 21:19:44.431240 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 21:19:44.468103 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:19:44.468442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Mar 17 21:19:44.500341 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 21:19:44.594000 audit: BPF prog-id=30 op=LOAD Mar 17 21:19:44.594000 audit: BPF prog-id=27 op=UNLOAD Mar 17 21:19:44.594000 audit: BPF prog-id=31 op=LOAD Mar 17 21:19:44.594000 audit: BPF prog-id=32 op=LOAD Mar 17 21:19:44.595000 audit: BPF prog-id=28 op=UNLOAD Mar 17 21:19:44.595000 audit: BPF prog-id=29 op=UNLOAD Mar 17 21:19:44.596000 audit: BPF prog-id=33 op=LOAD Mar 17 21:19:44.596000 audit: BPF prog-id=26 op=UNLOAD Mar 17 21:19:44.599000 audit: BPF prog-id=34 op=LOAD Mar 17 21:19:44.599000 audit: BPF prog-id=21 op=UNLOAD Mar 17 21:19:44.599000 audit: BPF prog-id=35 op=LOAD Mar 17 21:19:44.599000 audit: BPF prog-id=36 op=LOAD Mar 17 21:19:44.599000 audit: BPF prog-id=22 op=UNLOAD Mar 17 21:19:44.600000 audit: BPF prog-id=23 op=UNLOAD Mar 17 21:19:44.601000 audit: BPF prog-id=37 op=LOAD Mar 17 21:19:44.601000 audit: BPF prog-id=38 op=LOAD Mar 17 21:19:44.601000 audit: BPF prog-id=24 op=UNLOAD Mar 17 21:19:44.601000 audit: BPF prog-id=25 op=UNLOAD Mar 17 21:19:44.607864 systemd[1]: Finished ldconfig.service. Mar 17 21:19:44.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.609670 systemd[1]: Finished systemd-boot-update.service. Mar 17 21:19:44.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.612682 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Mar 17 21:19:44.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.619431 systemd[1]: Starting audit-rules.service... Mar 17 21:19:44.622432 systemd[1]: Starting clean-ca-certificates.service... Mar 17 21:19:44.628305 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 21:19:44.630000 audit: BPF prog-id=39 op=LOAD Mar 17 21:19:44.636000 audit: BPF prog-id=40 op=LOAD Mar 17 21:19:44.632051 systemd[1]: Starting systemd-resolved.service... Mar 17 21:19:44.639011 systemd[1]: Starting systemd-timesyncd.service... Mar 17 21:19:44.643391 systemd[1]: Starting systemd-update-utmp.service... Mar 17 21:19:44.645215 systemd[1]: Finished clean-ca-certificates.service. Mar 17 21:19:44.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.648887 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 21:19:44.653000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.658871 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 21:19:44.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.663137 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:19:44.666960 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:19:44.671629 systemd[1]: Starting modprobe@loop.service... Mar 17 21:19:44.672422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 21:19:44.672807 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:44.673198 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 21:19:44.678370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:19:44.678562 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:19:44.679995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:19:44.680209 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:19:44.681760 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:19:44.681917 systemd[1]: Finished modprobe@loop.service. Mar 17 21:19:44.687276 systemd[1]: Finished systemd-update-utmp.service. Mar 17 21:19:44.704598 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:44.704969 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.708021 systemd[1]: Starting modprobe@dm_mod.service... 
Mar 17 21:19:44.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 21:19:44.712932 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:19:44.715970 systemd[1]: Starting modprobe@loop.service... Mar 17 21:19:44.719144 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.719320 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:44.719497 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 21:19:44.719642 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:44.721606 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 21:19:44.722943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:19:44.723143 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 21:19:44.724462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:19:44.724639 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:19:44.726759 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:19:44.726956 systemd[1]: Finished modprobe@loop.service. Mar 17 21:19:44.733641 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 21:19:44.734006 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.735979 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 21:19:44.740410 systemd[1]: Starting modprobe@drm.service... Mar 17 21:19:44.744542 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 21:19:44.764495 systemd[1]: Starting modprobe@loop.service... Mar 17 21:19:44.765526 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.765851 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:44.768695 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 21:19:44.776501 systemd[1]: Starting systemd-update-done.service... Mar 17 21:19:44.777513 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 21:19:44.777861 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 17 21:19:44.780239 systemd-networkd[1026]: eth0: Gained IPv6LL Mar 17 21:19:44.784000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 21:19:44.784000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe66144280 a2=420 a3=0 items=0 ppid=1136 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 21:19:44.784000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 21:19:44.785711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 21:19:44.787470 augenrules[1165]: No rules Mar 17 21:19:44.785934 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 21:19:44.787451 systemd[1]: Finished audit-rules.service. Mar 17 21:19:44.788656 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 21:19:44.788842 systemd[1]: Finished modprobe@drm.service. Mar 17 21:19:44.790160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 21:19:44.790352 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 21:19:44.794996 systemd[1]: Finished ensure-sysext.service. Mar 17 21:19:44.798056 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 21:19:44.798647 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 21:19:44.798836 systemd[1]: Finished modprobe@loop.service. Mar 17 21:19:44.799677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.811998 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 21:19:44.814497 systemd[1]: Finished systemd-update-done.service. Mar 17 21:19:44.825820 systemd[1]: Started systemd-timesyncd.service. 
Mar 17 21:19:44.826736 systemd[1]: Reached target time-set.target. Mar 17 21:19:44.854414 systemd-resolved[1139]: Positive Trust Anchors: Mar 17 21:19:44.854437 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 21:19:44.854475 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 21:19:44.864054 systemd-resolved[1139]: Using system hostname 'srv-y0snw.gb1.brightbox.com'. Mar 17 21:19:44.866911 systemd[1]: Started systemd-resolved.service. Mar 17 21:19:44.867687 systemd[1]: Reached target network.target. Mar 17 21:19:44.868288 systemd[1]: Reached target network-online.target. Mar 17 21:19:44.868882 systemd[1]: Reached target nss-lookup.target. Mar 17 21:19:44.869534 systemd[1]: Reached target sysinit.target. Mar 17 21:19:44.870277 systemd[1]: Started motdgen.path. Mar 17 21:19:44.870876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 21:19:44.872008 systemd[1]: Started logrotate.timer. Mar 17 21:19:44.872779 systemd[1]: Started mdadm.timer. Mar 17 21:19:44.873412 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 21:19:44.874034 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 21:19:44.874095 systemd[1]: Reached target paths.target. Mar 17 21:19:44.874671 systemd[1]: Reached target timers.target. Mar 17 21:19:44.875787 systemd[1]: Listening on dbus.socket. Mar 17 21:19:44.878007 systemd[1]: Starting docker.socket... 
Mar 17 21:19:44.882542 systemd[1]: Listening on sshd.socket. Mar 17 21:19:44.883323 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 21:19:44.883953 systemd[1]: Listening on docker.socket. Mar 17 21:19:44.884701 systemd[1]: Reached target sockets.target. Mar 17 21:19:44.885301 systemd[1]: Reached target basic.target. Mar 17 21:19:44.885953 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.886008 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 21:19:44.887542 systemd[1]: Starting containerd.service... Mar 17 21:19:44.889659 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 21:19:44.892027 systemd[1]: Starting dbus.service... Mar 17 21:19:44.896138 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 21:19:44.903145 systemd[1]: Starting extend-filesystems.service... Mar 17 21:19:44.906297 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 21:19:44.908633 systemd[1]: Starting kubelet.service... Mar 17 21:19:44.912672 jq[1180]: false Mar 17 21:19:44.912928 systemd[1]: Starting motdgen.service... Mar 17 21:19:44.919271 systemd[1]: Starting prepare-helm.service... Mar 17 21:19:44.921699 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 21:19:44.925425 systemd[1]: Starting sshd-keygen.service... Mar 17 21:19:44.933009 systemd[1]: Starting systemd-logind.service... Mar 17 21:19:44.933770 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 21:19:44.933944 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 21:19:44.934775 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 21:19:44.937351 systemd[1]: Starting update-engine.service... Mar 17 21:19:44.943265 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 21:19:44.951144 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 21:19:44.951411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 21:19:44.954562 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 21:19:44.954844 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 21:19:44.955678 jq[1195]: true Mar 17 21:19:45.002101 tar[1199]: linux-amd64/helm Mar 17 21:19:45.021808 dbus-daemon[1178]: [system] SELinux support is enabled Mar 17 21:19:45.028484 systemd[1]: Started dbus.service. Mar 17 21:19:45.031940 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 21:19:45.032003 systemd[1]: Reached target system-config.target. Mar 17 21:19:45.032678 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 21:19:45.032712 systemd[1]: Reached target user-config.target. 
Mar 17 21:19:45.033693 jq[1201]: true Mar 17 21:19:45.035023 dbus-daemon[1178]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1026 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 21:19:45.044715 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 21:19:45.057180 systemd[1]: Starting systemd-hostnamed.service... Mar 17 21:19:45.059443 extend-filesystems[1182]: Found loop1 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda1 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda2 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda3 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found usr Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda4 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda6 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda7 Mar 17 21:19:45.060478 extend-filesystems[1182]: Found vda9 Mar 17 21:19:45.060478 extend-filesystems[1182]: Checking size of /dev/vda9 Mar 17 21:19:45.073321 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 21:19:45.073579 systemd[1]: Finished motdgen.service. Mar 17 21:19:45.113599 extend-filesystems[1182]: Resized partition /dev/vda9 Mar 17 21:19:45.131987 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 21:19:45.143353 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 17 21:19:45.145294 update_engine[1192]: I0317 21:19:45.144069 1192 main.cc:92] Flatcar Update Engine starting Mar 17 21:19:45.187865 systemd[1]: Started update-engine.service. Mar 17 21:19:45.192051 systemd[1]: Started locksmithd.service. 
Mar 17 21:19:45.193704 update_engine[1192]: I0317 21:19:45.193550 1192 update_check_scheduler.cc:74] Next update check in 6m11s Mar 17 21:19:45.257691 bash[1234]: Updated "/home/core/.ssh/authorized_keys" Mar 17 21:19:45.258312 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 21:19:45.288162 systemd-logind[1190]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 21:19:45.288232 systemd-logind[1190]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 21:19:45.301224 systemd-logind[1190]: New seat seat0. Mar 17 21:19:45.309941 systemd[1]: Started systemd-logind.service. Mar 17 21:19:45.367134 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 21:19:45.382463 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 21:19:45.382893 systemd[1]: Started systemd-hostnamed.service. Mar 17 21:19:45.392862 env[1202]: time="2025-03-17T21:19:45.389943840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 21:19:45.393326 extend-filesystems[1226]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 21:19:45.393326 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 21:19:45.393326 extend-filesystems[1226]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 21:19:45.397020 extend-filesystems[1182]: Resized filesystem in /dev/vda9 Mar 17 21:19:45.396812 dbus-daemon[1178]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1211 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 21:19:45.393759 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 21:19:45.393986 systemd[1]: Finished extend-filesystems.service. Mar 17 21:19:45.400988 systemd[1]: Starting polkit.service... 
Mar 17 21:19:45.446945 systemd-networkd[1026]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c2f:24:19ff:fee6:30be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c2f:24:19ff:fee6:30be/64 assigned by NDisc. Mar 17 21:19:45.446958 systemd-networkd[1026]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 21:19:45.464833 env[1202]: time="2025-03-17T21:19:45.464771873Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 21:19:45.465078 env[1202]: time="2025-03-17T21:19:45.465047091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468188 env[1202]: time="2025-03-17T21:19:45.468136590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468188 env[1202]: time="2025-03-17T21:19:45.468184532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468728 env[1202]: time="2025-03-17T21:19:45.468688927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468728 env[1202]: time="2025-03-17T21:19:45.468726023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468851 env[1202]: time="2025-03-17T21:19:45.468755229Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 21:19:45.468851 env[1202]: time="2025-03-17T21:19:45.468775002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 21:19:45.468965 env[1202]: time="2025-03-17T21:19:45.468936150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:19:45.470208 env[1202]: time="2025-03-17T21:19:45.470175662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 21:19:45.470404 env[1202]: time="2025-03-17T21:19:45.470369902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 21:19:45.470462 env[1202]: time="2025-03-17T21:19:45.470405542Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 21:19:45.470521 env[1202]: time="2025-03-17T21:19:45.470502583Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 21:19:45.470749 env[1202]: time="2025-03-17T21:19:45.470531185Z" level=info msg="metadata content store policy set" policy=shared Mar 17 21:19:45.473909 env[1202]: time="2025-03-17T21:19:45.473874119Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 21:19:45.473981 env[1202]: time="2025-03-17T21:19:45.473920768Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 17 21:19:45.473981 env[1202]: time="2025-03-17T21:19:45.473967348Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 21:19:45.474122 env[1202]: time="2025-03-17T21:19:45.474075107Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474212 env[1202]: time="2025-03-17T21:19:45.474126962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474212 env[1202]: time="2025-03-17T21:19:45.474159551Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474212 env[1202]: time="2025-03-17T21:19:45.474191907Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474341 env[1202]: time="2025-03-17T21:19:45.474246289Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474341 env[1202]: time="2025-03-17T21:19:45.474276822Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474341 env[1202]: time="2025-03-17T21:19:45.474313345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474449 env[1202]: time="2025-03-17T21:19:45.474341066Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.474449 env[1202]: time="2025-03-17T21:19:45.474371552Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 21:19:45.474633 env[1202]: time="2025-03-17T21:19:45.474587426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 21:19:45.474836 env[1202]: time="2025-03-17T21:19:45.474802468Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 21:19:45.475477 env[1202]: time="2025-03-17T21:19:45.475442256Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 21:19:45.475693 env[1202]: time="2025-03-17T21:19:45.475662854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.475772 env[1202]: time="2025-03-17T21:19:45.475710947Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 21:19:45.475888 env[1202]: time="2025-03-17T21:19:45.475855917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.475975 env[1202]: time="2025-03-17T21:19:45.475892606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.475975 env[1202]: time="2025-03-17T21:19:45.475921610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.475975 env[1202]: time="2025-03-17T21:19:45.475962259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476101 env[1202]: time="2025-03-17T21:19:45.475991232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476101 env[1202]: time="2025-03-17T21:19:45.476013643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476101 env[1202]: time="2025-03-17T21:19:45.476032062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 17 21:19:45.476101 env[1202]: time="2025-03-17T21:19:45.476061043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476299 env[1202]: time="2025-03-17T21:19:45.476105949Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476341640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476382770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476405990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476424305Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476451345Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 21:19:45.476479 env[1202]: time="2025-03-17T21:19:45.476472261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 21:19:45.476771 env[1202]: time="2025-03-17T21:19:45.476537415Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 21:19:45.476771 env[1202]: time="2025-03-17T21:19:45.476636279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 21:19:45.477253 env[1202]: time="2025-03-17T21:19:45.477167703Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 21:19:45.479842 env[1202]: time="2025-03-17T21:19:45.477273399Z" level=info msg="Connect containerd service" Mar 17 21:19:45.479842 env[1202]: time="2025-03-17T21:19:45.477361696Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 21:19:45.479842 env[1202]: time="2025-03-17T21:19:45.478452282Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 21:19:45.479842 env[1202]: time="2025-03-17T21:19:45.479073388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 21:19:45.479842 env[1202]: time="2025-03-17T21:19:45.479177560Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 21:19:45.479398 systemd[1]: Started containerd.service. 
Mar 17 21:19:45.481577 env[1202]: time="2025-03-17T21:19:45.481492151Z" level=info msg="Start subscribing containerd event" Mar 17 21:19:45.481665 env[1202]: time="2025-03-17T21:19:45.481615023Z" level=info msg="Start recovering state" Mar 17 21:19:45.481785 env[1202]: time="2025-03-17T21:19:45.481748376Z" level=info msg="Start event monitor" Mar 17 21:19:45.481846 env[1202]: time="2025-03-17T21:19:45.481794645Z" level=info msg="Start snapshots syncer" Mar 17 21:19:45.481846 env[1202]: time="2025-03-17T21:19:45.481823743Z" level=info msg="Start cni network conf syncer for default" Mar 17 21:19:45.481846 env[1202]: time="2025-03-17T21:19:45.481839479Z" level=info msg="Start streaming server" Mar 17 21:19:45.491388 env[1202]: time="2025-03-17T21:19:45.489969704Z" level=info msg="containerd successfully booted in 0.150897s" Mar 17 21:19:45.494876 polkitd[1241]: Started polkitd version 121 Mar 17 21:19:45.531793 polkitd[1241]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 21:19:45.535177 polkitd[1241]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 21:19:45.541262 polkitd[1241]: Finished loading, compiling and executing 2 rules Mar 17 21:19:45.542878 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 21:19:45.543492 systemd[1]: Started polkit.service. Mar 17 21:19:45.546149 polkitd[1241]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 21:19:45.595803 systemd-hostnamed[1211]: Hostname set to (static) Mar 17 21:19:46.250716 systemd-timesyncd[1141]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org). Mar 17 21:19:46.251152 systemd-timesyncd[1141]: Initial clock synchronization to Mon 2025-03-17 21:19:46.237667 UTC. Mar 17 21:19:46.495126 tar[1199]: linux-amd64/LICENSE Mar 17 21:19:46.497074 tar[1199]: linux-amd64/README.md Mar 17 21:19:46.509120 systemd[1]: Finished prepare-helm.service. 
Mar 17 21:19:46.628952 locksmithd[1235]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 21:19:47.040865 systemd[1]: Started kubelet.service. Mar 17 21:19:47.902896 kubelet[1259]: E0317 21:19:47.900030 1259 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:19:47.909233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:19:47.909649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:19:47.910986 systemd[1]: kubelet.service: Consumed 1.548s CPU time. Mar 17 21:19:47.969430 sshd_keygen[1200]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 21:19:48.002777 systemd[1]: Finished sshd-keygen.service. Mar 17 21:19:48.011945 systemd[1]: Starting issuegen.service... Mar 17 21:19:48.021645 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 21:19:48.021935 systemd[1]: Finished issuegen.service. Mar 17 21:19:48.025518 systemd[1]: Starting systemd-user-sessions.service... Mar 17 21:19:48.038906 systemd[1]: Finished systemd-user-sessions.service. Mar 17 21:19:48.043204 systemd[1]: Started getty@tty1.service. Mar 17 21:19:48.046191 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 21:19:48.047373 systemd[1]: Reached target getty.target. 
Mar 17 21:19:52.160541 coreos-metadata[1177]: Mar 17 21:19:52.159 WARN failed to locate config-drive, using the metadata service API instead Mar 17 21:19:52.212966 coreos-metadata[1177]: Mar 17 21:19:52.212 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 21:19:52.244542 coreos-metadata[1177]: Mar 17 21:19:52.244 INFO Fetch successful Mar 17 21:19:52.245177 coreos-metadata[1177]: Mar 17 21:19:52.244 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 21:19:52.298864 coreos-metadata[1177]: Mar 17 21:19:52.298 INFO Fetch successful Mar 17 21:19:52.301200 unknown[1177]: wrote ssh authorized keys file for user: core Mar 17 21:19:52.315376 update-ssh-keys[1282]: Updated "/home/core/.ssh/authorized_keys" Mar 17 21:19:52.316845 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 21:19:52.321595 systemd[1]: Reached target multi-user.target. Mar 17 21:19:52.326528 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 21:19:52.343900 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 21:19:52.344406 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 21:19:52.349451 systemd[1]: Startup finished in 1.433s (kernel) + 8.686s (initrd) + 14.818s (userspace) = 24.938s. Mar 17 21:19:54.946668 systemd[1]: Created slice system-sshd.slice. Mar 17 21:19:54.948736 systemd[1]: Started sshd@0-10.230.48.190:22-139.178.89.65:44570.service. Mar 17 21:19:55.869540 sshd[1285]: Accepted publickey for core from 139.178.89.65 port 44570 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:19:55.872831 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:19:55.891541 systemd[1]: Created slice user-500.slice. Mar 17 21:19:55.893814 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 21:19:55.903061 systemd-logind[1190]: New session 1 of user core. 
Mar 17 21:19:55.910564 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 21:19:55.913626 systemd[1]: Starting user@500.service... Mar 17 21:19:55.919433 (systemd)[1288]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:19:56.029584 systemd[1288]: Queued start job for default target default.target. Mar 17 21:19:56.030699 systemd[1288]: Reached target paths.target. Mar 17 21:19:56.030736 systemd[1288]: Reached target sockets.target. Mar 17 21:19:56.030756 systemd[1288]: Reached target timers.target. Mar 17 21:19:56.030774 systemd[1288]: Reached target basic.target. Mar 17 21:19:56.030960 systemd[1]: Started user@500.service. Mar 17 21:19:56.032759 systemd[1]: Started session-1.scope. Mar 17 21:19:56.033905 systemd[1288]: Reached target default.target. Mar 17 21:19:56.034143 systemd[1288]: Startup finished in 104ms. Mar 17 21:19:56.661044 systemd[1]: Started sshd@1-10.230.48.190:22-139.178.89.65:44572.service. Mar 17 21:19:57.551648 sshd[1297]: Accepted publickey for core from 139.178.89.65 port 44572 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:19:57.554431 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:19:57.562156 systemd-logind[1190]: New session 2 of user core. Mar 17 21:19:57.563380 systemd[1]: Started session-2.scope. Mar 17 21:19:57.837656 systemd[1]: Started sshd@2-10.230.48.190:22-103.212.211.155:35770.service. Mar 17 21:19:57.932452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 21:19:57.932734 systemd[1]: Stopped kubelet.service. Mar 17 21:19:57.932822 systemd[1]: kubelet.service: Consumed 1.548s CPU time. Mar 17 21:19:57.935653 systemd[1]: Starting kubelet.service... Mar 17 21:19:58.146764 systemd[1]: Started kubelet.service. 
Mar 17 21:19:58.172756 sshd[1297]: pam_unix(sshd:session): session closed for user core Mar 17 21:19:58.177313 systemd[1]: sshd@1-10.230.48.190:22-139.178.89.65:44572.service: Deactivated successfully. Mar 17 21:19:58.178331 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 21:19:58.180218 systemd-logind[1190]: Session 2 logged out. Waiting for processes to exit. Mar 17 21:19:58.181477 systemd-logind[1190]: Removed session 2. Mar 17 21:19:58.228323 kubelet[1308]: E0317 21:19:58.228261 1308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 21:19:58.233382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 21:19:58.233601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 21:19:58.319529 systemd[1]: Started sshd@3-10.230.48.190:22-139.178.89.65:44588.service. Mar 17 21:19:59.208252 sshd[1316]: Accepted publickey for core from 139.178.89.65 port 44588 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:19:59.210808 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:19:59.217828 systemd[1]: Started session-3.scope. Mar 17 21:19:59.218315 systemd-logind[1190]: New session 3 of user core. Mar 17 21:19:59.820825 sshd[1316]: pam_unix(sshd:session): session closed for user core Mar 17 21:19:59.824399 systemd[1]: sshd@3-10.230.48.190:22-139.178.89.65:44588.service: Deactivated successfully. Mar 17 21:19:59.825328 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 21:19:59.826111 systemd-logind[1190]: Session 3 logged out. Waiting for processes to exit. Mar 17 21:19:59.827608 systemd-logind[1190]: Removed session 3. 
Mar 17 21:19:59.836575 sshd[1301]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.212.211.155 user=root Mar 17 21:19:59.968702 systemd[1]: Started sshd@4-10.230.48.190:22-139.178.89.65:44592.service. Mar 17 21:20:00.868325 sshd[1322]: Accepted publickey for core from 139.178.89.65 port 44592 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:20:00.870192 sshd[1322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:20:00.876154 systemd-logind[1190]: New session 4 of user core. Mar 17 21:20:00.877528 systemd[1]: Started session-4.scope. Mar 17 21:20:01.492535 sshd[1322]: pam_unix(sshd:session): session closed for user core Mar 17 21:20:01.496319 systemd[1]: sshd@4-10.230.48.190:22-139.178.89.65:44592.service: Deactivated successfully. Mar 17 21:20:01.497419 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 21:20:01.498264 systemd-logind[1190]: Session 4 logged out. Waiting for processes to exit. Mar 17 21:20:01.499406 systemd-logind[1190]: Removed session 4. Mar 17 21:20:01.637448 systemd[1]: Started sshd@5-10.230.48.190:22-139.178.89.65:47284.service. Mar 17 21:20:01.953904 sshd[1301]: Failed password for root from 103.212.211.155 port 35770 ssh2 Mar 17 21:20:02.523809 sshd[1328]: Accepted publickey for core from 139.178.89.65 port 47284 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:20:02.526382 sshd[1328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:20:02.532820 systemd-logind[1190]: New session 5 of user core. Mar 17 21:20:02.533626 systemd[1]: Started session-5.scope. Mar 17 21:20:03.011292 sudo[1331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 21:20:03.011659 sudo[1331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 21:20:03.095265 systemd[1]: Starting docker.service... 
Mar 17 21:20:03.248971 env[1341]: time="2025-03-17T21:20:03.248743694Z" level=info msg="Starting up" Mar 17 21:20:03.252116 env[1341]: time="2025-03-17T21:20:03.252061593Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 21:20:03.252314 env[1341]: time="2025-03-17T21:20:03.252279489Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 21:20:03.252793 env[1341]: time="2025-03-17T21:20:03.252753871Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 21:20:03.252959 env[1341]: time="2025-03-17T21:20:03.252929387Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 21:20:03.259665 env[1341]: time="2025-03-17T21:20:03.259618249Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 21:20:03.259831 env[1341]: time="2025-03-17T21:20:03.259803201Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 21:20:03.260018 env[1341]: time="2025-03-17T21:20:03.259986508Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 21:20:03.260177 env[1341]: time="2025-03-17T21:20:03.260150516Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 21:20:03.271957 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport992560217-merged.mount: Deactivated successfully. Mar 17 21:20:03.303419 env[1341]: time="2025-03-17T21:20:03.303350273Z" level=info msg="Loading containers: start." Mar 17 21:20:03.479242 kernel: Initializing XFRM netlink socket Mar 17 21:20:03.529274 env[1341]: time="2025-03-17T21:20:03.529138225Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Mar 17 21:20:03.639750 systemd-networkd[1026]: docker0: Link UP Mar 17 21:20:03.654899 env[1341]: time="2025-03-17T21:20:03.654862218Z" level=info msg="Loading containers: done." Mar 17 21:20:03.697994 env[1341]: time="2025-03-17T21:20:03.697917826Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 21:20:03.698324 env[1341]: time="2025-03-17T21:20:03.698292526Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 21:20:03.698625 env[1341]: time="2025-03-17T21:20:03.698597010Z" level=info msg="Daemon has completed initialization" Mar 17 21:20:03.716259 systemd[1]: Started docker.service. Mar 17 21:20:03.729405 env[1341]: time="2025-03-17T21:20:03.729177170Z" level=info msg="API listen on /run/docker.sock" Mar 17 21:20:03.819414 sshd[1301]: Received disconnect from 103.212.211.155 port 35770:11: Bye Bye [preauth] Mar 17 21:20:03.819414 sshd[1301]: Disconnected from authenticating user root 103.212.211.155 port 35770 [preauth] Mar 17 21:20:03.821560 systemd[1]: sshd@2-10.230.48.190:22-103.212.211.155:35770.service: Deactivated successfully. Mar 17 21:20:05.546486 env[1202]: time="2025-03-17T21:20:05.546216151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 21:20:06.358566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1195808555.mount: Deactivated successfully. Mar 17 21:20:08.432214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 21:20:08.432515 systemd[1]: Stopped kubelet.service. Mar 17 21:20:08.438015 systemd[1]: Starting kubelet.service... Mar 17 21:20:08.746696 systemd[1]: Started kubelet.service. 
Mar 17 21:20:08.857294 kubelet[1480]: E0317 21:20:08.857205 1480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 21:20:08.860280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 21:20:08.860518 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 21:20:09.267701 env[1202]: time="2025-03-17T21:20:09.267637006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:09.271756 env[1202]: time="2025-03-17T21:20:09.271720530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:09.276040 env[1202]: time="2025-03-17T21:20:09.274785407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:09.277759 env[1202]: time="2025-03-17T21:20:09.277717723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:09.278943 env[1202]: time="2025-03-17T21:20:09.278903920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 21:20:09.301922 env[1202]: time="2025-03-17T21:20:09.301438906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 21:20:12.478443 env[1202]: time="2025-03-17T21:20:12.478242946Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:12.481311 env[1202]: time="2025-03-17T21:20:12.481271951Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:12.484599 env[1202]: time="2025-03-17T21:20:12.484551891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:12.489730 env[1202]: time="2025-03-17T21:20:12.489689895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:12.490891 env[1202]: time="2025-03-17T21:20:12.490852191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 21:20:12.508389 env[1202]: time="2025-03-17T21:20:12.508335943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 21:20:14.711177 env[1202]: time="2025-03-17T21:20:14.711007819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:14.713628 env[1202]: time="2025-03-17T21:20:14.713576933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:14.719368 env[1202]: time="2025-03-17T21:20:14.719322362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:14.723323 env[1202]: time="2025-03-17T21:20:14.723256512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:14.724442 env[1202]: time="2025-03-17T21:20:14.724401297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 21:20:14.742796 env[1202]: time="2025-03-17T21:20:14.742745772Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 21:20:15.644070 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 21:20:16.483017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515286358.mount: Deactivated successfully.
Mar 17 21:20:17.566881 env[1202]: time="2025-03-17T21:20:17.566819080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:17.568385 env[1202]: time="2025-03-17T21:20:17.568341899Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:17.570116 env[1202]: time="2025-03-17T21:20:17.570062055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:17.571698 env[1202]: time="2025-03-17T21:20:17.571664216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:17.572462 env[1202]: time="2025-03-17T21:20:17.572423829Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 21:20:17.593053 env[1202]: time="2025-03-17T21:20:17.592988185Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 21:20:18.328758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398721780.mount: Deactivated successfully.
Mar 17 21:20:18.932244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 21:20:18.932539 systemd[1]: Stopped kubelet.service.
Mar 17 21:20:18.936016 systemd[1]: Starting kubelet.service...
Mar 17 21:20:19.060603 systemd[1]: Started kubelet.service.
Mar 17 21:20:19.162558 kubelet[1512]: E0317 21:20:19.162500 1512 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 21:20:19.164842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 21:20:19.165068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 21:20:19.927722 env[1202]: time="2025-03-17T21:20:19.927634634Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:19.930064 env[1202]: time="2025-03-17T21:20:19.930013555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:19.932532 env[1202]: time="2025-03-17T21:20:19.932441674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:19.934974 env[1202]: time="2025-03-17T21:20:19.934931792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:19.937397 env[1202]: time="2025-03-17T21:20:19.937348468Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 21:20:19.954080 env[1202]: time="2025-03-17T21:20:19.953976143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 21:20:20.549508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548680979.mount: Deactivated successfully.
Mar 17 21:20:20.564945 env[1202]: time="2025-03-17T21:20:20.564887205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:20.566633 env[1202]: time="2025-03-17T21:20:20.566599538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:20.569683 env[1202]: time="2025-03-17T21:20:20.569623349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:20.571072 env[1202]: time="2025-03-17T21:20:20.571006592Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:20.572004 env[1202]: time="2025-03-17T21:20:20.571819788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 21:20:20.593776 env[1202]: time="2025-03-17T21:20:20.593641395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 21:20:21.140578 systemd[1]: Started sshd@6-10.230.48.190:22-134.209.151.205:60638.service.
Mar 17 21:20:21.211921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3669468911.mount: Deactivated successfully.
Mar 17 21:20:22.042727 sshd[1530]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.209.151.205 user=root
Mar 17 21:20:23.316735 sshd[1530]: Failed password for root from 134.209.151.205 port 60638 ssh2
Mar 17 21:20:24.068928 sshd[1530]: Received disconnect from 134.209.151.205 port 60638:11: Bye Bye [preauth]
Mar 17 21:20:24.068928 sshd[1530]: Disconnected from authenticating user root 134.209.151.205 port 60638 [preauth]
Mar 17 21:20:24.070223 systemd[1]: sshd@6-10.230.48.190:22-134.209.151.205:60638.service: Deactivated successfully.
Mar 17 21:20:25.438261 env[1202]: time="2025-03-17T21:20:25.438185704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:25.440621 env[1202]: time="2025-03-17T21:20:25.440580723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:25.443302 env[1202]: time="2025-03-17T21:20:25.443255646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:25.445973 env[1202]: time="2025-03-17T21:20:25.445933721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:25.447365 env[1202]: time="2025-03-17T21:20:25.447311742Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 21:20:29.079173 systemd[1]: Stopped kubelet.service.
Mar 17 21:20:29.083995 systemd[1]: Starting kubelet.service...
Mar 17 21:20:29.112391 systemd[1]: Reloading.
Mar 17 21:20:29.263970 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2025-03-17T21:20:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 21:20:29.264147 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2025-03-17T21:20:29Z" level=info msg="torcx already run"
Mar 17 21:20:29.373894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 21:20:29.374437 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 21:20:29.403686 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 21:20:29.621211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 21:20:29.621722 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 21:20:29.622188 systemd[1]: Stopped kubelet.service.
Mar 17 21:20:29.625583 systemd[1]: Starting kubelet.service...
Mar 17 21:20:29.892672 systemd[1]: Started kubelet.service.
Mar 17 21:20:29.972533 kubelet[1670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 21:20:29.972533 kubelet[1670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 21:20:29.972533 kubelet[1670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 21:20:29.974113 kubelet[1670]: I0317 21:20:29.973989 1670 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 21:20:30.255994 kubelet[1670]: I0317 21:20:30.255930 1670 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 21:20:30.256373 kubelet[1670]: I0317 21:20:30.256345 1670 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 21:20:30.256850 kubelet[1670]: I0317 21:20:30.256825 1670 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 21:20:30.276221 kubelet[1670]: I0317 21:20:30.276162 1670 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 21:20:30.277779 kubelet[1670]: E0317 21:20:30.277706 1670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.48.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.305247 kubelet[1670]: I0317 21:20:30.305213 1670 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 21:20:30.308131 kubelet[1670]: I0317 21:20:30.308067 1670 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 21:20:30.308605 kubelet[1670]: I0317 21:20:30.308269 1670 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-y0snw.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 21:20:30.309622 kubelet[1670]: I0317 21:20:30.309581 1670 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 21:20:30.309736 kubelet[1670]: I0317 21:20:30.309714 1670 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 21:20:30.310170 kubelet[1670]: I0317 21:20:30.310147 1670 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 21:20:30.311301 kubelet[1670]: I0317 21:20:30.311277 1670 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 21:20:30.311455 kubelet[1670]: I0317 21:20:30.311419 1670 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 21:20:30.311693 kubelet[1670]: I0317 21:20:30.311655 1670 kubelet.go:312] "Adding apiserver pod source"
Mar 17 21:20:30.311767 kubelet[1670]: I0317 21:20:30.311723 1670 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 21:20:30.313043 kubelet[1670]: W0317 21:20:30.312948 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.313173 kubelet[1670]: E0317 21:20:30.313061 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.320342 kubelet[1670]: I0317 21:20:30.320310 1670 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 21:20:30.322272 kubelet[1670]: I0317 21:20:30.322247 1670 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 21:20:30.322601 kubelet[1670]: W0317 21:20:30.322576 1670 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 21:20:30.323786 kubelet[1670]: I0317 21:20:30.323763 1670 server.go:1264] "Started kubelet"
Mar 17 21:20:30.324138 kubelet[1670]: W0317 21:20:30.324066 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.48.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.324294 kubelet[1670]: E0317 21:20:30.324270 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.48.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.327130 kubelet[1670]: I0317 21:20:30.327051 1670 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 21:20:30.328979 kubelet[1670]: I0317 21:20:30.328728 1670 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 21:20:30.330632 kubelet[1670]: I0317 21:20:30.330526 1670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 21:20:30.331251 kubelet[1670]: I0317 21:20:30.331228 1670 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 21:20:30.336426 kubelet[1670]: E0317 21:20:30.336155 1670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.48.190:6443/api/v1/namespaces/default/events\": dial tcp 10.230.48.190:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-y0snw.gb1.brightbox.com.182db3e5c6a34211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-y0snw.gb1.brightbox.com,UID:srv-y0snw.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-y0snw.gb1.brightbox.com,},FirstTimestamp:2025-03-17 21:20:30.323720721 +0000 UTC m=+0.425468119,LastTimestamp:2025-03-17 21:20:30.323720721 +0000 UTC m=+0.425468119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-y0snw.gb1.brightbox.com,}"
Mar 17 21:20:30.338635 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 21:20:30.338871 kubelet[1670]: I0317 21:20:30.338843 1670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 21:20:30.344824 kubelet[1670]: E0317 21:20:30.344784 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found"
Mar 17 21:20:30.345021 kubelet[1670]: I0317 21:20:30.344998 1670 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 21:20:30.345344 kubelet[1670]: I0317 21:20:30.345307 1670 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 21:20:30.345615 kubelet[1670]: I0317 21:20:30.345593 1670 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 21:20:30.346269 kubelet[1670]: W0317 21:20:30.346185 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.48.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.346431 kubelet[1670]: E0317 21:20:30.346406 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.48.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.347566 kubelet[1670]: E0317 21:20:30.347530 1670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-y0snw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.190:6443: connect: connection refused" interval="200ms"
Mar 17 21:20:30.347836 kubelet[1670]: E0317 21:20:30.347796 1670 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 21:20:30.348961 kubelet[1670]: I0317 21:20:30.348937 1670 factory.go:221] Registration of the systemd container factory successfully
Mar 17 21:20:30.349243 kubelet[1670]: I0317 21:20:30.349215 1670 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 21:20:30.351433 kubelet[1670]: I0317 21:20:30.351407 1670 factory.go:221] Registration of the containerd container factory successfully
Mar 17 21:20:30.370247 kubelet[1670]: I0317 21:20:30.370197 1670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 21:20:30.373554 kubelet[1670]: I0317 21:20:30.373526 1670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 21:20:30.373730 kubelet[1670]: I0317 21:20:30.373706 1670 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 21:20:30.373924 kubelet[1670]: I0317 21:20:30.373888 1670 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 21:20:30.374221 kubelet[1670]: E0317 21:20:30.374181 1670 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 21:20:30.381080 kubelet[1670]: W0317 21:20:30.381001 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.48.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.381322 kubelet[1670]: E0317 21:20:30.381281 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.48.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused
Mar 17 21:20:30.398304 kubelet[1670]: I0317 21:20:30.398276 1670 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 21:20:30.398489 kubelet[1670]: I0317 21:20:30.398453 1670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 21:20:30.398665 kubelet[1670]: I0317 21:20:30.398643 1670 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 21:20:30.400984 kubelet[1670]: I0317 21:20:30.400961 1670 policy_none.go:49] "None policy: Start"
Mar 17 21:20:30.402103 kubelet[1670]: I0317 21:20:30.402061 1670 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 21:20:30.402246 kubelet[1670]: I0317 21:20:30.402225 1670 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 21:20:30.410547 systemd[1]: Created slice kubepods.slice.
Mar 17 21:20:30.413128 update_engine[1192]: I0317 21:20:30.412212  1192 update_attempter.cc:509] Updating boot flags...
Mar 17 21:20:30.421405 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 21:20:30.430769 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 21:20:30.432487 kubelet[1670]: I0317 21:20:30.432446 1670 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 21:20:30.432937 kubelet[1670]: I0317 21:20:30.432883 1670 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 21:20:30.433297 kubelet[1670]: I0317 21:20:30.433273 1670 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 21:20:30.436683 kubelet[1670]: E0317 21:20:30.436649 1670 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-y0snw.gb1.brightbox.com\" not found"
Mar 17 21:20:30.488834 kubelet[1670]: I0317 21:20:30.488719 1670 topology_manager.go:215] "Topology Admit Handler" podUID="8185efad9957e44a84887c1543eed109" podNamespace="kube-system" podName="kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.496532 kubelet[1670]: I0317 21:20:30.496489 1670 topology_manager.go:215] "Topology Admit Handler" podUID="7ac6909a328384047c6db486284bcf1c" podNamespace="kube-system" podName="kube-scheduler-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.497995 kubelet[1670]: I0317 21:20:30.497935 1670 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.499406 kubelet[1670]: I0317 21:20:30.499375 1670 topology_manager.go:215] "Topology Admit Handler" podUID="eb8c5bd1d56751c3f3334370566f96b0" podNamespace="kube-system" podName="kube-apiserver-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.500148 kubelet[1670]: E0317 21:20:30.499789 1670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.48.190:6443/api/v1/nodes\": dial tcp 10.230.48.190:6443: connect: connection refused" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.508595 systemd[1]: Created slice kubepods-burstable-pod8185efad9957e44a84887c1543eed109.slice.
Mar 17 21:20:30.524883 systemd[1]: Created slice kubepods-burstable-pod7ac6909a328384047c6db486284bcf1c.slice.
Mar 17 21:20:30.533441 systemd[1]: Created slice kubepods-burstable-podeb8c5bd1d56751c3f3334370566f96b0.slice.
Mar 17 21:20:30.548171 kubelet[1670]: E0317 21:20:30.548121 1670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-y0snw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.190:6443: connect: connection refused" interval="400ms"
Mar 17 21:20:30.557154 kubelet[1670]: I0317 21:20:30.557044 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-kubeconfig\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.557770 kubelet[1670]: I0317 21:20:30.557328 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.557770 kubelet[1670]: I0317 21:20:30.557389 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ac6909a328384047c6db486284bcf1c-kubeconfig\") pod \"kube-scheduler-srv-y0snw.gb1.brightbox.com\" (UID: \"7ac6909a328384047c6db486284bcf1c\") " pod="kube-system/kube-scheduler-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.557770 kubelet[1670]: I0317 21:20:30.557421 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-ca-certs\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.557770 kubelet[1670]: I0317 21:20:30.557460 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.557770 kubelet[1670]: I0317 21:20:30.557515 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-k8s-certs\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.558158 kubelet[1670]: I0317 21:20:30.557544 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-flexvolume-dir\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.558158 kubelet[1670]: I0317 21:20:30.557568 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-k8s-certs\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.558158 kubelet[1670]: I0317 21:20:30.557593 1670 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-ca-certs\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.703236 kubelet[1670]: I0317 21:20:30.703200 1670 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.704225 kubelet[1670]: E0317 21:20:30.704192 1670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.48.190:6443/api/v1/nodes\": dial tcp 10.230.48.190:6443: connect: connection refused" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:30.825163 env[1202]: time="2025-03-17T21:20:30.823404045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-y0snw.gb1.brightbox.com,Uid:8185efad9957e44a84887c1543eed109,Namespace:kube-system,Attempt:0,}"
Mar 17 21:20:30.839507 env[1202]: time="2025-03-17T21:20:30.839440577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-y0snw.gb1.brightbox.com,Uid:7ac6909a328384047c6db486284bcf1c,Namespace:kube-system,Attempt:0,}"
Mar 17 21:20:30.849654 env[1202]: time="2025-03-17T21:20:30.849584884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-y0snw.gb1.brightbox.com,Uid:eb8c5bd1d56751c3f3334370566f96b0,Namespace:kube-system,Attempt:0,}"
Mar 17 21:20:30.949347 kubelet[1670]: E0317 21:20:30.949282 1670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-y0snw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.190:6443: connect: connection refused" interval="800ms"
Mar 17 21:20:31.109063 kubelet[1670]: I0317 21:20:31.108061 1670 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:31.109063 kubelet[1670]: E0317 21:20:31.108574 1670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.48.190:6443/api/v1/nodes\": dial tcp 10.230.48.190:6443: connect: connection refused" node="srv-y0snw.gb1.brightbox.com"
Mar 17 21:20:31.416459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280808973.mount: Deactivated successfully.
Mar 17 21:20:31.423299 env[1202]: time="2025-03-17T21:20:31.423223029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:31.426861 env[1202]: time="2025-03-17T21:20:31.426792147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:31.430176 env[1202]: time="2025-03-17T21:20:31.429345864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:31.432264 env[1202]: time="2025-03-17T21:20:31.432223824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 21:20:31.433146 env[1202]: time="2025-03-17T21:20:31.433112647Z" level=info msg="ImageUpdate event
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.436134 env[1202]: time="2025-03-17T21:20:31.436082749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.437739 env[1202]: time="2025-03-17T21:20:31.437705278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.440689 env[1202]: time="2025-03-17T21:20:31.440651885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.444561 env[1202]: time="2025-03-17T21:20:31.444499217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.446749 kubelet[1670]: W0317 21:20:31.446645 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.446871 kubelet[1670]: E0317 21:20:31.446762 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.449191 env[1202]: time="2025-03-17T21:20:31.449141354Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.459541 env[1202]: time="2025-03-17T21:20:31.459469820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.461234 env[1202]: time="2025-03-17T21:20:31.461202758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:20:31.503625 env[1202]: time="2025-03-17T21:20:31.503498198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:31.503835 env[1202]: time="2025-03-17T21:20:31.503638951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:31.503835 env[1202]: time="2025-03-17T21:20:31.503690197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:31.504005 env[1202]: time="2025-03-17T21:20:31.503954556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca8f122958649618a3e2d6e071f0920af5dab92cb5ec6f16f00c660d81eb13f7 pid=1738 runtime=io.containerd.runc.v2 Mar 17 21:20:31.505870 env[1202]: time="2025-03-17T21:20:31.505792928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:31.506051 env[1202]: time="2025-03-17T21:20:31.505998445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:31.506240 env[1202]: time="2025-03-17T21:20:31.506184279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:31.506704 env[1202]: time="2025-03-17T21:20:31.506642021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26010589354e1ffaa1d0d3fa1ba36140926fb2468a81aa1600a24f5400502c18 pid=1734 runtime=io.containerd.runc.v2 Mar 17 21:20:31.507855 env[1202]: time="2025-03-17T21:20:31.507783271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:31.508041 env[1202]: time="2025-03-17T21:20:31.507821122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:31.508272 env[1202]: time="2025-03-17T21:20:31.508207550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:31.508662 env[1202]: time="2025-03-17T21:20:31.508590512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/324778b8bf206e1155f84b71ef63d0c06f4ff67b599ca2db3a098bbfc1418624 pid=1735 runtime=io.containerd.runc.v2 Mar 17 21:20:31.561463 systemd[1]: Started cri-containerd-324778b8bf206e1155f84b71ef63d0c06f4ff67b599ca2db3a098bbfc1418624.scope. Mar 17 21:20:31.573309 systemd[1]: Started cri-containerd-ca8f122958649618a3e2d6e071f0920af5dab92cb5ec6f16f00c660d81eb13f7.scope. 
Mar 17 21:20:31.593141 systemd[1]: Started cri-containerd-26010589354e1ffaa1d0d3fa1ba36140926fb2468a81aa1600a24f5400502c18.scope. Mar 17 21:20:31.694878 kubelet[1670]: W0317 21:20:31.694663 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.48.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.695057 kubelet[1670]: E0317 21:20:31.694934 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.48.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.702641 env[1202]: time="2025-03-17T21:20:31.702587769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-y0snw.gb1.brightbox.com,Uid:7ac6909a328384047c6db486284bcf1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"26010589354e1ffaa1d0d3fa1ba36140926fb2468a81aa1600a24f5400502c18\"" Mar 17 21:20:31.711665 env[1202]: time="2025-03-17T21:20:31.711598169Z" level=info msg="CreateContainer within sandbox \"26010589354e1ffaa1d0d3fa1ba36140926fb2468a81aa1600a24f5400502c18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 21:20:31.720018 env[1202]: time="2025-03-17T21:20:31.719973184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-y0snw.gb1.brightbox.com,Uid:eb8c5bd1d56751c3f3334370566f96b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"324778b8bf206e1155f84b71ef63d0c06f4ff67b599ca2db3a098bbfc1418624\"" Mar 17 21:20:31.723608 env[1202]: time="2025-03-17T21:20:31.723568258Z" level=info msg="CreateContainer within sandbox \"324778b8bf206e1155f84b71ef63d0c06f4ff67b599ca2db3a098bbfc1418624\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 21:20:31.733250 env[1202]: 
time="2025-03-17T21:20:31.733200839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-y0snw.gb1.brightbox.com,Uid:8185efad9957e44a84887c1543eed109,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8f122958649618a3e2d6e071f0920af5dab92cb5ec6f16f00c660d81eb13f7\"" Mar 17 21:20:31.736909 kubelet[1670]: W0317 21:20:31.736823 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.48.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.737001 kubelet[1670]: E0317 21:20:31.736936 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.48.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.738500 env[1202]: time="2025-03-17T21:20:31.738425187Z" level=info msg="CreateContainer within sandbox \"ca8f122958649618a3e2d6e071f0920af5dab92cb5ec6f16f00c660d81eb13f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 21:20:31.750231 kubelet[1670]: E0317 21:20:31.750146 1670 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-y0snw.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.190:6443: connect: connection refused" interval="1.6s" Mar 17 21:20:31.751531 env[1202]: time="2025-03-17T21:20:31.751489225Z" level=info msg="CreateContainer within sandbox \"26010589354e1ffaa1d0d3fa1ba36140926fb2468a81aa1600a24f5400502c18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7444ff9de00a60fb82dff948ae3a0640ae682dcfa06c0cf46465567573e9a4f5\"" Mar 17 21:20:31.752297 env[1202]: time="2025-03-17T21:20:31.752255867Z" level=info msg="StartContainer for 
\"7444ff9de00a60fb82dff948ae3a0640ae682dcfa06c0cf46465567573e9a4f5\"" Mar 17 21:20:31.756370 env[1202]: time="2025-03-17T21:20:31.756231468Z" level=info msg="CreateContainer within sandbox \"324778b8bf206e1155f84b71ef63d0c06f4ff67b599ca2db3a098bbfc1418624\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09b4f1705187b694d59e149c74c77243266e353dfc41892abb3490b7cfe5e610\"" Mar 17 21:20:31.757012 env[1202]: time="2025-03-17T21:20:31.756975679Z" level=info msg="StartContainer for \"09b4f1705187b694d59e149c74c77243266e353dfc41892abb3490b7cfe5e610\"" Mar 17 21:20:31.758911 env[1202]: time="2025-03-17T21:20:31.758869879Z" level=info msg="CreateContainer within sandbox \"ca8f122958649618a3e2d6e071f0920af5dab92cb5ec6f16f00c660d81eb13f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c266ca041dc181d9beea26cbb50e5c61e72764f75fbf7a6aff3761f974d6be51\"" Mar 17 21:20:31.759525 env[1202]: time="2025-03-17T21:20:31.759492435Z" level=info msg="StartContainer for \"c266ca041dc181d9beea26cbb50e5c61e72764f75fbf7a6aff3761f974d6be51\"" Mar 17 21:20:31.761627 kubelet[1670]: W0317 21:20:31.761535 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.48.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.761733 kubelet[1670]: E0317 21:20:31.761653 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.48.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:31.785039 systemd[1]: Started cri-containerd-7444ff9de00a60fb82dff948ae3a0640ae682dcfa06c0cf46465567573e9a4f5.scope. 
Mar 17 21:20:31.804868 systemd[1]: Started cri-containerd-09b4f1705187b694d59e149c74c77243266e353dfc41892abb3490b7cfe5e610.scope. Mar 17 21:20:31.825175 systemd[1]: Started cri-containerd-c266ca041dc181d9beea26cbb50e5c61e72764f75fbf7a6aff3761f974d6be51.scope. Mar 17 21:20:31.921118 kubelet[1670]: I0317 21:20:31.919600 1670 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:31.921118 kubelet[1670]: E0317 21:20:31.920327 1670 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.48.190:6443/api/v1/nodes\": dial tcp 10.230.48.190:6443: connect: connection refused" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:31.966495 env[1202]: time="2025-03-17T21:20:31.966331236Z" level=info msg="StartContainer for \"c266ca041dc181d9beea26cbb50e5c61e72764f75fbf7a6aff3761f974d6be51\" returns successfully" Mar 17 21:20:31.968752 env[1202]: time="2025-03-17T21:20:31.967321221Z" level=info msg="StartContainer for \"7444ff9de00a60fb82dff948ae3a0640ae682dcfa06c0cf46465567573e9a4f5\" returns successfully" Mar 17 21:20:31.969122 env[1202]: time="2025-03-17T21:20:31.969076805Z" level=info msg="StartContainer for \"09b4f1705187b694d59e149c74c77243266e353dfc41892abb3490b7cfe5e610\" returns successfully" Mar 17 21:20:32.044656 kubelet[1670]: E0317 21:20:32.044387 1670 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.48.190:6443/api/v1/namespaces/default/events\": dial tcp 10.230.48.190:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-y0snw.gb1.brightbox.com.182db3e5c6a34211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-y0snw.gb1.brightbox.com,UID:srv-y0snw.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-y0snw.gb1.brightbox.com,},FirstTimestamp:2025-03-17 
21:20:30.323720721 +0000 UTC m=+0.425468119,LastTimestamp:2025-03-17 21:20:30.323720721 +0000 UTC m=+0.425468119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-y0snw.gb1.brightbox.com,}" Mar 17 21:20:32.312415 kubelet[1670]: E0317 21:20:32.312258 1670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.48.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:33.104323 kubelet[1670]: W0317 21:20:33.104251 1670 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:33.104522 kubelet[1670]: E0317 21:20:33.104331 1670 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.48.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-y0snw.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.48.190:6443: connect: connection refused Mar 17 21:20:33.523837 kubelet[1670]: I0317 21:20:33.523783 1670 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:35.543538 kubelet[1670]: E0317 21:20:35.543468 1670 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-y0snw.gb1.brightbox.com\" not found" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:35.656613 kubelet[1670]: I0317 21:20:35.656562 1670 kubelet_node_status.go:76] "Successfully registered node" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:35.668930 kubelet[1670]: E0317 21:20:35.668883 
1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:35.769950 kubelet[1670]: E0317 21:20:35.769828 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:35.870705 kubelet[1670]: E0317 21:20:35.870030 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:35.971017 kubelet[1670]: E0317 21:20:35.970963 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.072312 kubelet[1670]: E0317 21:20:36.072252 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.173678 kubelet[1670]: E0317 21:20:36.173166 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.274938 kubelet[1670]: E0317 21:20:36.274871 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.375775 kubelet[1670]: E0317 21:20:36.375714 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.476117 kubelet[1670]: E0317 21:20:36.476017 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.576712 kubelet[1670]: E0317 21:20:36.576658 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.677820 kubelet[1670]: E0317 21:20:36.677717 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.778602 kubelet[1670]: E0317 21:20:36.778448 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.880371 kubelet[1670]: E0317 21:20:36.880314 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:36.981115 kubelet[1670]: E0317 21:20:36.981021 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.081721 kubelet[1670]: E0317 21:20:37.081571 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.182627 kubelet[1670]: E0317 21:20:37.182572 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.283757 kubelet[1670]: E0317 21:20:37.283656 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.384913 kubelet[1670]: E0317 21:20:37.384404 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.487136 kubelet[1670]: E0317 21:20:37.486173 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.587050 kubelet[1670]: E0317 21:20:37.586982 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.687738 kubelet[1670]: E0317 21:20:37.687684 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.788741 kubelet[1670]: E0317 
21:20:37.788670 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.889692 kubelet[1670]: E0317 21:20:37.889630 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:37.891927 systemd[1]: Reloading. Mar 17 21:20:37.990401 kubelet[1670]: E0317 21:20:37.990274 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:38.048275 /usr/lib/systemd/system-generators/torcx-generator[1978]: time="2025-03-17T21:20:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 21:20:38.049194 /usr/lib/systemd/system-generators/torcx-generator[1978]: time="2025-03-17T21:20:38Z" level=info msg="torcx already run" Mar 17 21:20:38.090481 kubelet[1670]: E0317 21:20:38.090418 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:38.148889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 21:20:38.149309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 21:20:38.177526 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 21:20:38.191121 kubelet[1670]: E0317 21:20:38.191061 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:38.291930 kubelet[1670]: E0317 21:20:38.291780 1670 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-y0snw.gb1.brightbox.com\" not found" Mar 17 21:20:38.329141 kubelet[1670]: E0317 21:20:38.328847 1670 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-y0snw.gb1.brightbox.com.182db3e5c6a34211 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-y0snw.gb1.brightbox.com,UID:srv-y0snw.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-y0snw.gb1.brightbox.com,},FirstTimestamp:2025-03-17 21:20:30.323720721 +0000 UTC m=+0.425468119,LastTimestamp:2025-03-17 21:20:30.323720721 +0000 UTC m=+0.425468119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-y0snw.gb1.brightbox.com,}" Mar 17 21:20:38.329804 systemd[1]: Stopping kubelet.service... Mar 17 21:20:38.348719 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 21:20:38.349271 systemd[1]: Stopped kubelet.service. Mar 17 21:20:38.353240 systemd[1]: Starting kubelet.service... Mar 17 21:20:39.510312 systemd[1]: Started kubelet.service. Mar 17 21:20:39.645653 kubelet[2028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:20:39.646363 kubelet[2028]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Mar 17 21:20:39.646522 kubelet[2028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 21:20:39.646842 kubelet[2028]: I0317 21:20:39.646788 2028 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 21:20:39.655471 kubelet[2028]: I0317 21:20:39.655439 2028 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 21:20:39.655679 kubelet[2028]: I0317 21:20:39.655654 2028 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 21:20:39.656039 kubelet[2028]: I0317 21:20:39.656013 2028 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 21:20:39.657929 sudo[2039]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 21:20:39.658444 sudo[2039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 21:20:39.663533 kubelet[2028]: I0317 21:20:39.663496 2028 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 21:20:39.670230 kubelet[2028]: I0317 21:20:39.670002 2028 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 21:20:39.687053 kubelet[2028]: I0317 21:20:39.687017 2028 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 21:20:39.687557 kubelet[2028]: I0317 21:20:39.687492 2028 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 21:20:39.687806 kubelet[2028]: I0317 21:20:39.687557 2028 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-y0snw.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 21:20:39.687987 kubelet[2028]: I0317 21:20:39.687839 2028 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 21:20:39.687987 kubelet[2028]: I0317 21:20:39.687859 2028 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 21:20:39.687987 kubelet[2028]: I0317 21:20:39.687959 2028 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:20:39.688239 kubelet[2028]: I0317 21:20:39.688213 2028 kubelet.go:400] "Attempting to sync node with API server" Mar 17 21:20:39.688336 kubelet[2028]: I0317 21:20:39.688248 2028 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 21:20:39.688336 kubelet[2028]: I0317 21:20:39.688300 2028 kubelet.go:312] "Adding apiserver pod source" Mar 17 21:20:39.688463 kubelet[2028]: I0317 21:20:39.688344 2028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 21:20:39.698160 kubelet[2028]: I0317 21:20:39.698126 2028 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 21:20:39.698443 kubelet[2028]: I0317 21:20:39.698415 2028 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 21:20:39.699135 kubelet[2028]: I0317 21:20:39.699111 2028 server.go:1264] "Started kubelet" Mar 17 21:20:39.713172 kubelet[2028]: I0317 21:20:39.710215 2028 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 21:20:39.715643 kubelet[2028]: I0317 21:20:39.714353 2028 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 21:20:39.715643 kubelet[2028]: I0317 21:20:39.714814 2028 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 21:20:39.719407 kubelet[2028]: E0317 21:20:39.719374 2028 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 21:20:39.719738 kubelet[2028]: I0317 21:20:39.719692 2028 server.go:455] "Adding debug handlers to kubelet server" Mar 17 21:20:39.732672 kubelet[2028]: I0317 21:20:39.732630 2028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 21:20:39.738513 kubelet[2028]: I0317 21:20:39.738487 2028 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 21:20:39.739245 kubelet[2028]: I0317 21:20:39.739217 2028 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 21:20:39.739677 kubelet[2028]: I0317 21:20:39.739653 2028 reconciler.go:26] "Reconciler: start to sync state" Mar 17 21:20:39.750496 kubelet[2028]: I0317 21:20:39.750458 2028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 21:20:39.766576 kubelet[2028]: I0317 21:20:39.766457 2028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 21:20:39.766821 kubelet[2028]: I0317 21:20:39.766796 2028 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 21:20:39.766994 kubelet[2028]: I0317 21:20:39.766970 2028 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 21:20:39.767176 kubelet[2028]: E0317 21:20:39.767146 2028 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 21:20:39.767294 kubelet[2028]: I0317 21:20:39.766555 2028 factory.go:221] Registration of the containerd container factory successfully Mar 17 21:20:39.767422 kubelet[2028]: I0317 21:20:39.767398 2028 factory.go:221] Registration of the systemd container factory successfully Mar 17 21:20:39.770115 kubelet[2028]: I0317 21:20:39.767632 2028 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no 
such file or directory Mar 17 21:20:39.856915 kubelet[2028]: I0317 21:20:39.856857 2028 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 21:20:39.856915 kubelet[2028]: I0317 21:20:39.856895 2028 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 21:20:39.857233 kubelet[2028]: I0317 21:20:39.856930 2028 state_mem.go:36] "Initialized new in-memory state store" Mar 17 21:20:39.857321 kubelet[2028]: I0317 21:20:39.857243 2028 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 21:20:39.857321 kubelet[2028]: I0317 21:20:39.857273 2028 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 21:20:39.857321 kubelet[2028]: I0317 21:20:39.857309 2028 policy_none.go:49] "None policy: Start" Mar 17 21:20:39.858722 kubelet[2028]: I0317 21:20:39.858476 2028 kubelet_node_status.go:73] "Attempting to register node" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:39.864712 kubelet[2028]: I0317 21:20:39.864673 2028 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 21:20:39.864828 kubelet[2028]: I0317 21:20:39.864726 2028 state_mem.go:35] "Initializing new in-memory state store" Mar 17 21:20:39.867058 kubelet[2028]: I0317 21:20:39.867009 2028 state_mem.go:75] "Updated machine memory state" Mar 17 21:20:39.867317 kubelet[2028]: E0317 21:20:39.867290 2028 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 21:20:39.872275 kubelet[2028]: I0317 21:20:39.872244 2028 kubelet_node_status.go:112] "Node was previously registered" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:39.872559 kubelet[2028]: I0317 21:20:39.872535 2028 kubelet_node_status.go:76] "Successfully registered node" node="srv-y0snw.gb1.brightbox.com" Mar 17 21:20:39.888377 kubelet[2028]: I0317 21:20:39.888337 2028 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 21:20:39.888766 kubelet[2028]: I0317 
21:20:39.888712 2028 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 21:20:39.888995 kubelet[2028]: I0317 21:20:39.888968 2028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 21:20:40.068520 kubelet[2028]: I0317 21:20:40.068357 2028 topology_manager.go:215] "Topology Admit Handler" podUID="eb8c5bd1d56751c3f3334370566f96b0" podNamespace="kube-system" podName="kube-apiserver-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.068687 kubelet[2028]: I0317 21:20:40.068578 2028 topology_manager.go:215] "Topology Admit Handler" podUID="8185efad9957e44a84887c1543eed109" podNamespace="kube-system" podName="kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.068773 kubelet[2028]: I0317 21:20:40.068723 2028 topology_manager.go:215] "Topology Admit Handler" podUID="7ac6909a328384047c6db486284bcf1c" podNamespace="kube-system" podName="kube-scheduler-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.092525 kubelet[2028]: W0317 21:20:40.092476 2028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:20:40.093265 kubelet[2028]: W0317 21:20:40.093237 2028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:20:40.093679 kubelet[2028]: W0317 21:20:40.093652 2028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:20:40.142290 kubelet[2028]: I0317 21:20:40.142240 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-k8s-certs\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " 
pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142506 kubelet[2028]: I0317 21:20:40.142313 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-usr-share-ca-certificates\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142506 kubelet[2028]: I0317 21:20:40.142382 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-ca-certs\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142506 kubelet[2028]: I0317 21:20:40.142421 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-k8s-certs\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142506 kubelet[2028]: I0317 21:20:40.142480 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-kubeconfig\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142763 kubelet[2028]: I0317 21:20:40.142513 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142763 kubelet[2028]: I0317 21:20:40.142571 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ac6909a328384047c6db486284bcf1c-kubeconfig\") pod \"kube-scheduler-srv-y0snw.gb1.brightbox.com\" (UID: \"7ac6909a328384047c6db486284bcf1c\") " pod="kube-system/kube-scheduler-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142763 kubelet[2028]: I0317 21:20:40.142709 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb8c5bd1d56751c3f3334370566f96b0-ca-certs\") pod \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" (UID: \"eb8c5bd1d56751c3f3334370566f96b0\") " pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.142763 kubelet[2028]: I0317 21:20:40.142744 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8185efad9957e44a84887c1543eed109-flexvolume-dir\") pod \"kube-controller-manager-srv-y0snw.gb1.brightbox.com\" (UID: \"8185efad9957e44a84887c1543eed109\") " pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.217081 systemd[1]: Started sshd@7-10.230.48.190:22-198.20.252.107:40222.service. 
Mar 17 21:20:40.591452 sudo[2039]: pam_unix(sudo:session): session closed for user root Mar 17 21:20:40.689551 kubelet[2028]: I0317 21:20:40.689509 2028 apiserver.go:52] "Watching apiserver" Mar 17 21:20:40.739860 kubelet[2028]: I0317 21:20:40.739803 2028 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 21:20:40.826733 kubelet[2028]: W0317 21:20:40.826682 2028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 21:20:40.826956 kubelet[2028]: E0317 21:20:40.826797 2028 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-y0snw.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com" Mar 17 21:20:40.840675 kubelet[2028]: I0317 21:20:40.840591 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-y0snw.gb1.brightbox.com" podStartSLOduration=0.840559155 podStartE2EDuration="840.559155ms" podCreationTimestamp="2025-03-17 21:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:20:40.838409543 +0000 UTC m=+1.303299695" watchObservedRunningTime="2025-03-17 21:20:40.840559155 +0000 UTC m=+1.305449307" Mar 17 21:20:40.861814 kubelet[2028]: I0317 21:20:40.861563 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-y0snw.gb1.brightbox.com" podStartSLOduration=0.861545772 podStartE2EDuration="861.545772ms" podCreationTimestamp="2025-03-17 21:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:20:40.851586388 +0000 UTC m=+1.316476534" watchObservedRunningTime="2025-03-17 21:20:40.861545772 +0000 UTC m=+1.326435915" Mar 17 
21:20:40.862040 kubelet[2028]: I0317 21:20:40.861911 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-y0snw.gb1.brightbox.com" podStartSLOduration=0.861901685 podStartE2EDuration="861.901685ms" podCreationTimestamp="2025-03-17 21:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:20:40.861808542 +0000 UTC m=+1.326698687" watchObservedRunningTime="2025-03-17 21:20:40.861901685 +0000 UTC m=+1.326791839" Mar 17 21:20:41.018989 sshd[2064]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.252.107 user=root Mar 17 21:20:42.303055 sudo[1331]: pam_unix(sudo:session): session closed for user root Mar 17 21:20:42.449747 sshd[1328]: pam_unix(sshd:session): session closed for user core Mar 17 21:20:42.454063 systemd[1]: sshd@5-10.230.48.190:22-139.178.89.65:47284.service: Deactivated successfully. Mar 17 21:20:42.455087 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 21:20:42.455356 systemd[1]: session-5.scope: Consumed 5.854s CPU time. Mar 17 21:20:42.456001 systemd-logind[1190]: Session 5 logged out. Waiting for processes to exit. Mar 17 21:20:42.457360 systemd-logind[1190]: Removed session 5. Mar 17 21:20:42.569436 sshd[2064]: Failed password for root from 198.20.252.107 port 40222 ssh2 Mar 17 21:20:43.031115 sshd[2064]: Received disconnect from 198.20.252.107 port 40222:11: Bye Bye [preauth] Mar 17 21:20:43.031115 sshd[2064]: Disconnected from authenticating user root 198.20.252.107 port 40222 [preauth] Mar 17 21:20:43.032161 systemd[1]: sshd@7-10.230.48.190:22-198.20.252.107:40222.service: Deactivated successfully. 
Mar 17 21:20:52.412418 kubelet[2028]: I0317 21:20:52.412372 2028 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 21:20:52.413081 env[1202]: time="2025-03-17T21:20:52.413020844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 21:20:52.413515 kubelet[2028]: I0317 21:20:52.413331 2028 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 21:20:53.320942 kubelet[2028]: I0317 21:20:53.320880 2028 topology_manager.go:215] "Topology Admit Handler" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" podNamespace="kube-system" podName="cilium-52nx8" Mar 17 21:20:53.321527 kubelet[2028]: I0317 21:20:53.321497 2028 topology_manager.go:215] "Topology Admit Handler" podUID="9abf560c-5626-436b-922d-3f8314123101" podNamespace="kube-system" podName="kube-proxy-765fr" Mar 17 21:20:53.328838 kubelet[2028]: I0317 21:20:53.328795 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hostproc\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.328965 kubelet[2028]: I0317 21:20:53.328859 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-cgroup\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.328965 kubelet[2028]: I0317 21:20:53.328890 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hubble-tls\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " 
pod="kube-system/cilium-52nx8" Mar 17 21:20:53.328965 kubelet[2028]: I0317 21:20:53.328935 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9abf560c-5626-436b-922d-3f8314123101-kube-proxy\") pod \"kube-proxy-765fr\" (UID: \"9abf560c-5626-436b-922d-3f8314123101\") " pod="kube-system/kube-proxy-765fr" Mar 17 21:20:53.329188 kubelet[2028]: I0317 21:20:53.328966 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvhj\" (UniqueName: \"kubernetes.io/projected/9abf560c-5626-436b-922d-3f8314123101-kube-api-access-pvvhj\") pod \"kube-proxy-765fr\" (UID: \"9abf560c-5626-436b-922d-3f8314123101\") " pod="kube-system/kube-proxy-765fr" Mar 17 21:20:53.329188 kubelet[2028]: I0317 21:20:53.329018 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-run\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329188 kubelet[2028]: I0317 21:20:53.329045 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-lib-modules\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329188 kubelet[2028]: I0317 21:20:53.329103 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-kernel\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329188 kubelet[2028]: I0317 21:20:53.329138 2028 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9abf560c-5626-436b-922d-3f8314123101-xtables-lock\") pod \"kube-proxy-765fr\" (UID: \"9abf560c-5626-436b-922d-3f8314123101\") " pod="kube-system/kube-proxy-765fr" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329186 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-xtables-lock\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329215 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl5n\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-kube-api-access-ngl5n\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329259 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-etc-cni-netd\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329288 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-bpf-maps\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329331 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cni-path\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329518 kubelet[2028]: I0317 21:20:53.329412 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9abf560c-5626-436b-922d-3f8314123101-lib-modules\") pod \"kube-proxy-765fr\" (UID: \"9abf560c-5626-436b-922d-3f8314123101\") " pod="kube-system/kube-proxy-765fr" Mar 17 21:20:53.329854 kubelet[2028]: I0317 21:20:53.329446 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-clustermesh-secrets\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329854 kubelet[2028]: I0317 21:20:53.329504 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-config-path\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.329854 kubelet[2028]: I0317 21:20:53.329533 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-net\") pod \"cilium-52nx8\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") " pod="kube-system/cilium-52nx8" Mar 17 21:20:53.331381 systemd[1]: Created slice kubepods-burstable-pod6b4f8c6b_e88d_49fe_bcc7_69b89709e979.slice. Mar 17 21:20:53.339483 systemd[1]: Created slice kubepods-besteffort-pod9abf560c_5626_436b_922d_3f8314123101.slice. 
Mar 17 21:20:53.524076 kubelet[2028]: I0317 21:20:53.524015 2028 topology_manager.go:215] "Topology Admit Handler" podUID="4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" podNamespace="kube-system" podName="cilium-operator-599987898-mwwln" Mar 17 21:20:53.531144 systemd[1]: Created slice kubepods-besteffort-pod4f97ebd7_a7e8_4d72_a54a_3564e0f0bb61.slice. Mar 17 21:20:53.532946 kubelet[2028]: I0317 21:20:53.532905 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-cilium-config-path\") pod \"cilium-operator-599987898-mwwln\" (UID: \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\") " pod="kube-system/cilium-operator-599987898-mwwln" Mar 17 21:20:53.533174 kubelet[2028]: I0317 21:20:53.533142 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nh68\" (UniqueName: \"kubernetes.io/projected/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-kube-api-access-4nh68\") pod \"cilium-operator-599987898-mwwln\" (UID: \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\") " pod="kube-system/cilium-operator-599987898-mwwln" Mar 17 21:20:53.638523 env[1202]: time="2025-03-17T21:20:53.638358118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52nx8,Uid:6b4f8c6b-e88d-49fe-bcc7-69b89709e979,Namespace:kube-system,Attempt:0,}" Mar 17 21:20:53.656125 env[1202]: time="2025-03-17T21:20:53.655272334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-765fr,Uid:9abf560c-5626-436b-922d-3f8314123101,Namespace:kube-system,Attempt:0,}" Mar 17 21:20:53.676083 env[1202]: time="2025-03-17T21:20:53.675952874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:53.676254 env[1202]: time="2025-03-17T21:20:53.676109760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:53.676254 env[1202]: time="2025-03-17T21:20:53.676179160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:53.676506 env[1202]: time="2025-03-17T21:20:53.676453047Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc pid=2118 runtime=io.containerd.runc.v2 Mar 17 21:20:53.680346 env[1202]: time="2025-03-17T21:20:53.680254530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:53.680346 env[1202]: time="2025-03-17T21:20:53.680304525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:53.680675 env[1202]: time="2025-03-17T21:20:53.680617221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:53.681001 env[1202]: time="2025-03-17T21:20:53.680948466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6846d9ab04d9b92b31ccbb90d32cf24a91a83b7c15ee3630e599e0ca6024ca9 pid=2129 runtime=io.containerd.runc.v2 Mar 17 21:20:53.700248 systemd[1]: Started cri-containerd-add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc.scope. Mar 17 21:20:53.720308 systemd[1]: Started cri-containerd-e6846d9ab04d9b92b31ccbb90d32cf24a91a83b7c15ee3630e599e0ca6024ca9.scope. 
Mar 17 21:20:53.776163 env[1202]: time="2025-03-17T21:20:53.775987533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52nx8,Uid:6b4f8c6b-e88d-49fe-bcc7-69b89709e979,Namespace:kube-system,Attempt:0,} returns sandbox id \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\"" Mar 17 21:20:53.776928 env[1202]: time="2025-03-17T21:20:53.776277551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-765fr,Uid:9abf560c-5626-436b-922d-3f8314123101,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6846d9ab04d9b92b31ccbb90d32cf24a91a83b7c15ee3630e599e0ca6024ca9\"" Mar 17 21:20:53.781898 env[1202]: time="2025-03-17T21:20:53.781843240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 21:20:53.790060 env[1202]: time="2025-03-17T21:20:53.790007258Z" level=info msg="CreateContainer within sandbox \"e6846d9ab04d9b92b31ccbb90d32cf24a91a83b7c15ee3630e599e0ca6024ca9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 21:20:53.817126 env[1202]: time="2025-03-17T21:20:53.814059062Z" level=info msg="CreateContainer within sandbox \"e6846d9ab04d9b92b31ccbb90d32cf24a91a83b7c15ee3630e599e0ca6024ca9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02e6e615b57ac36416b8e19d1715046c7d08b4ffe03477ef9e58df239dd3722d\"" Mar 17 21:20:53.817126 env[1202]: time="2025-03-17T21:20:53.815217075Z" level=info msg="StartContainer for \"02e6e615b57ac36416b8e19d1715046c7d08b4ffe03477ef9e58df239dd3722d\"" Mar 17 21:20:53.837825 env[1202]: time="2025-03-17T21:20:53.837603244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mwwln,Uid:4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61,Namespace:kube-system,Attempt:0,}" Mar 17 21:20:53.845043 systemd[1]: Started cri-containerd-02e6e615b57ac36416b8e19d1715046c7d08b4ffe03477ef9e58df239dd3722d.scope. 
Mar 17 21:20:53.885962 env[1202]: time="2025-03-17T21:20:53.885238654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:20:53.885962 env[1202]: time="2025-03-17T21:20:53.885415484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:20:53.885962 env[1202]: time="2025-03-17T21:20:53.885494523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:20:53.885962 env[1202]: time="2025-03-17T21:20:53.885821645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38 pid=2224 runtime=io.containerd.runc.v2 Mar 17 21:20:53.907983 env[1202]: time="2025-03-17T21:20:53.907738349Z" level=info msg="StartContainer for \"02e6e615b57ac36416b8e19d1715046c7d08b4ffe03477ef9e58df239dd3722d\" returns successfully" Mar 17 21:20:53.923911 systemd[1]: Started cri-containerd-035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38.scope. 
Mar 17 21:20:53.995833 env[1202]: time="2025-03-17T21:20:53.995767594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mwwln,Uid:4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61,Namespace:kube-system,Attempt:0,} returns sandbox id \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\"" Mar 17 21:20:54.877666 kubelet[2028]: I0317 21:20:54.877581 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-765fr" podStartSLOduration=1.877545208 podStartE2EDuration="1.877545208s" podCreationTimestamp="2025-03-17 21:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:20:54.874728615 +0000 UTC m=+15.339618786" watchObservedRunningTime="2025-03-17 21:20:54.877545208 +0000 UTC m=+15.342435362" Mar 17 21:20:59.825508 systemd[1]: Started sshd@8-10.230.48.190:22-143.110.184.217:55470.service. Mar 17 21:21:00.516271 sshd[2389]: Invalid user from 143.110.184.217 port 55470 Mar 17 21:21:04.733765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305572753.mount: Deactivated successfully. Mar 17 21:21:07.818127 sshd[2389]: Connection closed by invalid user 143.110.184.217 port 55470 [preauth] Mar 17 21:21:07.820416 systemd[1]: sshd@8-10.230.48.190:22-143.110.184.217:55470.service: Deactivated successfully. 
Mar 17 21:21:09.181756 env[1202]: time="2025-03-17T21:21:09.181682225Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:09.184133 env[1202]: time="2025-03-17T21:21:09.184067044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:09.186361 env[1202]: time="2025-03-17T21:21:09.186326431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:09.187417 env[1202]: time="2025-03-17T21:21:09.187378297Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 21:21:09.191605 env[1202]: time="2025-03-17T21:21:09.191560172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 21:21:09.194894 env[1202]: time="2025-03-17T21:21:09.194178710Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:21:09.207708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598586858.mount: Deactivated successfully. 
Mar 17 21:21:09.216876 env[1202]: time="2025-03-17T21:21:09.215717920Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\"" Mar 17 21:21:09.218782 env[1202]: time="2025-03-17T21:21:09.218703897Z" level=info msg="StartContainer for \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\"" Mar 17 21:21:09.219571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687278627.mount: Deactivated successfully. Mar 17 21:21:09.252001 systemd[1]: Started cri-containerd-bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849.scope. Mar 17 21:21:09.300372 env[1202]: time="2025-03-17T21:21:09.300313165Z" level=info msg="StartContainer for \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\" returns successfully" Mar 17 21:21:09.309942 systemd[1]: cri-containerd-bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849.scope: Deactivated successfully. 
Mar 17 21:21:09.540016 env[1202]: time="2025-03-17T21:21:09.535143835Z" level=info msg="shim disconnected" id=bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849 Mar 17 21:21:09.540016 env[1202]: time="2025-03-17T21:21:09.535219607Z" level=warning msg="cleaning up after shim disconnected" id=bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849 namespace=k8s.io Mar 17 21:21:09.540016 env[1202]: time="2025-03-17T21:21:09.535238346Z" level=info msg="cleaning up dead shim" Mar 17 21:21:09.548963 env[1202]: time="2025-03-17T21:21:09.548902138Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:21:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2444 runtime=io.containerd.runc.v2\n" Mar 17 21:21:09.907145 env[1202]: time="2025-03-17T21:21:09.906434983Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 21:21:09.923327 env[1202]: time="2025-03-17T21:21:09.923020926Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\"" Mar 17 21:21:09.923950 env[1202]: time="2025-03-17T21:21:09.923899756Z" level=info msg="StartContainer for \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\"" Mar 17 21:21:09.948981 systemd[1]: Started cri-containerd-32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376.scope. Mar 17 21:21:09.996459 env[1202]: time="2025-03-17T21:21:09.996378699Z" level=info msg="StartContainer for \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\" returns successfully" Mar 17 21:21:10.013372 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 21:21:10.014225 systemd[1]: Stopped systemd-sysctl.service. 
Mar 17 21:21:10.014589 systemd[1]: Stopping systemd-sysctl.service... Mar 17 21:21:10.019128 systemd[1]: Starting systemd-sysctl.service... Mar 17 21:21:10.036938 systemd[1]: cri-containerd-32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376.scope: Deactivated successfully. Mar 17 21:21:10.059026 systemd[1]: Finished systemd-sysctl.service. Mar 17 21:21:10.096714 env[1202]: time="2025-03-17T21:21:10.096660203Z" level=info msg="shim disconnected" id=32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376 Mar 17 21:21:10.097064 env[1202]: time="2025-03-17T21:21:10.097033505Z" level=warning msg="cleaning up after shim disconnected" id=32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376 namespace=k8s.io Mar 17 21:21:10.097208 env[1202]: time="2025-03-17T21:21:10.097179535Z" level=info msg="cleaning up dead shim" Mar 17 21:21:10.114907 env[1202]: time="2025-03-17T21:21:10.114845613Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2508 runtime=io.containerd.runc.v2\n" Mar 17 21:21:10.204726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849-rootfs.mount: Deactivated successfully. Mar 17 21:21:10.908127 env[1202]: time="2025-03-17T21:21:10.907667497Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 21:21:10.932129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185402893.mount: Deactivated successfully. Mar 17 21:21:10.940702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967133299.mount: Deactivated successfully. 
Mar 17 21:21:10.947338 env[1202]: time="2025-03-17T21:21:10.947281652Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\"" Mar 17 21:21:10.948513 env[1202]: time="2025-03-17T21:21:10.948450375Z" level=info msg="StartContainer for \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\"" Mar 17 21:21:10.974974 systemd[1]: Started cri-containerd-7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298.scope. Mar 17 21:21:11.032987 env[1202]: time="2025-03-17T21:21:11.032930141Z" level=info msg="StartContainer for \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\" returns successfully" Mar 17 21:21:11.035630 systemd[1]: cri-containerd-7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298.scope: Deactivated successfully. Mar 17 21:21:11.068767 env[1202]: time="2025-03-17T21:21:11.068697999Z" level=info msg="shim disconnected" id=7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298 Mar 17 21:21:11.068767 env[1202]: time="2025-03-17T21:21:11.068763962Z" level=warning msg="cleaning up after shim disconnected" id=7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298 namespace=k8s.io Mar 17 21:21:11.069215 env[1202]: time="2025-03-17T21:21:11.068781509Z" level=info msg="cleaning up dead shim" Mar 17 21:21:11.080607 env[1202]: time="2025-03-17T21:21:11.080538437Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:21:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2568 runtime=io.containerd.runc.v2\n" Mar 17 21:21:11.918906 env[1202]: time="2025-03-17T21:21:11.918664433Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 21:21:12.001183 env[1202]: time="2025-03-17T21:21:12.001035828Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\"" Mar 17 21:21:12.002504 env[1202]: time="2025-03-17T21:21:12.002455213Z" level=info msg="StartContainer for \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\"" Mar 17 21:21:12.037297 systemd[1]: Started cri-containerd-7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63.scope. Mar 17 21:21:12.105503 systemd[1]: cri-containerd-7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63.scope: Deactivated successfully. Mar 17 21:21:12.109242 env[1202]: time="2025-03-17T21:21:12.109148100Z" level=info msg="StartContainer for \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\" returns successfully" Mar 17 21:21:12.139502 env[1202]: time="2025-03-17T21:21:12.139420413Z" level=info msg="shim disconnected" id=7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63 Mar 17 21:21:12.139502 env[1202]: time="2025-03-17T21:21:12.139494846Z" level=warning msg="cleaning up after shim disconnected" id=7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63 namespace=k8s.io Mar 17 21:21:12.139892 env[1202]: time="2025-03-17T21:21:12.139512520Z" level=info msg="cleaning up dead shim" Mar 17 21:21:12.149761 env[1202]: time="2025-03-17T21:21:12.149699482Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:21:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2625 runtime=io.containerd.runc.v2\n" Mar 17 21:21:12.204487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63-rootfs.mount: Deactivated successfully.
Mar 17 21:21:12.922839 env[1202]: time="2025-03-17T21:21:12.922604856Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 21:21:12.945105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142303126.mount: Deactivated successfully. Mar 17 21:21:12.959069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318230409.mount: Deactivated successfully. Mar 17 21:21:12.961042 env[1202]: time="2025-03-17T21:21:12.960778786Z" level=info msg="CreateContainer within sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\"" Mar 17 21:21:12.962121 env[1202]: time="2025-03-17T21:21:12.961678193Z" level=info msg="StartContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\"" Mar 17 21:21:12.992408 systemd[1]: Started cri-containerd-8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df.scope. Mar 17 21:21:13.062605 env[1202]: time="2025-03-17T21:21:13.061931937Z" level=info msg="StartContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" returns successfully" Mar 17 21:21:13.322588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695334491.mount: Deactivated successfully. 
Mar 17 21:21:13.331582 kubelet[2028]: I0317 21:21:13.330648 2028 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 21:21:13.387973 kubelet[2028]: I0317 21:21:13.386725 2028 topology_manager.go:215] "Topology Admit Handler" podUID="cd82fb0c-e95d-477c-b86a-e0caef81acd0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6fr4t" Mar 17 21:21:13.387973 kubelet[2028]: I0317 21:21:13.387159 2028 topology_manager.go:215] "Topology Admit Handler" podUID="17e60bf9-ddbf-4fe8-9602-80c037feee12" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tsg58" Mar 17 21:21:13.396927 systemd[1]: Created slice kubepods-burstable-pod17e60bf9_ddbf_4fe8_9602_80c037feee12.slice. Mar 17 21:21:13.406549 systemd[1]: Created slice kubepods-burstable-podcd82fb0c_e95d_477c_b86a_e0caef81acd0.slice. Mar 17 21:21:13.486173 kubelet[2028]: I0317 21:21:13.486105 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd82fb0c-e95d-477c-b86a-e0caef81acd0-config-volume\") pod \"coredns-7db6d8ff4d-6fr4t\" (UID: \"cd82fb0c-e95d-477c-b86a-e0caef81acd0\") " pod="kube-system/coredns-7db6d8ff4d-6fr4t" Mar 17 21:21:13.486486 kubelet[2028]: I0317 21:21:13.486224 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6qd8\" (UniqueName: \"kubernetes.io/projected/17e60bf9-ddbf-4fe8-9602-80c037feee12-kube-api-access-h6qd8\") pod \"coredns-7db6d8ff4d-tsg58\" (UID: \"17e60bf9-ddbf-4fe8-9602-80c037feee12\") " pod="kube-system/coredns-7db6d8ff4d-tsg58" Mar 17 21:21:13.486486 kubelet[2028]: I0317 21:21:13.486325 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/cd82fb0c-e95d-477c-b86a-e0caef81acd0-kube-api-access-2g4lj\") pod \"coredns-7db6d8ff4d-6fr4t\" (UID: \"cd82fb0c-e95d-477c-b86a-e0caef81acd0\") " pod="kube-system/coredns-7db6d8ff4d-6fr4t"
Mar 17 21:21:13.486486 kubelet[2028]: I0317 21:21:13.486404 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e60bf9-ddbf-4fe8-9602-80c037feee12-config-volume\") pod \"coredns-7db6d8ff4d-tsg58\" (UID: \"17e60bf9-ddbf-4fe8-9602-80c037feee12\") " pod="kube-system/coredns-7db6d8ff4d-tsg58" Mar 17 21:21:13.705532 env[1202]: time="2025-03-17T21:21:13.705448584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tsg58,Uid:17e60bf9-ddbf-4fe8-9602-80c037feee12,Namespace:kube-system,Attempt:0,}" Mar 17 21:21:13.722232 env[1202]: time="2025-03-17T21:21:13.722148995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fr4t,Uid:cd82fb0c-e95d-477c-b86a-e0caef81acd0,Namespace:kube-system,Attempt:0,}" Mar 17 21:21:13.959895 kubelet[2028]: I0317 21:21:13.959366 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-52nx8" podStartSLOduration=5.549430846 podStartE2EDuration="20.959337429s" podCreationTimestamp="2025-03-17 21:20:53 +0000 UTC" firstStartedPulling="2025-03-17 21:20:53.779338179 +0000 UTC m=+14.244228320" lastFinishedPulling="2025-03-17 21:21:09.189244762 +0000 UTC m=+29.654134903" observedRunningTime="2025-03-17 21:21:13.957965118 +0000 UTC m=+34.422855273" watchObservedRunningTime="2025-03-17 21:21:13.959337429 +0000 UTC m=+34.424227581" Mar 17 21:21:14.837617 systemd[1]: Started sshd@9-10.230.48.190:22-177.12.2.75:60774.service.
Mar 17 21:21:15.033398 env[1202]: time="2025-03-17T21:21:15.033309844Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:15.035199 env[1202]: time="2025-03-17T21:21:15.035158226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:15.037452 env[1202]: time="2025-03-17T21:21:15.037405770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 21:21:15.038987 env[1202]: time="2025-03-17T21:21:15.038329535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 21:21:15.044011 env[1202]: time="2025-03-17T21:21:15.043948398Z" level=info msg="CreateContainer within sandbox \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 21:21:15.061082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795201825.mount: Deactivated successfully. Mar 17 21:21:15.069434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762018154.mount: Deactivated successfully. 
Mar 17 21:21:15.075221 env[1202]: time="2025-03-17T21:21:15.075140445Z" level=info msg="CreateContainer within sandbox \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\"" Mar 17 21:21:15.076716 env[1202]: time="2025-03-17T21:21:15.076402176Z" level=info msg="StartContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\"" Mar 17 21:21:15.111309 systemd[1]: Started cri-containerd-d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796.scope. Mar 17 21:21:15.168050 env[1202]: time="2025-03-17T21:21:15.166001887Z" level=info msg="StartContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" returns successfully" Mar 17 21:21:15.941693 sshd[2787]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=177.12.2.75 user=root Mar 17 21:21:18.157450 sshd[2787]: Failed password for root from 177.12.2.75 port 60774 ssh2 Mar 17 21:21:19.010961 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 21:21:19.014069 systemd-networkd[1026]: cilium_host: Link UP Mar 17 21:21:19.016258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 21:21:19.017648 systemd-networkd[1026]: cilium_net: Link UP Mar 17 21:21:19.019228 systemd-networkd[1026]: cilium_net: Gained carrier Mar 17 21:21:19.024424 systemd-networkd[1026]: cilium_host: Gained carrier Mar 17 21:21:19.204905 systemd-networkd[1026]: cilium_vxlan: Link UP Mar 17 21:21:19.204915 systemd-networkd[1026]: cilium_vxlan: Gained carrier Mar 17 21:21:19.495713 systemd-networkd[1026]: cilium_host: Gained IPv6LL Mar 17 21:21:19.751672 systemd-networkd[1026]: cilium_net: Gained IPv6LL Mar 17 21:21:19.844193 kernel: NET: Registered PF_ALG protocol family Mar 17 21:21:19.863624 sshd[2787]: Received disconnect from 177.12.2.75 port 60774:11: Bye Bye [preauth]
Mar 17 21:21:19.863624 sshd[2787]: Disconnected from authenticating user root 177.12.2.75 port 60774 [preauth] Mar 17 21:21:19.862012 systemd[1]: sshd@9-10.230.48.190:22-177.12.2.75:60774.service: Deactivated successfully. Mar 17 21:21:20.934977 systemd-networkd[1026]: lxc_health: Link UP Mar 17 21:21:20.952231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 21:21:20.955961 systemd-networkd[1026]: lxc_health: Gained carrier Mar 17 21:21:20.968892 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL Mar 17 21:21:21.326794 systemd-networkd[1026]: lxc385fc7ff2821: Link UP Mar 17 21:21:21.341350 kernel: eth0: renamed from tmp7949a Mar 17 21:21:21.354223 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc385fc7ff2821: link becomes ready Mar 17 21:21:21.354611 systemd-networkd[1026]: lxc385fc7ff2821: Gained carrier Mar 17 21:21:21.394145 systemd-networkd[1026]: lxcd8055ee8f1e4: Link UP Mar 17 21:21:21.399301 kernel: eth0: renamed from tmpbe273 Mar 17 21:21:21.403891 systemd-networkd[1026]: lxcd8055ee8f1e4: Gained carrier Mar 17 21:21:21.404138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd8055ee8f1e4: link becomes ready Mar 17 21:21:21.684464 kubelet[2028]: I0317 21:21:21.684352 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mwwln" podStartSLOduration=7.641944983 podStartE2EDuration="28.684306518s" podCreationTimestamp="2025-03-17 21:20:53 +0000 UTC" firstStartedPulling="2025-03-17 21:20:53.998248738 +0000 UTC m=+14.463138879" lastFinishedPulling="2025-03-17 21:21:15.040610269 +0000 UTC m=+35.505500414" observedRunningTime="2025-03-17 21:21:16.024712756 +0000 UTC m=+36.489602906" watchObservedRunningTime="2025-03-17 21:21:21.684306518 +0000 UTC m=+42.149196668" Mar 17 21:21:22.951431 systemd-networkd[1026]: lxc_health: Gained IPv6LL Mar 17 21:21:23.207414 systemd-networkd[1026]: lxcd8055ee8f1e4: Gained IPv6LL
Mar 17 21:21:23.399425 systemd-networkd[1026]: lxc385fc7ff2821: Gained IPv6LL Mar 17 21:21:27.065240 env[1202]: time="2025-03-17T21:21:27.064953666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:21:27.066382 env[1202]: time="2025-03-17T21:21:27.066326102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:21:27.066581 env[1202]: time="2025-03-17T21:21:27.066527667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:21:27.067732 env[1202]: time="2025-03-17T21:21:27.067591374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2 pid=3223 runtime=io.containerd.runc.v2 Mar 17 21:21:27.074054 env[1202]: time="2025-03-17T21:21:27.073938318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:21:27.075194 env[1202]: time="2025-03-17T21:21:27.075128158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:21:27.075313 env[1202]: time="2025-03-17T21:21:27.075239650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:21:27.075971 env[1202]: time="2025-03-17T21:21:27.075908356Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b pid=3232 runtime=io.containerd.runc.v2 Mar 17 21:21:27.124874 systemd[1]: Started cri-containerd-be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b.scope.
Mar 17 21:21:27.131426 systemd[1]: run-containerd-runc-k8s.io-be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b-runc.HeVbwT.mount: Deactivated successfully. Mar 17 21:21:27.161801 systemd[1]: Started cri-containerd-7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2.scope. Mar 17 21:21:27.316591 env[1202]: time="2025-03-17T21:21:27.315386915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fr4t,Uid:cd82fb0c-e95d-477c-b86a-e0caef81acd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b\"" Mar 17 21:21:27.316591 env[1202]: time="2025-03-17T21:21:27.316333610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tsg58,Uid:17e60bf9-ddbf-4fe8-9602-80c037feee12,Namespace:kube-system,Attempt:0,} returns sandbox id \"7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2\"" Mar 17 21:21:27.333438 env[1202]: time="2025-03-17T21:21:27.333371726Z" level=info msg="CreateContainer within sandbox \"7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 21:21:27.333996 env[1202]: time="2025-03-17T21:21:27.333950615Z" level=info msg="CreateContainer within sandbox \"be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 21:21:27.360877 env[1202]: time="2025-03-17T21:21:27.360767957Z" level=info msg="CreateContainer within sandbox \"be273a0e5c659b793e6f02c6ea397923efb364972a9928e0473e3d66590d4e0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3fdadcd73a302b3abc16c1cd54b90402e2437f96b83b68a6913911878d982fa\"" Mar 17 21:21:27.366354 env[1202]: time="2025-03-17T21:21:27.365196375Z" level=info msg="CreateContainer within sandbox \"7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0699e5421ea22c1eb04a1d57af21d35145f15b414bdf958bba057d94364d8927\""
Mar 17 21:21:27.366624 env[1202]: time="2025-03-17T21:21:27.366571021Z" level=info msg="StartContainer for \"e3fdadcd73a302b3abc16c1cd54b90402e2437f96b83b68a6913911878d982fa\"" Mar 17 21:21:27.367734 env[1202]: time="2025-03-17T21:21:27.367691735Z" level=info msg="StartContainer for \"0699e5421ea22c1eb04a1d57af21d35145f15b414bdf958bba057d94364d8927\"" Mar 17 21:21:27.399640 systemd[1]: Started cri-containerd-0699e5421ea22c1eb04a1d57af21d35145f15b414bdf958bba057d94364d8927.scope. Mar 17 21:21:27.415035 systemd[1]: Started cri-containerd-e3fdadcd73a302b3abc16c1cd54b90402e2437f96b83b68a6913911878d982fa.scope. Mar 17 21:21:27.479866 env[1202]: time="2025-03-17T21:21:27.479808823Z" level=info msg="StartContainer for \"0699e5421ea22c1eb04a1d57af21d35145f15b414bdf958bba057d94364d8927\" returns successfully" Mar 17 21:21:27.500464 env[1202]: time="2025-03-17T21:21:27.500387564Z" level=info msg="StartContainer for \"e3fdadcd73a302b3abc16c1cd54b90402e2437f96b83b68a6913911878d982fa\" returns successfully" Mar 17 21:21:28.011146 kubelet[2028]: I0317 21:21:28.011006 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6fr4t" podStartSLOduration=35.010940965 podStartE2EDuration="35.010940965s" podCreationTimestamp="2025-03-17 21:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:21:28.007073929 +0000 UTC m=+48.471964080" watchObservedRunningTime="2025-03-17 21:21:28.010940965 +0000 UTC m=+48.475831112" Mar 17 21:21:28.030788 kubelet[2028]: I0317 21:21:28.030691 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tsg58" podStartSLOduration=35.030668754 podStartE2EDuration="35.030668754s" podCreationTimestamp="2025-03-17 21:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:21:28.029367755 +0000 UTC m=+48.494257911" watchObservedRunningTime="2025-03-17 21:21:28.030668754 +0000 UTC m=+48.495558908"
Mar 17 21:21:28.085118 systemd[1]: run-containerd-runc-k8s.io-7949ad4c4e5697024a95d996ef590666309fb20c75707082322922364da650f2-runc.YmyQnc.mount: Deactivated successfully. Mar 17 21:21:43.108406 systemd[1]: Started sshd@10-10.230.48.190:22-198.20.252.107:35524.service. Mar 17 21:21:43.959124 sshd[3378]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.252.107 user=root Mar 17 21:21:45.685024 sshd[3378]: Failed password for root from 198.20.252.107 port 35524 ssh2 Mar 17 21:21:45.963960 sshd[3378]: Received disconnect from 198.20.252.107 port 35524:11: Bye Bye [preauth] Mar 17 21:21:45.963960 sshd[3378]: Disconnected from authenticating user root 198.20.252.107 port 35524 [preauth] Mar 17 21:21:45.965605 systemd[1]: sshd@10-10.230.48.190:22-198.20.252.107:35524.service: Deactivated successfully. Mar 17 21:21:51.769692 systemd[1]: Started sshd@11-10.230.48.190:22-139.178.89.65:55848.service. Mar 17 21:21:52.660681 sshd[3384]: Accepted publickey for core from 139.178.89.65 port 55848 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:21:52.663380 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:21:52.672566 systemd-logind[1190]: New session 6 of user core. Mar 17 21:21:52.674683 systemd[1]: Started session-6.scope. Mar 17 21:21:53.489263 sshd[3384]: pam_unix(sshd:session): session closed for user core Mar 17 21:21:53.496561 systemd[1]: sshd@11-10.230.48.190:22-139.178.89.65:55848.service: Deactivated successfully. Mar 17 21:21:53.497630 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 21:21:53.498702 systemd-logind[1190]: Session 6 logged out. Waiting for processes to exit.
Mar 17 21:21:53.500550 systemd-logind[1190]: Removed session 6. Mar 17 21:21:56.956033 systemd[1]: Started sshd@12-10.230.48.190:22-134.209.151.205:35448.service. Mar 17 21:21:57.891598 sshd[3399]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=134.209.151.205 user=root Mar 17 21:21:58.635403 systemd[1]: Started sshd@13-10.230.48.190:22-139.178.89.65:55858.service. Mar 17 21:21:59.522992 sshd[3402]: Accepted publickey for core from 139.178.89.65 port 55858 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:21:59.526177 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:21:59.533752 systemd[1]: Started session-7.scope. Mar 17 21:21:59.535179 systemd-logind[1190]: New session 7 of user core. Mar 17 21:22:00.209957 sshd[3399]: Failed password for root from 134.209.151.205 port 35448 ssh2 Mar 17 21:22:00.264565 sshd[3402]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:00.269927 systemd-logind[1190]: Session 7 logged out. Waiting for processes to exit. Mar 17 21:22:00.270232 systemd[1]: sshd@13-10.230.48.190:22-139.178.89.65:55858.service: Deactivated successfully. Mar 17 21:22:00.271207 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 21:22:00.272254 systemd-logind[1190]: Removed session 7. Mar 17 21:22:01.789726 sshd[3399]: Received disconnect from 134.209.151.205 port 35448:11: Bye Bye [preauth] Mar 17 21:22:01.789726 sshd[3399]: Disconnected from authenticating user root 134.209.151.205 port 35448 [preauth] Mar 17 21:22:01.791785 systemd[1]: sshd@12-10.230.48.190:22-134.209.151.205:35448.service: Deactivated successfully. Mar 17 21:22:05.410157 systemd[1]: Started sshd@14-10.230.48.190:22-139.178.89.65:44566.service. 
Mar 17 21:22:06.295836 sshd[3416]: Accepted publickey for core from 139.178.89.65 port 44566 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:22:06.298638 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:22:06.304603 systemd-logind[1190]: New session 8 of user core. Mar 17 21:22:06.308384 systemd[1]: Started session-8.scope. Mar 17 21:22:06.988638 sshd[3416]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:06.993030 systemd[1]: sshd@14-10.230.48.190:22-139.178.89.65:44566.service: Deactivated successfully. Mar 17 21:22:06.994065 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 21:22:06.995762 systemd-logind[1190]: Session 8 logged out. Waiting for processes to exit. Mar 17 21:22:06.996890 systemd-logind[1190]: Removed session 8. Mar 17 21:22:12.138055 systemd[1]: Started sshd@15-10.230.48.190:22-139.178.89.65:51006.service. Mar 17 21:22:13.027415 sshd[3429]: Accepted publickey for core from 139.178.89.65 port 51006 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:22:13.030429 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:22:13.037171 systemd-logind[1190]: New session 9 of user core. Mar 17 21:22:13.037436 systemd[1]: Started session-9.scope. Mar 17 21:22:13.736757 sshd[3429]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:13.740314 systemd[1]: sshd@15-10.230.48.190:22-139.178.89.65:51006.service: Deactivated successfully. Mar 17 21:22:13.741717 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 21:22:13.742568 systemd-logind[1190]: Session 9 logged out. Waiting for processes to exit. Mar 17 21:22:13.743844 systemd-logind[1190]: Removed session 9. Mar 17 21:22:13.885569 systemd[1]: Started sshd@16-10.230.48.190:22-139.178.89.65:51010.service. 
Mar 17 21:22:14.779789 sshd[3441]: Accepted publickey for core from 139.178.89.65 port 51010 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:22:14.782711 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:22:14.790704 systemd[1]: Started session-10.scope. Mar 17 21:22:14.791777 systemd-logind[1190]: New session 10 of user core. Mar 17 21:22:15.555877 sshd[3441]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:15.559925 systemd[1]: sshd@16-10.230.48.190:22-139.178.89.65:51010.service: Deactivated successfully. Mar 17 21:22:15.561110 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 21:22:15.561988 systemd-logind[1190]: Session 10 logged out. Waiting for processes to exit. Mar 17 21:22:15.563347 systemd-logind[1190]: Removed session 10. Mar 17 21:22:15.703917 systemd[1]: Started sshd@17-10.230.48.190:22-139.178.89.65:51012.service. Mar 17 21:22:16.594997 sshd[3452]: Accepted publickey for core from 139.178.89.65 port 51012 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:22:16.597192 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:22:16.605742 systemd-logind[1190]: New session 11 of user core. Mar 17 21:22:16.606740 systemd[1]: Started session-11.scope. Mar 17 21:22:17.313592 sshd[3452]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:17.317618 systemd[1]: sshd@17-10.230.48.190:22-139.178.89.65:51012.service: Deactivated successfully. Mar 17 21:22:17.318596 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 21:22:17.319545 systemd-logind[1190]: Session 11 logged out. Waiting for processes to exit. Mar 17 21:22:17.320669 systemd-logind[1190]: Removed session 11. Mar 17 21:22:22.461501 systemd[1]: Started sshd@18-10.230.48.190:22-139.178.89.65:37420.service. 
Mar 17 21:22:23.349340 sshd[3465]: Accepted publickey for core from 139.178.89.65 port 37420 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:22:23.352438 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:22:23.359992 systemd-logind[1190]: New session 12 of user core. Mar 17 21:22:23.360756 systemd[1]: Started session-12.scope. Mar 17 21:22:23.542688 systemd[1]: Started sshd@19-10.230.48.190:22-143.110.184.217:38456.service. Mar 17 21:22:24.070921 sshd[3465]: pam_unix(sshd:session): session closed for user core Mar 17 21:22:24.075272 systemd[1]: sshd@18-10.230.48.190:22-139.178.89.65:37420.service: Deactivated successfully. Mar 17 21:22:24.076565 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 21:22:24.077488 systemd-logind[1190]: Session 12 logged out. Waiting for processes to exit. Mar 17 21:22:24.079297 systemd-logind[1190]: Removed session 12. Mar 17 21:22:24.310985 systemd[1]: Started sshd@20-10.230.48.190:22-103.212.211.155:59820.service. 
Mar 17 21:22:24.445523 sshd[3469]: Invalid user palworld from 143.110.184.217 port 38456
Mar 17 21:22:24.726661 sshd[3469]: pam_faillock(sshd:auth): User unknown
Mar 17 21:22:24.727638 sshd[3469]: pam_unix(sshd:auth): check pass; user unknown
Mar 17 21:22:24.727705 sshd[3469]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217
Mar 17 21:22:24.728748 sshd[3469]: pam_faillock(sshd:auth): User unknown
Mar 17 21:22:26.750078 sshd[3469]: Failed password for invalid user palworld from 143.110.184.217 port 38456 ssh2
Mar 17 21:22:27.316861 sshd[3484]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.212.211.155 user=root
Mar 17 21:22:27.317075 sshd[3484]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Mar 17 21:22:28.506627 sshd[3469]: Connection closed by invalid user palworld 143.110.184.217 port 38456 [preauth]
Mar 17 21:22:28.509034 systemd[1]: sshd@19-10.230.48.190:22-143.110.184.217:38456.service: Deactivated successfully.
Mar 17 21:22:29.221206 systemd[1]: Started sshd@21-10.230.48.190:22-139.178.89.65:37422.service.
Mar 17 21:22:29.418483 sshd[3484]: Failed password for root from 103.212.211.155 port 59820 ssh2
Mar 17 21:22:30.117036 sshd[3488]: Accepted publickey for core from 139.178.89.65 port 37422 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:30.119041 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:30.126871 systemd[1]: Started session-13.scope.
Mar 17 21:22:30.127342 systemd-logind[1190]: New session 13 of user core.
Mar 17 21:22:30.828231 sshd[3488]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:30.832200 systemd-logind[1190]: Session 13 logged out. Waiting for processes to exit.
Mar 17 21:22:30.832726 systemd[1]: sshd@21-10.230.48.190:22-139.178.89.65:37422.service: Deactivated successfully.
Mar 17 21:22:30.833648 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 21:22:30.834589 systemd-logind[1190]: Removed session 13.
Mar 17 21:22:30.975328 systemd[1]: Started sshd@22-10.230.48.190:22-139.178.89.65:37432.service.
Mar 17 21:22:31.509521 sshd[3484]: Received disconnect from 103.212.211.155 port 59820:11: Bye Bye [preauth]
Mar 17 21:22:31.510076 sshd[3484]: Disconnected from authenticating user root 103.212.211.155 port 59820 [preauth]
Mar 17 21:22:31.511762 systemd[1]: sshd@20-10.230.48.190:22-103.212.211.155:59820.service: Deactivated successfully.
Mar 17 21:22:31.867684 sshd[3500]: Accepted publickey for core from 139.178.89.65 port 37432 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:31.866996 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:31.873302 systemd-logind[1190]: New session 14 of user core.
Mar 17 21:22:31.874294 systemd[1]: Started session-14.scope.
Mar 17 21:22:32.959794 sshd[3500]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:32.964368 systemd-logind[1190]: Session 14 logged out. Waiting for processes to exit.
Mar 17 21:22:32.965696 systemd[1]: sshd@22-10.230.48.190:22-139.178.89.65:37432.service: Deactivated successfully.
Mar 17 21:22:32.966767 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 21:22:32.967974 systemd-logind[1190]: Removed session 14.
Mar 17 21:22:33.107494 systemd[1]: Started sshd@23-10.230.48.190:22-139.178.89.65:33200.service.
Mar 17 21:22:33.997746 sshd[3510]: Accepted publickey for core from 139.178.89.65 port 33200 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:33.999769 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:34.008173 systemd[1]: Started session-15.scope.
Mar 17 21:22:34.008724 systemd-logind[1190]: New session 15 of user core.
Mar 17 21:22:35.417451 systemd[1]: Started sshd@24-10.230.48.190:22-143.110.184.217:50868.service.
Mar 17 21:22:37.000038 sshd[3510]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:37.004408 systemd[1]: sshd@23-10.230.48.190:22-139.178.89.65:33200.service: Deactivated successfully.
Mar 17 21:22:37.005468 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 21:22:37.006352 systemd-logind[1190]: Session 15 logged out. Waiting for processes to exit.
Mar 17 21:22:37.007760 systemd-logind[1190]: Removed session 15.
Mar 17 21:22:37.146683 systemd[1]: Started sshd@25-10.230.48.190:22-139.178.89.65:33210.service.
Mar 17 21:22:37.866257 sshd[3522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217 user=root
Mar 17 21:22:38.040705 sshd[3530]: Accepted publickey for core from 139.178.89.65 port 33210 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:38.042734 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:38.049508 systemd-logind[1190]: New session 16 of user core.
Mar 17 21:22:38.050420 systemd[1]: Started session-16.scope.
Mar 17 21:22:38.996768 sshd[3530]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:39.001174 systemd-logind[1190]: Session 16 logged out. Waiting for processes to exit.
Mar 17 21:22:39.001689 systemd[1]: sshd@25-10.230.48.190:22-139.178.89.65:33210.service: Deactivated successfully.
Mar 17 21:22:39.002721 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 21:22:39.003937 systemd-logind[1190]: Removed session 16.
Mar 17 21:22:39.145810 systemd[1]: Started sshd@26-10.230.48.190:22-139.178.89.65:33222.service.
Mar 17 21:22:39.671902 sshd[3522]: Failed password for root from 143.110.184.217 port 50868 ssh2
Mar 17 21:22:40.040776 sshd[3539]: Accepted publickey for core from 139.178.89.65 port 33222 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:40.042739 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:40.050510 systemd[1]: Started session-17.scope.
Mar 17 21:22:40.052223 systemd-logind[1190]: New session 17 of user core.
Mar 17 21:22:40.239008 sshd[3522]: Connection closed by authenticating user root 143.110.184.217 port 50868 [preauth]
Mar 17 21:22:40.240742 systemd[1]: sshd@24-10.230.48.190:22-143.110.184.217:50868.service: Deactivated successfully.
Mar 17 21:22:40.771883 sshd[3539]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:40.775428 systemd[1]: sshd@26-10.230.48.190:22-139.178.89.65:33222.service: Deactivated successfully.
Mar 17 21:22:40.776402 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 21:22:40.777203 systemd-logind[1190]: Session 17 logged out. Waiting for processes to exit.
Mar 17 21:22:40.778363 systemd-logind[1190]: Removed session 17.
Mar 17 21:22:45.919167 systemd[1]: Started sshd@27-10.230.48.190:22-139.178.89.65:33628.service.
Mar 17 21:22:46.708498 systemd[1]: Started sshd@28-10.230.48.190:22-198.20.252.107:59076.service.
Mar 17 21:22:46.805319 sshd[3555]: Accepted publickey for core from 139.178.89.65 port 33628 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:46.807959 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:46.815599 systemd[1]: Started session-18.scope.
Mar 17 21:22:46.815961 systemd-logind[1190]: New session 18 of user core.
Mar 17 21:22:47.330783 systemd[1]: Started sshd@29-10.230.48.190:22-143.110.184.217:33962.service.
Mar 17 21:22:47.520893 sshd[3555]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:47.525036 systemd-logind[1190]: Session 18 logged out. Waiting for processes to exit.
Mar 17 21:22:47.525601 systemd[1]: sshd@27-10.230.48.190:22-139.178.89.65:33628.service: Deactivated successfully.
Mar 17 21:22:47.526644 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 21:22:47.528004 systemd-logind[1190]: Removed session 18.
Mar 17 21:22:47.570940 sshd[3561]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=198.20.252.107 user=root
Mar 17 21:22:49.415864 sshd[3561]: Failed password for root from 198.20.252.107 port 59076 ssh2
Mar 17 21:22:49.585217 sshd[3561]: Received disconnect from 198.20.252.107 port 59076:11: Bye Bye [preauth]
Mar 17 21:22:49.585217 sshd[3561]: Disconnected from authenticating user root 198.20.252.107 port 59076 [preauth]
Mar 17 21:22:49.586675 systemd[1]: sshd@28-10.230.48.190:22-198.20.252.107:59076.service: Deactivated successfully.
Mar 17 21:22:50.050241 sshd[3572]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217 user=root
Mar 17 21:22:52.642656 sshd[3572]: Failed password for root from 143.110.184.217 port 33962 ssh2
Mar 17 21:22:52.672328 systemd[1]: Started sshd@30-10.230.48.190:22-139.178.89.65:41270.service.
Mar 17 21:22:53.566703 sshd[3577]: Accepted publickey for core from 139.178.89.65 port 41270 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:22:53.569599 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:22:53.577573 systemd[1]: Started session-19.scope.
Mar 17 21:22:53.579392 systemd-logind[1190]: New session 19 of user core.
Mar 17 21:22:54.264486 sshd[3577]: pam_unix(sshd:session): session closed for user core
Mar 17 21:22:54.268611 systemd-logind[1190]: Session 19 logged out. Waiting for processes to exit.
Mar 17 21:22:54.270021 systemd[1]: sshd@30-10.230.48.190:22-139.178.89.65:41270.service: Deactivated successfully.
Mar 17 21:22:54.270948 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 21:22:54.271528 systemd-logind[1190]: Removed session 19.
Mar 17 21:22:54.284591 sshd[3572]: Connection closed by authenticating user root 143.110.184.217 port 33962 [preauth]
Mar 17 21:22:54.285722 systemd[1]: sshd@29-10.230.48.190:22-143.110.184.217:33962.service: Deactivated successfully.
Mar 17 21:22:55.414286 systemd[1]: Started sshd@31-10.230.48.190:22-177.12.2.75:56082.service.
Mar 17 21:22:56.679172 sshd[3592]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=177.12.2.75 user=root
Mar 17 21:22:58.761153 sshd[3592]: Failed password for root from 177.12.2.75 port 56082 ssh2
Mar 17 21:22:59.396689 systemd[1]: Started sshd@32-10.230.48.190:22-143.110.184.217:58944.service.
Mar 17 21:22:59.411692 systemd[1]: Started sshd@33-10.230.48.190:22-139.178.89.65:41274.service.
Mar 17 21:23:00.300499 sshd[3598]: Accepted publickey for core from 139.178.89.65 port 41274 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:23:00.303382 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:23:00.313699 systemd-logind[1190]: New session 20 of user core.
Mar 17 21:23:00.314542 systemd[1]: Started session-20.scope.
Mar 17 21:23:00.353167 sshd[3595]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217 user=root
Mar 17 21:23:01.001364 sshd[3598]: pam_unix(sshd:session): session closed for user core
Mar 17 21:23:01.004997 systemd[1]: sshd@33-10.230.48.190:22-139.178.89.65:41274.service: Deactivated successfully.
Mar 17 21:23:01.006463 systemd-logind[1190]: Session 20 logged out. Waiting for processes to exit.
Mar 17 21:23:01.006556 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 21:23:01.008154 systemd-logind[1190]: Removed session 20.
Mar 17 21:23:01.008922 sshd[3592]: Received disconnect from 177.12.2.75 port 56082:11: Bye Bye [preauth]
Mar 17 21:23:01.008922 sshd[3592]: Disconnected from authenticating user root 177.12.2.75 port 56082 [preauth]
Mar 17 21:23:01.010366 systemd[1]: sshd@31-10.230.48.190:22-177.12.2.75:56082.service: Deactivated successfully.
Mar 17 21:23:01.147634 systemd[1]: Started sshd@34-10.230.48.190:22-139.178.89.65:41276.service.
Mar 17 21:23:01.983486 sshd[3595]: Failed password for root from 143.110.184.217 port 58944 ssh2
Mar 17 21:23:02.028510 sshd[3611]: Accepted publickey for core from 139.178.89.65 port 41276 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:23:02.030426 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:23:02.038207 systemd-logind[1190]: New session 21 of user core.
Mar 17 21:23:02.038246 systemd[1]: Started session-21.scope.
Mar 17 21:23:02.453377 sshd[3595]: Connection closed by authenticating user root 143.110.184.217 port 58944 [preauth]
Mar 17 21:23:02.455096 systemd[1]: sshd@32-10.230.48.190:22-143.110.184.217:58944.service: Deactivated successfully.
Mar 17 21:23:04.060570 env[1202]: time="2025-03-17T21:23:04.060485483Z" level=info msg="StopContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" with timeout 30 (s)"
Mar 17 21:23:04.062226 env[1202]: time="2025-03-17T21:23:04.062162457Z" level=info msg="Stop container \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" with signal terminated"
Mar 17 21:23:04.095055 systemd[1]: run-containerd-runc-k8s.io-8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df-runc.gWCdRL.mount: Deactivated successfully.
Mar 17 21:23:04.111613 systemd[1]: cri-containerd-d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796.scope: Deactivated successfully.
Mar 17 21:23:04.146569 env[1202]: time="2025-03-17T21:23:04.146451857Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 21:23:04.153751 env[1202]: time="2025-03-17T21:23:04.153702204Z" level=info msg="StopContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" with timeout 2 (s)"
Mar 17 21:23:04.154210 env[1202]: time="2025-03-17T21:23:04.154170367Z" level=info msg="Stop container \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" with signal terminated"
Mar 17 21:23:04.175227 systemd-networkd[1026]: lxc_health: Link DOWN
Mar 17 21:23:04.175239 systemd-networkd[1026]: lxc_health: Lost carrier
Mar 17 21:23:04.188756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796-rootfs.mount: Deactivated successfully.
Mar 17 21:23:04.225702 systemd[1]: cri-containerd-8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df.scope: Deactivated successfully.
Mar 17 21:23:04.226191 systemd[1]: cri-containerd-8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df.scope: Consumed 10.121s CPU time.
Mar 17 21:23:04.227449 env[1202]: time="2025-03-17T21:23:04.225028961Z" level=info msg="shim disconnected" id=d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796
Mar 17 21:23:04.227449 env[1202]: time="2025-03-17T21:23:04.227177277Z" level=warning msg="cleaning up after shim disconnected" id=d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796 namespace=k8s.io
Mar 17 21:23:04.227449 env[1202]: time="2025-03-17T21:23:04.227245548Z" level=info msg="cleaning up dead shim"
Mar 17 21:23:04.258348 env[1202]: time="2025-03-17T21:23:04.258274572Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3667 runtime=io.containerd.runc.v2\n"
Mar 17 21:23:04.260555 env[1202]: time="2025-03-17T21:23:04.260512903Z" level=info msg="StopContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" returns successfully"
Mar 17 21:23:04.261578 env[1202]: time="2025-03-17T21:23:04.261537685Z" level=info msg="StopPodSandbox for \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\""
Mar 17 21:23:04.261721 env[1202]: time="2025-03-17T21:23:04.261683996Z" level=info msg="Container to stop \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.264559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38-shm.mount: Deactivated successfully.
Mar 17 21:23:04.280972 env[1202]: time="2025-03-17T21:23:04.280893621Z" level=info msg="shim disconnected" id=8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df
Mar 17 21:23:04.280972 env[1202]: time="2025-03-17T21:23:04.280973786Z" level=warning msg="cleaning up after shim disconnected" id=8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df namespace=k8s.io
Mar 17 21:23:04.281451 env[1202]: time="2025-03-17T21:23:04.280990067Z" level=info msg="cleaning up dead shim"
Mar 17 21:23:04.284096 systemd[1]: cri-containerd-035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38.scope: Deactivated successfully.
Mar 17 21:23:04.301588 env[1202]: time="2025-03-17T21:23:04.301524632Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n"
Mar 17 21:23:04.309891 env[1202]: time="2025-03-17T21:23:04.309833742Z" level=info msg="StopContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" returns successfully"
Mar 17 21:23:04.311100 env[1202]: time="2025-03-17T21:23:04.310990391Z" level=info msg="StopPodSandbox for \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\""
Mar 17 21:23:04.311366 env[1202]: time="2025-03-17T21:23:04.311329265Z" level=info msg="Container to stop \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.311576 env[1202]: time="2025-03-17T21:23:04.311530295Z" level=info msg="Container to stop \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.311733 env[1202]: time="2025-03-17T21:23:04.311698907Z" level=info msg="Container to stop \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.311915 env[1202]: time="2025-03-17T21:23:04.311880747Z" level=info msg="Container to stop \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.312161 env[1202]: time="2025-03-17T21:23:04.312118741Z" level=info msg="Container to stop \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:23:04.322923 systemd[1]: cri-containerd-add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc.scope: Deactivated successfully.
Mar 17 21:23:04.343172 env[1202]: time="2025-03-17T21:23:04.343077722Z" level=info msg="shim disconnected" id=035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38
Mar 17 21:23:04.343172 env[1202]: time="2025-03-17T21:23:04.343164809Z" level=warning msg="cleaning up after shim disconnected" id=035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38 namespace=k8s.io
Mar 17 21:23:04.343172 env[1202]: time="2025-03-17T21:23:04.343182846Z" level=info msg="cleaning up dead shim"
Mar 17 21:23:04.354514 env[1202]: time="2025-03-17T21:23:04.354448738Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3732 runtime=io.containerd.runc.v2\n"
Mar 17 21:23:04.355671 env[1202]: time="2025-03-17T21:23:04.355629781Z" level=info msg="TearDown network for sandbox \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\" successfully"
Mar 17 21:23:04.355770 env[1202]: time="2025-03-17T21:23:04.355669478Z" level=info msg="StopPodSandbox for \"035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38\" returns successfully"
Mar 17 21:23:04.384921 env[1202]: time="2025-03-17T21:23:04.384841147Z" level=info msg="shim disconnected" id=add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc
Mar 17 21:23:04.385531 env[1202]: time="2025-03-17T21:23:04.385367052Z" level=warning msg="cleaning up after shim disconnected" id=add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc namespace=k8s.io
Mar 17 21:23:04.385659 env[1202]: time="2025-03-17T21:23:04.385629266Z" level=info msg="cleaning up dead shim"
Mar 17 21:23:04.398586 env[1202]: time="2025-03-17T21:23:04.398525427Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n"
Mar 17 21:23:04.399119 env[1202]: time="2025-03-17T21:23:04.399050871Z" level=info msg="TearDown network for sandbox \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" successfully"
Mar 17 21:23:04.399235 env[1202]: time="2025-03-17T21:23:04.399118439Z" level=info msg="StopPodSandbox for \"add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc\" returns successfully"
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.550978 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-kernel\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.551040 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-bpf-maps\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.551116 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-clustermesh-secrets\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.551167 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nh68\" (UniqueName: \"kubernetes.io/projected/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-kube-api-access-4nh68\") pod \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\" (UID: \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\") "
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.551208 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-net\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.551616 kubelet[2028]: I0317 21:23:04.551236 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-config-path\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551299 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-run\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551328 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-cgroup\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551353 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-xtables-lock\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551376 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-lib-modules\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551420 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngl5n\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-kube-api-access-ngl5n\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552528 kubelet[2028]: I0317 21:23:04.551448 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cni-path\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552989 kubelet[2028]: I0317 21:23:04.551477 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-cilium-config-path\") pod \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\" (UID: \"4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61\") "
Mar 17 21:23:04.552989 kubelet[2028]: I0317 21:23:04.551505 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hubble-tls\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552989 kubelet[2028]: I0317 21:23:04.551529 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hostproc\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.552989 kubelet[2028]: I0317 21:23:04.551551 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-etc-cni-netd\") pod \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\" (UID: \"6b4f8c6b-e88d-49fe-bcc7-69b89709e979\") "
Mar 17 21:23:04.560697 kubelet[2028]: I0317 21:23:04.553342 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.571591 kubelet[2028]: I0317 21:23:04.571489 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.571749 kubelet[2028]: I0317 21:23:04.571489 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.571969 kubelet[2028]: I0317 21:23:04.571931 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.573305 kubelet[2028]: I0317 21:23:04.573265 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cni-path" (OuterVolumeSpecName: "cni-path") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.573734 kubelet[2028]: I0317 21:23:04.573701 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.573830 kubelet[2028]: I0317 21:23:04.573751 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.573830 kubelet[2028]: I0317 21:23:04.573782 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.573830 kubelet[2028]: I0317 21:23:04.573813 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.574199 kubelet[2028]: I0317 21:23:04.574156 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hostproc" (OuterVolumeSpecName: "hostproc") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:23:04.575784 kubelet[2028]: I0317 21:23:04.575750 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-kube-api-access-4nh68" (OuterVolumeSpecName: "kube-api-access-4nh68") pod "4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" (UID: "4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61"). InnerVolumeSpecName "kube-api-access-4nh68". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 21:23:04.578631 kubelet[2028]: I0317 21:23:04.578596 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" (UID: "4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 21:23:04.581967 kubelet[2028]: I0317 21:23:04.581933 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-kube-api-access-ngl5n" (OuterVolumeSpecName: "kube-api-access-ngl5n") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "kube-api-access-ngl5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 21:23:04.582662 kubelet[2028]: I0317 21:23:04.582623 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 21:23:04.582766 kubelet[2028]: I0317 21:23:04.582704 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 21:23:04.585029 kubelet[2028]: I0317 21:23:04.584994 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6b4f8c6b-e88d-49fe-bcc7-69b89709e979" (UID: "6b4f8c6b-e88d-49fe-bcc7-69b89709e979"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 21:23:04.651883 kubelet[2028]: I0317 21:23:04.651771 2028 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-kernel\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.651883 kubelet[2028]: I0317 21:23:04.651881 2028 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-bpf-maps\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651903 2028 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-host-proc-sys-net\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651919 2028 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-clustermesh-secrets\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651944 2028 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4nh68\" (UniqueName: \"kubernetes.io/projected/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-kube-api-access-4nh68\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651959 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-run\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651973 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-config-path\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.651990 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cilium-cgroup\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.652014 2028 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-xtables-lock\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652234 kubelet[2028]: I0317 21:23:04.652027 2028 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-cni-path\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652696 kubelet[2028]: I0317 21:23:04.652041 2028 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-lib-modules\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652696 kubelet[2028]: I0317 21:23:04.652055 2028 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ngl5n\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-kube-api-access-ngl5n\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:23:04.652696
kubelet[2028]: I0317 21:23:04.652069 2028 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hubble-tls\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:04.652696 kubelet[2028]: I0317 21:23:04.652083 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61-cilium-config-path\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:04.652696 kubelet[2028]: I0317 21:23:04.652159 2028 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-etc-cni-netd\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:04.652696 kubelet[2028]: I0317 21:23:04.652199 2028 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4f8c6b-e88d-49fe-bcc7-69b89709e979-hostproc\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:04.932048 kubelet[2028]: E0317 21:23:04.931968 2028 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:23:05.085500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df-rootfs.mount: Deactivated successfully. Mar 17 21:23:05.086019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-035ad6df26cae51abf35df713381538db24d88c0cb52ab781f44d3ab34ddcc38-rootfs.mount: Deactivated successfully. Mar 17 21:23:05.086306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc-rootfs.mount: Deactivated successfully. 
Mar 17 21:23:05.086842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-add4caf06a7c1e9b448db664f15fc60a7c488e7253c8d877e93f33b94863c5dc-shm.mount: Deactivated successfully. Mar 17 21:23:05.087297 systemd[1]: var-lib-kubelet-pods-4f97ebd7\x2da7e8\x2d4d72\x2da54a\x2d3564e0f0bb61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nh68.mount: Deactivated successfully. Mar 17 21:23:05.087846 systemd[1]: var-lib-kubelet-pods-6b4f8c6b\x2de88d\x2d49fe\x2dbcc7\x2d69b89709e979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngl5n.mount: Deactivated successfully. Mar 17 21:23:05.088626 systemd[1]: var-lib-kubelet-pods-6b4f8c6b\x2de88d\x2d49fe\x2dbcc7\x2d69b89709e979-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 21:23:05.088970 systemd[1]: var-lib-kubelet-pods-6b4f8c6b\x2de88d\x2d49fe\x2dbcc7\x2d69b89709e979-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:23:05.269336 systemd[1]: Removed slice kubepods-burstable-pod6b4f8c6b_e88d_49fe_bcc7_69b89709e979.slice. Mar 17 21:23:05.269489 systemd[1]: kubepods-burstable-pod6b4f8c6b_e88d_49fe_bcc7_69b89709e979.slice: Consumed 10.298s CPU time. Mar 17 21:23:05.280408 kubelet[2028]: I0317 21:23:05.280374 2028 scope.go:117] "RemoveContainer" containerID="8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df" Mar 17 21:23:05.285532 env[1202]: time="2025-03-17T21:23:05.285349639Z" level=info msg="RemoveContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\"" Mar 17 21:23:05.292877 env[1202]: time="2025-03-17T21:23:05.292827874Z" level=info msg="RemoveContainer for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" returns successfully" Mar 17 21:23:05.297397 systemd[1]: Removed slice kubepods-besteffort-pod4f97ebd7_a7e8_4d72_a54a_3564e0f0bb61.slice. 
Mar 17 21:23:05.297971 kubelet[2028]: I0317 21:23:05.297907 2028 scope.go:117] "RemoveContainer" containerID="7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63" Mar 17 21:23:05.303132 env[1202]: time="2025-03-17T21:23:05.302973414Z" level=info msg="RemoveContainer for \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\"" Mar 17 21:23:05.310746 env[1202]: time="2025-03-17T21:23:05.310669954Z" level=info msg="RemoveContainer for \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\" returns successfully" Mar 17 21:23:05.311082 kubelet[2028]: I0317 21:23:05.311049 2028 scope.go:117] "RemoveContainer" containerID="7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298" Mar 17 21:23:05.313627 env[1202]: time="2025-03-17T21:23:05.312635253Z" level=info msg="RemoveContainer for \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\"" Mar 17 21:23:05.319353 env[1202]: time="2025-03-17T21:23:05.319289630Z" level=info msg="RemoveContainer for \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\" returns successfully" Mar 17 21:23:05.319912 kubelet[2028]: I0317 21:23:05.319882 2028 scope.go:117] "RemoveContainer" containerID="32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376" Mar 17 21:23:05.323697 env[1202]: time="2025-03-17T21:23:05.323653454Z" level=info msg="RemoveContainer for \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\"" Mar 17 21:23:05.327514 env[1202]: time="2025-03-17T21:23:05.327477827Z" level=info msg="RemoveContainer for \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\" returns successfully" Mar 17 21:23:05.328554 kubelet[2028]: I0317 21:23:05.328490 2028 scope.go:117] "RemoveContainer" containerID="bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849" Mar 17 21:23:05.331472 env[1202]: time="2025-03-17T21:23:05.331409686Z" level=info msg="RemoveContainer for 
\"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\"" Mar 17 21:23:05.338280 env[1202]: time="2025-03-17T21:23:05.338232690Z" level=info msg="RemoveContainer for \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\" returns successfully" Mar 17 21:23:05.339440 kubelet[2028]: I0317 21:23:05.339381 2028 scope.go:117] "RemoveContainer" containerID="8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df" Mar 17 21:23:05.340066 env[1202]: time="2025-03-17T21:23:05.339886391Z" level=error msg="ContainerStatus for \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\": not found" Mar 17 21:23:05.341139 kubelet[2028]: E0317 21:23:05.341079 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\": not found" containerID="8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df" Mar 17 21:23:05.341327 kubelet[2028]: I0317 21:23:05.341150 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df"} err="failed to get container status \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e17ad7b9067435af9c47047b2672ada501ff0a27b0c874ee0c38dca304b59df\": not found" Mar 17 21:23:05.341327 kubelet[2028]: I0317 21:23:05.341261 2028 scope.go:117] "RemoveContainer" containerID="7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63" Mar 17 21:23:05.341801 env[1202]: time="2025-03-17T21:23:05.341701052Z" level=error msg="ContainerStatus for 
\"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\": not found" Mar 17 21:23:05.342293 kubelet[2028]: E0317 21:23:05.342262 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\": not found" containerID="7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63" Mar 17 21:23:05.342574 kubelet[2028]: I0317 21:23:05.342512 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63"} err="failed to get container status \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\": rpc error: code = NotFound desc = an error occurred when try to find container \"7738cfc28663fd67aaa158fa0f740c7c08cef368e1f85bca5fb0a58e85554f63\": not found" Mar 17 21:23:05.342719 kubelet[2028]: I0317 21:23:05.342693 2028 scope.go:117] "RemoveContainer" containerID="7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298" Mar 17 21:23:05.343219 env[1202]: time="2025-03-17T21:23:05.343151891Z" level=error msg="ContainerStatus for \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\": not found" Mar 17 21:23:05.343674 kubelet[2028]: E0317 21:23:05.343598 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\": not found" 
containerID="7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298" Mar 17 21:23:05.343912 kubelet[2028]: I0317 21:23:05.343867 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298"} err="failed to get container status \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ec7355b27493b6298b4cb744878df77dcca1d488825209d827c42a72fdc2298\": not found" Mar 17 21:23:05.344082 kubelet[2028]: I0317 21:23:05.344060 2028 scope.go:117] "RemoveContainer" containerID="32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376" Mar 17 21:23:05.344610 env[1202]: time="2025-03-17T21:23:05.344500651Z" level=error msg="ContainerStatus for \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\": not found" Mar 17 21:23:05.344909 kubelet[2028]: E0317 21:23:05.344837 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\": not found" containerID="32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376" Mar 17 21:23:05.345031 kubelet[2028]: I0317 21:23:05.344908 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376"} err="failed to get container status \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\": rpc error: code = NotFound desc = an error occurred when try to find container \"32d24881f8e4ab04d50f733e1b8ff3d0057d1b03a8b4310ccdf31cd77bf6f376\": not found" Mar 17 
21:23:05.345031 kubelet[2028]: I0317 21:23:05.344936 2028 scope.go:117] "RemoveContainer" containerID="bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849" Mar 17 21:23:05.345673 env[1202]: time="2025-03-17T21:23:05.345593721Z" level=error msg="ContainerStatus for \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\": not found" Mar 17 21:23:05.346019 kubelet[2028]: E0317 21:23:05.345978 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\": not found" containerID="bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849" Mar 17 21:23:05.346254 kubelet[2028]: I0317 21:23:05.346019 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849"} err="failed to get container status \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb63bae237a2d559ce56c375495f77864741fc1efe86b22a16d6026ee243a849\": not found" Mar 17 21:23:05.346254 kubelet[2028]: I0317 21:23:05.346043 2028 scope.go:117] "RemoveContainer" containerID="d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796" Mar 17 21:23:05.347579 env[1202]: time="2025-03-17T21:23:05.347478992Z" level=info msg="RemoveContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\"" Mar 17 21:23:05.351699 env[1202]: time="2025-03-17T21:23:05.351575972Z" level=info msg="RemoveContainer for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" returns successfully" Mar 17 21:23:05.352244 kubelet[2028]: I0317 
21:23:05.352218 2028 scope.go:117] "RemoveContainer" containerID="d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796" Mar 17 21:23:05.352616 env[1202]: time="2025-03-17T21:23:05.352533529Z" level=error msg="ContainerStatus for \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\": not found" Mar 17 21:23:05.353157 kubelet[2028]: E0317 21:23:05.353124 2028 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\": not found" containerID="d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796" Mar 17 21:23:05.353299 kubelet[2028]: I0317 21:23:05.353267 2028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796"} err="failed to get container status \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\": rpc error: code = NotFound desc = an error occurred when try to find container \"d215f1397323cb069758d17ec93c2084211db05110c72e58687a1228af3b3796\": not found" Mar 17 21:23:05.773618 kubelet[2028]: I0317 21:23:05.773569 2028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" path="/var/lib/kubelet/pods/4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61/volumes" Mar 17 21:23:05.775393 kubelet[2028]: I0317 21:23:05.775362 2028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" path="/var/lib/kubelet/pods/6b4f8c6b-e88d-49fe-bcc7-69b89709e979/volumes" Mar 17 21:23:06.111019 sshd[3611]: pam_unix(sshd:session): session closed for user core Mar 17 21:23:06.119644 systemd[1]: 
sshd@34-10.230.48.190:22-139.178.89.65:41276.service: Deactivated successfully. Mar 17 21:23:06.120810 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 21:23:06.121704 systemd-logind[1190]: Session 21 logged out. Waiting for processes to exit. Mar 17 21:23:06.123161 systemd-logind[1190]: Removed session 21. Mar 17 21:23:06.257488 systemd[1]: Started sshd@35-10.230.48.190:22-139.178.89.65:49820.service. Mar 17 21:23:07.153201 sshd[3777]: Accepted publickey for core from 139.178.89.65 port 49820 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:23:07.155409 sshd[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:23:07.163624 systemd-logind[1190]: New session 22 of user core. Mar 17 21:23:07.164979 systemd[1]: Started session-22.scope. Mar 17 21:23:07.289650 systemd[1]: Started sshd@36-10.230.48.190:22-104.248.141.166:55060.service. Mar 17 21:23:07.398581 sshd[3781]: Invalid user debian from 104.248.141.166 port 55060 Mar 17 21:23:07.424914 sshd[3781]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:07.426172 sshd[3781]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:07.426226 sshd[3781]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:07.427148 sshd[3781]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:09.032965 kubelet[2028]: I0317 21:23:09.031969 2028 topology_manager.go:215] "Topology Admit Handler" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" podNamespace="kube-system" podName="cilium-92f5g" Mar 17 21:23:09.035785 kubelet[2028]: E0317 21:23:09.035750 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="cilium-agent" Mar 17 21:23:09.035944 kubelet[2028]: E0317 21:23:09.035799 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="apply-sysctl-overwrites" Mar 17 
21:23:09.035944 kubelet[2028]: E0317 21:23:09.035814 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="clean-cilium-state" Mar 17 21:23:09.035944 kubelet[2028]: E0317 21:23:09.035825 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="mount-cgroup" Mar 17 21:23:09.035944 kubelet[2028]: E0317 21:23:09.035835 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="mount-bpf-fs" Mar 17 21:23:09.035944 kubelet[2028]: E0317 21:23:09.035845 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" containerName="cilium-operator" Mar 17 21:23:09.036299 kubelet[2028]: I0317 21:23:09.035942 2028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4f8c6b-e88d-49fe-bcc7-69b89709e979" containerName="cilium-agent" Mar 17 21:23:09.036299 kubelet[2028]: I0317 21:23:09.035963 2028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f97ebd7-a7e8-4d72-a54a-3564e0f0bb61" containerName="cilium-operator" Mar 17 21:23:09.054755 systemd[1]: Created slice kubepods-burstable-pod32df9528_cfcc_4b7b_8a02_04367d105c6f.slice. 
Mar 17 21:23:09.078524 kubelet[2028]: I0317 21:23:09.078456 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-cgroup\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.078836 kubelet[2028]: I0317 21:23:09.078798 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-config-path\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.079073 kubelet[2028]: I0317 21:23:09.079030 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8qw\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-kube-api-access-fn8qw\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.079349 kubelet[2028]: I0317 21:23:09.079313 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-bpf-maps\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.079508 kubelet[2028]: I0317 21:23:09.079473 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-run\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.079693 kubelet[2028]: I0317 21:23:09.079647 2028 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cni-path\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.079866 kubelet[2028]: I0317 21:23:09.079824 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-ipsec-secrets\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080033 kubelet[2028]: I0317 21:23:09.079998 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-xtables-lock\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080222 kubelet[2028]: I0317 21:23:09.080187 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-net\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080415 kubelet[2028]: I0317 21:23:09.080368 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-etc-cni-netd\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080593 kubelet[2028]: I0317 21:23:09.080568 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-clustermesh-secrets\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080767 kubelet[2028]: I0317 21:23:09.080741 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-lib-modules\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.080956 kubelet[2028]: I0317 21:23:09.080919 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-hubble-tls\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.081124 kubelet[2028]: I0317 21:23:09.081074 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-hostproc\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.081318 kubelet[2028]: I0317 21:23:09.081281 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-kernel\") pod \"cilium-92f5g\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " pod="kube-system/cilium-92f5g" Mar 17 21:23:09.157544 sshd[3777]: pam_unix(sshd:session): session closed for user core Mar 17 21:23:09.161446 systemd[1]: sshd@35-10.230.48.190:22-139.178.89.65:49820.service: Deactivated successfully. Mar 17 21:23:09.162579 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 17 21:23:09.162836 systemd[1]: session-22.scope: Consumed 1.257s CPU time. Mar 17 21:23:09.163459 systemd-logind[1190]: Session 22 logged out. Waiting for processes to exit. Mar 17 21:23:09.165120 systemd-logind[1190]: Removed session 22. Mar 17 21:23:09.305797 systemd[1]: Started sshd@37-10.230.48.190:22-139.178.89.65:49832.service. Mar 17 21:23:09.352063 sshd[3781]: Failed password for invalid user debian from 104.248.141.166 port 55060 ssh2 Mar 17 21:23:09.361580 env[1202]: time="2025-03-17T21:23:09.361331127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92f5g,Uid:32df9528-cfcc-4b7b-8a02-04367d105c6f,Namespace:kube-system,Attempt:0,}" Mar 17 21:23:09.395647 env[1202]: time="2025-03-17T21:23:09.395330066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:23:09.395647 env[1202]: time="2025-03-17T21:23:09.395392535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:23:09.395647 env[1202]: time="2025-03-17T21:23:09.395410218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:23:09.396147 env[1202]: time="2025-03-17T21:23:09.396055663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad pid=3802 runtime=io.containerd.runc.v2 Mar 17 21:23:09.414572 systemd[1]: Started cri-containerd-a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad.scope. 
Mar 17 21:23:09.456677 env[1202]: time="2025-03-17T21:23:09.456612573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-92f5g,Uid:32df9528-cfcc-4b7b-8a02-04367d105c6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\"" Mar 17 21:23:09.463541 env[1202]: time="2025-03-17T21:23:09.463476763Z" level=info msg="CreateContainer within sandbox \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:23:09.481106 env[1202]: time="2025-03-17T21:23:09.481000575Z" level=info msg="CreateContainer within sandbox \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\"" Mar 17 21:23:09.486787 env[1202]: time="2025-03-17T21:23:09.483831988Z" level=info msg="StartContainer for \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\"" Mar 17 21:23:09.508735 systemd[1]: Started cri-containerd-6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be.scope. Mar 17 21:23:09.536157 systemd[1]: cri-containerd-6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be.scope: Deactivated successfully. 
Mar 17 21:23:09.559352 env[1202]: time="2025-03-17T21:23:09.558581672Z" level=info msg="shim disconnected" id=6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be Mar 17 21:23:09.559683 env[1202]: time="2025-03-17T21:23:09.559648938Z" level=warning msg="cleaning up after shim disconnected" id=6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be namespace=k8s.io Mar 17 21:23:09.559846 env[1202]: time="2025-03-17T21:23:09.559814213Z" level=info msg="cleaning up dead shim" Mar 17 21:23:09.576964 env[1202]: time="2025-03-17T21:23:09.576901296Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T21:23:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 21:23:09.577741 env[1202]: time="2025-03-17T21:23:09.577548726Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Mar 17 21:23:09.578478 env[1202]: time="2025-03-17T21:23:09.578109300Z" level=error msg="Failed to pipe stdout of container \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\"" error="reading from a closed fifo" Mar 17 21:23:09.578637 env[1202]: time="2025-03-17T21:23:09.578434589Z" level=error msg="Failed to pipe stderr of container \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\"" error="reading from a closed fifo" Mar 17 21:23:09.579796 env[1202]: time="2025-03-17T21:23:09.579740331Z" level=error msg="StartContainer for \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 21:23:09.580169 kubelet[2028]: E0317 21:23:09.580076 2028 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be" Mar 17 21:23:09.585037 kubelet[2028]: E0317 21:23:09.584972 2028 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 21:23:09.585037 kubelet[2028]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 21:23:09.585037 kubelet[2028]: rm /hostbin/cilium-mount Mar 17 21:23:09.585339 kubelet[2028]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn8qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-92f5g_kube-system(32df9528-cfcc-4b7b-8a02-04367d105c6f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 21:23:09.586361 kubelet[2028]: E0317 21:23:09.586287 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-92f5g" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" Mar 17 21:23:09.933357 kubelet[2028]: E0317 21:23:09.933304 2028 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:23:10.203890 sshd[3794]: Accepted publickey for core from 139.178.89.65 port 49832 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:23:10.207246 sshd[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:23:10.215441 systemd[1]: Started session-23.scope. Mar 17 21:23:10.216693 systemd-logind[1190]: New session 23 of user core. Mar 17 21:23:10.286963 sshd[3781]: Connection closed by invalid user debian 104.248.141.166 port 55060 [preauth] Mar 17 21:23:10.288856 systemd[1]: sshd@36-10.230.48.190:22-104.248.141.166:55060.service: Deactivated successfully. Mar 17 21:23:10.312114 env[1202]: time="2025-03-17T21:23:10.307276490Z" level=info msg="CreateContainer within sandbox \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 17 21:23:10.316215 systemd[1]: Started sshd@38-10.230.48.190:22-104.248.141.166:59876.service. Mar 17 21:23:10.339595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764637965.mount: Deactivated successfully. 
Mar 17 21:23:10.348162 env[1202]: time="2025-03-17T21:23:10.348063936Z" level=info msg="CreateContainer within sandbox \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\"" Mar 17 21:23:10.350030 env[1202]: time="2025-03-17T21:23:10.349412075Z" level=info msg="StartContainer for \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\"" Mar 17 21:23:10.387247 systemd[1]: Started cri-containerd-e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee.scope. Mar 17 21:23:10.389864 systemd[1]: Started sshd@39-10.230.48.190:22-143.110.184.217:59108.service. Mar 17 21:23:10.410461 systemd[1]: cri-containerd-e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee.scope: Deactivated successfully. Mar 17 21:23:10.422135 env[1202]: time="2025-03-17T21:23:10.422058264Z" level=info msg="shim disconnected" id=e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee Mar 17 21:23:10.422135 env[1202]: time="2025-03-17T21:23:10.422140435Z" level=warning msg="cleaning up after shim disconnected" id=e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee namespace=k8s.io Mar 17 21:23:10.422743 env[1202]: time="2025-03-17T21:23:10.422157471Z" level=info msg="cleaning up dead shim" Mar 17 21:23:10.431982 env[1202]: time="2025-03-17T21:23:10.431923646Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3905 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T21:23:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 21:23:10.432349 env[1202]: time="2025-03-17T21:23:10.432281638Z" level=error msg="copy 
shim log" error="read /proc/self/fd/42: file already closed" Mar 17 21:23:10.433217 env[1202]: time="2025-03-17T21:23:10.433167585Z" level=error msg="Failed to pipe stderr of container \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\"" error="reading from a closed fifo" Mar 17 21:23:10.433492 env[1202]: time="2025-03-17T21:23:10.433451318Z" level=error msg="Failed to pipe stdout of container \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\"" error="reading from a closed fifo" Mar 17 21:23:10.434870 env[1202]: time="2025-03-17T21:23:10.434825184Z" level=error msg="StartContainer for \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 21:23:10.435468 kubelet[2028]: E0317 21:23:10.435254 2028 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee" Mar 17 21:23:10.437627 kubelet[2028]: E0317 21:23:10.437461 2028 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 21:23:10.437627 kubelet[2028]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 21:23:10.437627 kubelet[2028]: rm /hostbin/cilium-mount Mar 17 21:23:10.437627 kubelet[2028]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn8qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-92f5g_kube-system(32df9528-cfcc-4b7b-8a02-04367d105c6f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 21:23:10.438141 kubelet[2028]: E0317 21:23:10.437521 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-92f5g" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" Mar 17 21:23:10.460712 sshd[3877]: Invalid user debian from 104.248.141.166 port 59876 Mar 17 21:23:10.487475 sshd[3877]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:10.488905 sshd[3877]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:10.489107 sshd[3877]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:10.489937 sshd[3877]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:10.940908 sshd[3794]: pam_unix(sshd:session): session closed for user core Mar 17 21:23:10.944858 systemd[1]: sshd@37-10.230.48.190:22-139.178.89.65:49832.service: Deactivated successfully. Mar 17 21:23:10.945932 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 21:23:10.946872 systemd-logind[1190]: Session 23 logged out. Waiting for processes to exit. Mar 17 21:23:10.948215 systemd-logind[1190]: Removed session 23. Mar 17 21:23:11.087520 systemd[1]: Started sshd@40-10.230.48.190:22-139.178.89.65:49840.service. Mar 17 21:23:11.201597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee-rootfs.mount: Deactivated successfully. 
Mar 17 21:23:11.309277 kubelet[2028]: I0317 21:23:11.309238 2028 scope.go:117] "RemoveContainer" containerID="6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be" Mar 17 21:23:11.309477 env[1202]: time="2025-03-17T21:23:11.309232015Z" level=info msg="StopPodSandbox for \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\"" Mar 17 21:23:11.309477 env[1202]: time="2025-03-17T21:23:11.309316986Z" level=info msg="Container to stop \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:23:11.309477 env[1202]: time="2025-03-17T21:23:11.309399209Z" level=info msg="Container to stop \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:23:11.311930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad-shm.mount: Deactivated successfully. Mar 17 21:23:11.323685 env[1202]: time="2025-03-17T21:23:11.323633160Z" level=info msg="RemoveContainer for \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\"" Mar 17 21:23:11.329166 systemd[1]: cri-containerd-a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad.scope: Deactivated successfully. Mar 17 21:23:11.332449 env[1202]: time="2025-03-17T21:23:11.332404284Z" level=info msg="RemoveContainer for \"6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be\" returns successfully" Mar 17 21:23:11.363666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad-rootfs.mount: Deactivated successfully. 
Mar 17 21:23:11.372708 sshd[3898]: Invalid user amanda from 143.110.184.217 port 59108 Mar 17 21:23:11.376496 env[1202]: time="2025-03-17T21:23:11.376430425Z" level=info msg="shim disconnected" id=a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad Mar 17 21:23:11.376664 env[1202]: time="2025-03-17T21:23:11.376501469Z" level=warning msg="cleaning up after shim disconnected" id=a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad namespace=k8s.io Mar 17 21:23:11.376664 env[1202]: time="2025-03-17T21:23:11.376535882Z" level=info msg="cleaning up dead shim" Mar 17 21:23:11.389427 env[1202]: time="2025-03-17T21:23:11.389319232Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n" Mar 17 21:23:11.389917 env[1202]: time="2025-03-17T21:23:11.389870553Z" level=info msg="TearDown network for sandbox \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" successfully" Mar 17 21:23:11.390000 env[1202]: time="2025-03-17T21:23:11.389912781Z" level=info msg="StopPodSandbox for \"a04cab70c0d035e31ffb8dec9bccb83297144b7d5018584c41ef7836c93db6ad\" returns successfully" Mar 17 21:23:11.503260 kubelet[2028]: I0317 21:23:11.503077 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-kernel\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.503937 kubelet[2028]: I0317 21:23:11.503907 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-net\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.504111 kubelet[2028]: I0317 
21:23:11.504070 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-clustermesh-secrets\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.504275 kubelet[2028]: I0317 21:23:11.504248 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-hubble-tls\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.504996 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-bpf-maps\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505065 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-config-path\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505140 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn8qw\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-kube-api-access-fn8qw\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505173 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cni-path\") pod 
\"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505258 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-run\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505312 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-lib-modules\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505336 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-xtables-lock\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505380 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-etc-cni-netd\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505405 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-hostproc\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505456 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-cgroup\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.505611 kubelet[2028]: I0317 21:23:11.505558 2028 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-ipsec-secrets\") pod \"32df9528-cfcc-4b7b-8a02-04367d105c6f\" (UID: \"32df9528-cfcc-4b7b-8a02-04367d105c6f\") " Mar 17 21:23:11.506476 kubelet[2028]: I0317 21:23:11.503276 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.506557 kubelet[2028]: I0317 21:23:11.503979 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.506719 kubelet[2028]: I0317 21:23:11.506689 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.506863 kubelet[2028]: I0317 21:23:11.506836 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.509640 kubelet[2028]: I0317 21:23:11.509519 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.509813 kubelet[2028]: I0317 21:23:11.509540 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.509944 kubelet[2028]: I0317 21:23:11.509558 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.510051 kubelet[2028]: I0317 21:23:11.509585 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-hostproc" (OuterVolumeSpecName: "hostproc") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.511245 kubelet[2028]: I0317 21:23:11.509603 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.511648 kubelet[2028]: I0317 21:23:11.511601 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 21:23:11.511795 kubelet[2028]: I0317 21:23:11.511769 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cni-path" (OuterVolumeSpecName: "cni-path") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:23:11.514648 systemd[1]: var-lib-kubelet-pods-32df9528\x2dcfcc\x2d4b7b\x2d8a02\x2d04367d105c6f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 21:23:11.516657 kubelet[2028]: I0317 21:23:11.516618 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 21:23:11.520101 systemd[1]: var-lib-kubelet-pods-32df9528\x2dcfcc\x2d4b7b\x2d8a02\x2d04367d105c6f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 21:23:11.521647 kubelet[2028]: I0317 21:23:11.521610 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 21:23:11.523829 kubelet[2028]: I0317 21:23:11.523783 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 21:23:11.526408 kubelet[2028]: I0317 21:23:11.526375 2028 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-kube-api-access-fn8qw" (OuterVolumeSpecName: "kube-api-access-fn8qw") pod "32df9528-cfcc-4b7b-8a02-04367d105c6f" (UID: "32df9528-cfcc-4b7b-8a02-04367d105c6f"). InnerVolumeSpecName "kube-api-access-fn8qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 21:23:11.569333 sshd[3898]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:11.569999 sshd[3898]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:11.570059 sshd[3898]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217 Mar 17 21:23:11.570757 sshd[3898]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:11.606063 kubelet[2028]: I0317 21:23:11.606019 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-cgroup\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.606395 kubelet[2028]: I0317 21:23:11.606343 2028 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-hostproc\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.606604 kubelet[2028]: I0317 21:23:11.606527 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-ipsec-secrets\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.606746 kubelet[2028]: I0317 21:23:11.606722 2028 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-kernel\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.606903 kubelet[2028]: I0317 21:23:11.606875 2028 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-host-proc-sys-net\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607048 kubelet[2028]: I0317 21:23:11.607024 2028 reconciler_common.go:289] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-bpf-maps\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607216 kubelet[2028]: I0317 21:23:11.607193 2028 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32df9528-cfcc-4b7b-8a02-04367d105c6f-clustermesh-secrets\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607388 kubelet[2028]: I0317 21:23:11.607352 2028 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-hubble-tls\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607539 kubelet[2028]: I0317 21:23:11.607516 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-config-path\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607687 kubelet[2028]: I0317 21:23:11.607662 2028 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fn8qw\" (UniqueName: \"kubernetes.io/projected/32df9528-cfcc-4b7b-8a02-04367d105c6f-kube-api-access-fn8qw\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607842 kubelet[2028]: I0317 21:23:11.607818 2028 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cilium-run\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.607984 kubelet[2028]: I0317 21:23:11.607961 2028 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-cni-path\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.608141 kubelet[2028]: I0317 21:23:11.608118 2028 reconciler_common.go:289] "Volume 
detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-xtables-lock\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.608300 kubelet[2028]: I0317 21:23:11.608278 2028 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-etc-cni-netd\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.608486 kubelet[2028]: I0317 21:23:11.608463 2028 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32df9528-cfcc-4b7b-8a02-04367d105c6f-lib-modules\") on node \"srv-y0snw.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:23:11.782396 systemd[1]: Removed slice kubepods-burstable-pod32df9528_cfcc_4b7b_8a02_04367d105c6f.slice. Mar 17 21:23:11.977659 sshd[3928]: Accepted publickey for core from 139.178.89.65 port 49840 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:23:11.979810 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:23:11.987985 systemd[1]: Started session-24.scope. Mar 17 21:23:11.988634 systemd-logind[1190]: New session 24 of user core. Mar 17 21:23:12.160515 sshd[3877]: Failed password for invalid user debian from 104.248.141.166 port 59876 ssh2 Mar 17 21:23:12.201421 systemd[1]: var-lib-kubelet-pods-32df9528\x2dcfcc\x2d4b7b\x2d8a02\x2d04367d105c6f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfn8qw.mount: Deactivated successfully. Mar 17 21:23:12.201612 systemd[1]: var-lib-kubelet-pods-32df9528\x2dcfcc\x2d4b7b\x2d8a02\x2d04367d105c6f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 21:23:12.312728 kubelet[2028]: I0317 21:23:12.312691 2028 scope.go:117] "RemoveContainer" containerID="e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee" Mar 17 21:23:12.316442 env[1202]: time="2025-03-17T21:23:12.316388067Z" level=info msg="RemoveContainer for \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\"" Mar 17 21:23:12.322044 env[1202]: time="2025-03-17T21:23:12.321982793Z" level=info msg="RemoveContainer for \"e00c66815bbe9d58d82b9cc8890716986a71212e0bd5393e03403abef72ac1ee\" returns successfully" Mar 17 21:23:12.387307 kubelet[2028]: I0317 21:23:12.387238 2028 topology_manager.go:215] "Topology Admit Handler" podUID="3ef50824-b39a-4181-85a4-eafb581c69cf" podNamespace="kube-system" podName="cilium-fkg5q" Mar 17 21:23:12.388548 kubelet[2028]: E0317 21:23:12.388512 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" containerName="mount-cgroup" Mar 17 21:23:12.388712 kubelet[2028]: E0317 21:23:12.388688 2028 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" containerName="mount-cgroup" Mar 17 21:23:12.389002 kubelet[2028]: I0317 21:23:12.388892 2028 memory_manager.go:354] "RemoveStaleState removing state" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" containerName="mount-cgroup" Mar 17 21:23:12.389152 kubelet[2028]: I0317 21:23:12.389129 2028 memory_manager.go:354] "RemoveStaleState removing state" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" containerName="mount-cgroup" Mar 17 21:23:12.401431 systemd[1]: Created slice kubepods-burstable-pod3ef50824_b39a_4181_85a4_eafb581c69cf.slice. 
Mar 17 21:23:12.517945 kubelet[2028]: I0317 21:23:12.517855 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-cni-path\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.518672 kubelet[2028]: I0317 21:23:12.518634 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3ef50824-b39a-4181-85a4-eafb581c69cf-cilium-ipsec-secrets\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.518843 kubelet[2028]: I0317 21:23:12.518815 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-etc-cni-netd\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519001 kubelet[2028]: I0317 21:23:12.518974 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-cilium-cgroup\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519192 kubelet[2028]: I0317 21:23:12.519146 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-bpf-maps\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519361 kubelet[2028]: I0317 21:23:12.519334 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-hostproc\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519543 kubelet[2028]: I0317 21:23:12.519514 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhfh\" (UniqueName: \"kubernetes.io/projected/3ef50824-b39a-4181-85a4-eafb581c69cf-kube-api-access-pwhfh\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519739 kubelet[2028]: I0317 21:23:12.519712 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-host-proc-sys-net\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.519898 kubelet[2028]: I0317 21:23:12.519872 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ef50824-b39a-4181-85a4-eafb581c69cf-clustermesh-secrets\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520097 kubelet[2028]: I0317 21:23:12.520039 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-lib-modules\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520256 kubelet[2028]: I0317 21:23:12.520230 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-xtables-lock\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520454 kubelet[2028]: I0317 21:23:12.520417 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-host-proc-sys-kernel\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520633 kubelet[2028]: I0317 21:23:12.520587 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ef50824-b39a-4181-85a4-eafb581c69cf-hubble-tls\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520772 kubelet[2028]: I0317 21:23:12.520746 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ef50824-b39a-4181-85a4-eafb581c69cf-cilium-run\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.520925 kubelet[2028]: I0317 21:23:12.520899 2028 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ef50824-b39a-4181-85a4-eafb581c69cf-cilium-config-path\") pod \"cilium-fkg5q\" (UID: \"3ef50824-b39a-4181-85a4-eafb581c69cf\") " pod="kube-system/cilium-fkg5q" Mar 17 21:23:12.685932 kubelet[2028]: W0317 21:23:12.685857 2028 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32df9528_cfcc_4b7b_8a02_04367d105c6f.slice/cri-containerd-6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be.scope WatchSource:0}: container "6c9a98ffdca417731bb8b755160f5162060946755ab0fa4138e7c589347b33be" in namespace "k8s.io": not found Mar 17 21:23:12.722140 env[1202]: time="2025-03-17T21:23:12.722054474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkg5q,Uid:3ef50824-b39a-4181-85a4-eafb581c69cf,Namespace:kube-system,Attempt:0,}" Mar 17 21:23:12.739471 env[1202]: time="2025-03-17T21:23:12.739172383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:23:12.739471 env[1202]: time="2025-03-17T21:23:12.739242440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:23:12.739471 env[1202]: time="2025-03-17T21:23:12.739270970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:23:12.740581 env[1202]: time="2025-03-17T21:23:12.739523550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223 pid=3984 runtime=io.containerd.runc.v2 Mar 17 21:23:12.757544 systemd[1]: Started cri-containerd-8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223.scope. 
Mar 17 21:23:12.802869 env[1202]: time="2025-03-17T21:23:12.802731862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fkg5q,Uid:3ef50824-b39a-4181-85a4-eafb581c69cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\"" Mar 17 21:23:12.808230 env[1202]: time="2025-03-17T21:23:12.808190156Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:23:12.820460 env[1202]: time="2025-03-17T21:23:12.820408098Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0\"" Mar 17 21:23:12.821396 env[1202]: time="2025-03-17T21:23:12.821363093Z" level=info msg="StartContainer for \"7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0\"" Mar 17 21:23:12.850947 systemd[1]: Started cri-containerd-7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0.scope. Mar 17 21:23:12.899749 env[1202]: time="2025-03-17T21:23:12.899691908Z" level=info msg="StartContainer for \"7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0\" returns successfully" Mar 17 21:23:12.914925 systemd[1]: cri-containerd-7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0.scope: Deactivated successfully. 
Mar 17 21:23:12.952994 env[1202]: time="2025-03-17T21:23:12.952931729Z" level=info msg="shim disconnected" id=7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0 Mar 17 21:23:12.953363 env[1202]: time="2025-03-17T21:23:12.953330087Z" level=warning msg="cleaning up after shim disconnected" id=7664baeb0f399b073554d8cd146e17ab6d649ebbf277f0c15e4627d92c3efaf0 namespace=k8s.io Mar 17 21:23:12.953515 env[1202]: time="2025-03-17T21:23:12.953485794Z" level=info msg="cleaning up dead shim" Mar 17 21:23:12.965044 env[1202]: time="2025-03-17T21:23:12.964987229Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n" Mar 17 21:23:13.045248 sshd[3898]: Failed password for invalid user amanda from 143.110.184.217 port 59108 ssh2 Mar 17 21:23:13.214920 kubelet[2028]: I0317 21:23:13.214816 2028 setters.go:580] "Node became not ready" node="srv-y0snw.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T21:23:13Z","lastTransitionTime":"2025-03-17T21:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 21:23:13.322690 env[1202]: time="2025-03-17T21:23:13.322627681Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 21:23:13.342649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258457888.mount: Deactivated successfully. Mar 17 21:23:13.351406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073119219.mount: Deactivated successfully. 
Mar 17 21:23:13.356014 sshd[3877]: Connection closed by invalid user debian 104.248.141.166 port 59876 [preauth] Mar 17 21:23:13.357544 systemd[1]: sshd@38-10.230.48.190:22-104.248.141.166:59876.service: Deactivated successfully. Mar 17 21:23:13.362347 env[1202]: time="2025-03-17T21:23:13.362284437Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428\"" Mar 17 21:23:13.363325 env[1202]: time="2025-03-17T21:23:13.363290965Z" level=info msg="StartContainer for \"502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428\"" Mar 17 21:23:13.382192 systemd[1]: Started sshd@41-10.230.48.190:22-104.248.141.166:59892.service. Mar 17 21:23:13.400405 systemd[1]: Started cri-containerd-502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428.scope. Mar 17 21:23:13.459062 env[1202]: time="2025-03-17T21:23:13.459004939Z" level=info msg="StartContainer for \"502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428\" returns successfully" Mar 17 21:23:13.481457 systemd[1]: cri-containerd-502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428.scope: Deactivated successfully. 
Mar 17 21:23:13.496739 sshd[4093]: Invalid user debian from 104.248.141.166 port 59892 Mar 17 21:23:13.511933 env[1202]: time="2025-03-17T21:23:13.511871625Z" level=info msg="shim disconnected" id=502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428 Mar 17 21:23:13.511933 env[1202]: time="2025-03-17T21:23:13.511934073Z" level=warning msg="cleaning up after shim disconnected" id=502c89b48766e9a0f2e9296449851686c4e3c8ca3c0cbb02916a788bec9d8428 namespace=k8s.io Mar 17 21:23:13.512367 env[1202]: time="2025-03-17T21:23:13.511950601Z" level=info msg="cleaning up dead shim" Mar 17 21:23:13.524021 env[1202]: time="2025-03-17T21:23:13.523956607Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4131 runtime=io.containerd.runc.v2\n" Mar 17 21:23:13.525531 sshd[4093]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:13.526499 sshd[4093]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:13.526553 sshd[4093]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:13.527785 sshd[4093]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:13.772406 kubelet[2028]: I0317 21:23:13.772240 2028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32df9528-cfcc-4b7b-8a02-04367d105c6f" path="/var/lib/kubelet/pods/32df9528-cfcc-4b7b-8a02-04367d105c6f/volumes" Mar 17 21:23:14.195456 sshd[3898]: Connection closed by invalid user amanda 143.110.184.217 port 59108 [preauth] Mar 17 21:23:14.197276 systemd[1]: sshd@39-10.230.48.190:22-143.110.184.217:59108.service: Deactivated successfully. 
Mar 17 21:23:14.327722 env[1202]: time="2025-03-17T21:23:14.327292341Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 21:23:14.349028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348426184.mount: Deactivated successfully. Mar 17 21:23:14.358586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663587109.mount: Deactivated successfully. Mar 17 21:23:14.372234 env[1202]: time="2025-03-17T21:23:14.372158204Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b\"" Mar 17 21:23:14.374149 env[1202]: time="2025-03-17T21:23:14.373403085Z" level=info msg="StartContainer for \"21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b\"" Mar 17 21:23:14.397958 systemd[1]: Started cri-containerd-21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b.scope. Mar 17 21:23:14.452381 env[1202]: time="2025-03-17T21:23:14.452188395Z" level=info msg="StartContainer for \"21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b\" returns successfully" Mar 17 21:23:14.459369 systemd[1]: cri-containerd-21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b.scope: Deactivated successfully. 
Mar 17 21:23:14.498009 env[1202]: time="2025-03-17T21:23:14.497938034Z" level=info msg="shim disconnected" id=21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b Mar 17 21:23:14.498009 env[1202]: time="2025-03-17T21:23:14.498007720Z" level=warning msg="cleaning up after shim disconnected" id=21998600f14faf32914c386c24f546bb9c5a30bef9032a7ceea3ca6e37a9a82b namespace=k8s.io Mar 17 21:23:14.498372 env[1202]: time="2025-03-17T21:23:14.498025447Z" level=info msg="cleaning up dead shim" Mar 17 21:23:14.514827 env[1202]: time="2025-03-17T21:23:14.514713183Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:23:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4188 runtime=io.containerd.runc.v2\n" Mar 17 21:23:14.935262 kubelet[2028]: E0317 21:23:14.935175 2028 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:23:15.333830 env[1202]: time="2025-03-17T21:23:15.333585019Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 21:23:15.350562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299485458.mount: Deactivated successfully. Mar 17 21:23:15.359772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450512156.mount: Deactivated successfully. 
Mar 17 21:23:15.362600 env[1202]: time="2025-03-17T21:23:15.362172453Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f\"" Mar 17 21:23:15.366905 env[1202]: time="2025-03-17T21:23:15.366863557Z" level=info msg="StartContainer for \"f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f\"" Mar 17 21:23:15.393031 systemd[1]: Started cri-containerd-f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f.scope. Mar 17 21:23:15.435605 systemd[1]: cri-containerd-f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f.scope: Deactivated successfully. Mar 17 21:23:15.437941 env[1202]: time="2025-03-17T21:23:15.437839880Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ef50824_b39a_4181_85a4_eafb581c69cf.slice/cri-containerd-f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f.scope/memory.events\": no such file or directory" Mar 17 21:23:15.439493 env[1202]: time="2025-03-17T21:23:15.439433935Z" level=info msg="StartContainer for \"f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f\" returns successfully" Mar 17 21:23:15.475497 env[1202]: time="2025-03-17T21:23:15.475434507Z" level=info msg="shim disconnected" id=f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f Mar 17 21:23:15.475497 env[1202]: time="2025-03-17T21:23:15.475496649Z" level=warning msg="cleaning up after shim disconnected" id=f59e7b88f2f2995cd6a73b1be1d279bf4f7e2cce1c5dc55f8564b8fb0ec0210f namespace=k8s.io Mar 17 21:23:15.476113 env[1202]: time="2025-03-17T21:23:15.475513731Z" level=info msg="cleaning up dead shim" Mar 17 21:23:15.504226 env[1202]: time="2025-03-17T21:23:15.503067603Z" level=warning 
msg="cleanup warnings time=\"2025-03-17T21:23:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4244 runtime=io.containerd.runc.v2\n" Mar 17 21:23:15.943258 sshd[4093]: Failed password for invalid user debian from 104.248.141.166 port 59892 ssh2 Mar 17 21:23:16.344309 env[1202]: time="2025-03-17T21:23:16.343871709Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 21:23:16.374462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658365744.mount: Deactivated successfully. Mar 17 21:23:16.385688 env[1202]: time="2025-03-17T21:23:16.385588758Z" level=info msg="CreateContainer within sandbox \"8cb8493b48346177244e04a7a08f3ade3b8967d936d74b672032dce7381ac223\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711\"" Mar 17 21:23:16.386407 env[1202]: time="2025-03-17T21:23:16.386370827Z" level=info msg="StartContainer for \"b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711\"" Mar 17 21:23:16.397734 sshd[4093]: Connection closed by invalid user debian 104.248.141.166 port 59892 [preauth] Mar 17 21:23:16.399626 systemd[1]: sshd@41-10.230.48.190:22-104.248.141.166:59892.service: Deactivated successfully. Mar 17 21:23:16.418775 systemd[1]: Started cri-containerd-b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711.scope. Mar 17 21:23:16.427470 systemd[1]: Started sshd@42-10.230.48.190:22-104.248.141.166:59900.service. 
Mar 17 21:23:16.481971 env[1202]: time="2025-03-17T21:23:16.481074656Z" level=info msg="StartContainer for \"b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711\" returns successfully" Mar 17 21:23:16.558358 sshd[4275]: Invalid user debian from 104.248.141.166 port 59900 Mar 17 21:23:16.586702 sshd[4275]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:16.588460 sshd[4275]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:16.588527 sshd[4275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:16.593216 sshd[4275]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:17.245683 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 17 21:23:17.372263 kubelet[2028]: I0317 21:23:17.372169 2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fkg5q" podStartSLOduration=5.372124095 podStartE2EDuration="5.372124095s" podCreationTimestamp="2025-03-17 21:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:23:17.369736037 +0000 UTC m=+157.834626178" watchObservedRunningTime="2025-03-17 21:23:17.372124095 +0000 UTC m=+157.837014249" Mar 17 21:23:18.087508 sshd[4275]: Failed password for invalid user debian from 104.248.141.166 port 59900 ssh2 Mar 17 21:23:19.451537 sshd[4275]: Connection closed by invalid user debian 104.248.141.166 port 59900 [preauth] Mar 17 21:23:19.452992 systemd[1]: sshd@42-10.230.48.190:22-104.248.141.166:59900.service: Deactivated successfully. Mar 17 21:23:19.485002 systemd[1]: Started sshd@43-10.230.48.190:22-104.248.141.166:37788.service. 
Mar 17 21:23:19.614286 sshd[4509]: Invalid user debian from 104.248.141.166 port 37788 Mar 17 21:23:19.640216 sshd[4509]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:19.641130 sshd[4509]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:19.641189 sshd[4509]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:19.641821 sshd[4509]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:20.873403 systemd-networkd[1026]: lxc_health: Link UP Mar 17 21:23:20.896325 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 21:23:20.899375 systemd-networkd[1026]: lxc_health: Gained carrier Mar 17 21:23:21.487346 systemd[1]: Started sshd@44-10.230.48.190:22-143.110.184.217:52052.service. Mar 17 21:23:21.547265 sshd[4509]: Failed password for invalid user debian from 104.248.141.166 port 37788 ssh2 Mar 17 21:23:22.336372 sshd[4875]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.110.184.217 user=uucp Mar 17 21:23:22.511335 sshd[4509]: Connection closed by invalid user debian 104.248.141.166 port 37788 [preauth] Mar 17 21:23:22.512936 systemd[1]: sshd@43-10.230.48.190:22-104.248.141.166:37788.service: Deactivated successfully. Mar 17 21:23:22.541307 systemd[1]: Started sshd@45-10.230.48.190:22-104.248.141.166:37796.service. 
Mar 17 21:23:22.597332 systemd-networkd[1026]: lxc_health: Gained IPv6LL Mar 17 21:23:22.672528 sshd[4886]: Invalid user debian from 104.248.141.166 port 37796 Mar 17 21:23:22.696611 sshd[4886]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:22.697915 sshd[4886]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:22.698120 sshd[4886]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:22.698996 sshd[4886]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:23.546452 systemd[1]: run-containerd-runc-k8s.io-b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711-runc.vt0zAI.mount: Deactivated successfully. Mar 17 21:23:23.987585 sshd[4875]: Failed password for uucp from 143.110.184.217 port 52052 ssh2 Mar 17 21:23:24.349422 sshd[4886]: Failed password for invalid user debian from 104.248.141.166 port 37796 ssh2 Mar 17 21:23:25.326300 sshd[4875]: Connection closed by authenticating user uucp 143.110.184.217 port 52052 [preauth] Mar 17 21:23:25.328815 systemd[1]: sshd@44-10.230.48.190:22-143.110.184.217:52052.service: Deactivated successfully. Mar 17 21:23:25.561517 sshd[4886]: Connection closed by invalid user debian 104.248.141.166 port 37796 [preauth] Mar 17 21:23:25.563498 systemd[1]: sshd@45-10.230.48.190:22-104.248.141.166:37796.service: Deactivated successfully. Mar 17 21:23:25.591877 systemd[1]: Started sshd@46-10.230.48.190:22-104.248.141.166:37806.service. 
Mar 17 21:23:25.722943 sshd[4924]: Invalid user debian from 104.248.141.166 port 37806 Mar 17 21:23:25.756787 sshd[4924]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:25.758126 sshd[4924]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:25.758338 sshd[4924]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:25.759295 sshd[4924]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:25.819694 systemd[1]: run-containerd-runc-k8s.io-b9ecab051e51986a322ba2711d3da6a5d59712adc0a4328cf108fc6e86e1d711-runc.2ikjcH.mount: Deactivated successfully. Mar 17 21:23:26.159509 sshd[3928]: pam_unix(sshd:session): session closed for user core Mar 17 21:23:26.163972 systemd[1]: sshd@40-10.230.48.190:22-139.178.89.65:49840.service: Deactivated successfully. Mar 17 21:23:26.166031 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 21:23:26.167453 systemd-logind[1190]: Session 24 logged out. Waiting for processes to exit. Mar 17 21:23:26.168675 systemd-logind[1190]: Removed session 24. Mar 17 21:23:28.157185 sshd[4924]: Failed password for invalid user debian from 104.248.141.166 port 37806 ssh2 Mar 17 21:23:28.622186 sshd[4924]: Connection closed by invalid user debian 104.248.141.166 port 37806 [preauth] Mar 17 21:23:28.623857 systemd[1]: sshd@46-10.230.48.190:22-104.248.141.166:37806.service: Deactivated successfully. Mar 17 21:23:28.649775 systemd[1]: Started sshd@47-10.230.48.190:22-104.248.141.166:37822.service. Mar 17 21:23:28.762944 sshd[4955]: Invalid user debian from 104.248.141.166 port 37822 Mar 17 21:23:28.787913 sshd[4955]: pam_faillock(sshd:auth): User unknown Mar 17 21:23:28.788858 sshd[4955]: pam_unix(sshd:auth): check pass; user unknown Mar 17 21:23:28.788936 sshd[4955]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=104.248.141.166 Mar 17 21:23:28.789786 sshd[4955]: pam_faillock(sshd:auth): User unknown