Mar 17 20:56:53.025482 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 20:56:53.025530 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 20:56:53.025552 kernel: BIOS-provided physical RAM map:
Mar 17 20:56:53.025563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 20:56:53.025572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 20:56:53.025582 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 20:56:53.025603 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 17 20:56:53.025627 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 17 20:56:53.025637 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 20:56:53.025647 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 20:56:53.025662 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 20:56:53.025672 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 20:56:53.025682 kernel: NX (Execute Disable) protection: active
Mar 17 20:56:53.025692 kernel: SMBIOS 2.8 present.
Mar 17 20:56:53.025704 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Mar 17 20:56:53.025715 kernel: Hypervisor detected: KVM
Mar 17 20:56:53.025729 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 20:56:53.025740 kernel: kvm-clock: cpu 0, msr 4219a001, primary cpu clock
Mar 17 20:56:53.025751 kernel: kvm-clock: using sched offset of 5213331718 cycles
Mar 17 20:56:53.025762 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 20:56:53.025773 kernel: tsc: Detected 2799.998 MHz processor
Mar 17 20:56:53.025784 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 20:56:53.025795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 20:56:53.025806 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 17 20:56:53.025816 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 20:56:53.025831 kernel: Using GB pages for direct mapping
Mar 17 20:56:53.025842 kernel: ACPI: Early table checksum verification disabled
Mar 17 20:56:53.025852 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Mar 17 20:56:53.025863 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025873 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025898 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025910 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 17 20:56:53.025921 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025931 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025955 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025966 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:56:53.025976 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 17 20:56:53.025987 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 17 20:56:53.025998 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 17 20:56:53.026008 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 17 20:56:53.026024 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 17 20:56:53.026040 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 17 20:56:53.026051 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 17 20:56:53.026085 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 20:56:53.026097 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 20:56:53.026108 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 17 20:56:53.026119 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 17 20:56:53.026138 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 17 20:56:53.026155 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 17 20:56:53.026167 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 17 20:56:53.026178 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 17 20:56:53.026189 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 17 20:56:53.026201 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 17 20:56:53.026219 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 17 20:56:53.026230 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 17 20:56:53.026241 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 17 20:56:53.026252 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 17 20:56:53.026263 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 17 20:56:53.026278 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 17 20:56:53.026294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 20:56:53.026306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 20:56:53.026318 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 17 20:56:53.026337 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 17 20:56:53.026349 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 17 20:56:53.026360 kernel: Zone ranges:
Mar 17 20:56:53.026371 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 20:56:53.026383 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 17 20:56:53.026405 kernel: Normal empty
Mar 17 20:56:53.026417 kernel: Movable zone start for each node
Mar 17 20:56:53.026428 kernel: Early memory node ranges
Mar 17 20:56:53.026439 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 20:56:53.026451 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 17 20:56:53.026462 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 17 20:56:53.026473 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 20:56:53.026484 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 20:56:53.026495 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 17 20:56:53.026515 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 20:56:53.026527 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 20:56:53.026538 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 20:56:53.026550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 20:56:53.026561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 20:56:53.026572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 20:56:53.026584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 20:56:53.026595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 20:56:53.026615 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 20:56:53.026632 kernel: TSC deadline timer available
Mar 17 20:56:53.026644 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 17 20:56:53.026655 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 20:56:53.026666 kernel: Booting paravirtualized kernel on KVM
Mar 17 20:56:53.026677 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 20:56:53.026689 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Mar 17 20:56:53.026700 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Mar 17 20:56:53.026711 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Mar 17 20:56:53.026722 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 17 20:56:53.026737 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Mar 17 20:56:53.026749 kernel: kvm-guest: PV spinlocks enabled
Mar 17 20:56:53.026760 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 20:56:53.026772 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 17 20:56:53.026783 kernel: Policy zone: DMA32
Mar 17 20:56:53.026795 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 20:56:53.026807 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 20:56:53.026819 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 20:56:53.026834 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 20:56:53.026846 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 20:56:53.026858 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 192524K reserved, 0K cma-reserved)
Mar 17 20:56:53.026869 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 17 20:56:53.026880 kernel: Kernel/User page tables isolation: enabled
Mar 17 20:56:53.026903 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 20:56:53.026916 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 20:56:53.026927 kernel: rcu: Hierarchical RCU implementation.
Mar 17 20:56:53.026939 kernel: rcu: RCU event tracing is enabled.
Mar 17 20:56:53.026956 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 17 20:56:53.026968 kernel: Rude variant of Tasks RCU enabled.
Mar 17 20:56:53.026984 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 20:56:53.026996 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 20:56:53.027008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 17 20:56:53.027019 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 17 20:56:53.027031 kernel: random: crng init done
Mar 17 20:56:53.027067 kernel: Console: colour VGA+ 80x25
Mar 17 20:56:53.027080 kernel: printk: console [tty0] enabled
Mar 17 20:56:53.027092 kernel: printk: console [ttyS0] enabled
Mar 17 20:56:53.027105 kernel: ACPI: Core revision 20210730
Mar 17 20:56:53.027117 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 20:56:53.027138 kernel: x2apic enabled
Mar 17 20:56:53.027160 kernel: Switched APIC routing to physical x2apic.
Mar 17 20:56:53.027172 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Mar 17 20:56:53.027184 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Mar 17 20:56:53.027196 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 20:56:53.027213 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 20:56:53.027230 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 20:56:53.027241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 20:56:53.027253 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 20:56:53.027265 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 20:56:53.027277 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 20:56:53.027289 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 20:56:53.027300 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 20:56:53.027312 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 20:56:53.027324 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 20:56:53.027335 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 17 20:56:53.027351 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 17 20:56:53.027363 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 20:56:53.027375 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 20:56:53.027386 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 20:56:53.027398 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 20:56:53.027410 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 20:56:53.027421 kernel: Freeing SMP alternatives memory: 32K
Mar 17 20:56:53.027433 kernel: pid_max: default: 32768 minimum: 301
Mar 17 20:56:53.027444 kernel: LSM: Security Framework initializing
Mar 17 20:56:53.027456 kernel: SELinux: Initializing.
Mar 17 20:56:53.027467 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 20:56:53.027483 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 20:56:53.027495 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 17 20:56:53.027507 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 17 20:56:53.027519 kernel: signal: max sigframe size: 1776
Mar 17 20:56:53.027530 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 20:56:53.027542 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 20:56:53.027554 kernel: smp: Bringing up secondary CPUs ...
Mar 17 20:56:53.027566 kernel: x86: Booting SMP configuration:
Mar 17 20:56:53.027578 kernel: .... node #0, CPUs: #1
Mar 17 20:56:53.027593 kernel: kvm-clock: cpu 1, msr 4219a041, secondary cpu clock
Mar 17 20:56:53.027614 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 17 20:56:53.027627 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Mar 17 20:56:53.027639 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 20:56:53.027650 kernel: smpboot: Max logical packages: 16
Mar 17 20:56:53.027662 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Mar 17 20:56:53.027674 kernel: devtmpfs: initialized
Mar 17 20:56:53.027685 kernel: x86/mm: Memory block size: 128MB
Mar 17 20:56:53.027697 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 20:56:53.027709 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 17 20:56:53.027726 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 20:56:53.027738 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 20:56:53.027750 kernel: audit: initializing netlink subsys (disabled)
Mar 17 20:56:53.027762 kernel: audit: type=2000 audit(1742245011.280:1): state=initialized audit_enabled=0 res=1
Mar 17 20:56:53.027773 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 20:56:53.027786 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 20:56:53.027797 kernel: cpuidle: using governor menu
Mar 17 20:56:53.027809 kernel: ACPI: bus type PCI registered
Mar 17 20:56:53.027821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 20:56:53.027836 kernel: dca service started, version 1.12.1
Mar 17 20:56:53.027849 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 20:56:53.027861 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 20:56:53.027872 kernel: PCI: Using configuration type 1 for base access
Mar 17 20:56:53.027885 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 20:56:53.027896 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 20:56:53.027908 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 20:56:53.027920 kernel: ACPI: Added _OSI(Module Device)
Mar 17 20:56:53.027936 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 20:56:53.027947 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 20:56:53.027959 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 20:56:53.027971 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 20:56:53.027983 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 20:56:53.027994 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 20:56:53.028006 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 20:56:53.028018 kernel: ACPI: Interpreter enabled
Mar 17 20:56:53.028029 kernel: ACPI: PM: (supports S0 S5)
Mar 17 20:56:53.028041 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 20:56:53.028066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 20:56:53.028079 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 20:56:53.028091 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 20:56:53.028366 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 20:56:53.028526 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 20:56:53.028697 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 20:56:53.028716 kernel: PCI host bridge to bus 0000:00
Mar 17 20:56:53.028876 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 20:56:53.029028 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 20:56:53.029200 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 20:56:53.029350 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 17 20:56:53.029525 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 20:56:53.029685 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 17 20:56:53.029832 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 20:56:53.030009 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 20:56:53.030204 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 17 20:56:53.030359 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 17 20:56:53.030519 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 17 20:56:53.030681 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 17 20:56:53.030832 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 20:56:53.031003 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.031178 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 17 20:56:53.031356 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.031507 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 17 20:56:53.031716 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.031871 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 17 20:56:53.032036 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.041377 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 17 20:56:53.041656 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.041894 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 17 20:56:53.042190 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.042419 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 17 20:56:53.042721 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.042952 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 17 20:56:53.043272 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 17 20:56:53.043603 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 17 20:56:53.043823 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 20:56:53.044075 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Mar 17 20:56:53.044241 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 17 20:56:53.044453 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 17 20:56:53.044630 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 17 20:56:53.044883 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 20:56:53.045148 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Mar 17 20:56:53.045367 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 17 20:56:53.045593 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 17 20:56:53.045847 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 20:56:53.046045 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 20:56:53.046272 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 20:56:53.046455 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Mar 17 20:56:53.046625 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 17 20:56:53.046810 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 20:56:53.046989 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 20:56:53.047182 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 17 20:56:53.047352 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 17 20:56:53.047523 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 17 20:56:53.047761 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Mar 17 20:56:53.047944 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 17 20:56:53.057246 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.057471 kernel: pci_bus 0000:02: extended config space not accessible
Mar 17 20:56:53.057714 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 17 20:56:53.057889 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 17 20:56:53.058053 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 17 20:56:53.058282 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Mar 17 20:56:53.058460 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 20:56:53.058666 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.058860 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 17 20:56:53.059042 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 17 20:56:53.059223 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 17 20:56:53.059407 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 20:56:53.059576 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 20:56:53.059777 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 17 20:56:53.059938 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 17 20:56:53.060146 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 17 20:56:53.060310 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 20:56:53.060475 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 20:56:53.060665 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 17 20:56:53.060818 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 20:56:53.060979 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 20:56:53.061165 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 17 20:56:53.061321 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 20:56:53.061485 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 20:56:53.061661 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 17 20:56:53.061809 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 20:56:53.061959 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 20:56:53.062153 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 17 20:56:53.062305 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 20:56:53.062456 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 20:56:53.062631 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 17 20:56:53.062788 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 20:56:53.062941 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 20:56:53.062969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 20:56:53.062982 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 20:56:53.062995 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 20:56:53.063007 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 20:56:53.063019 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 20:56:53.063031 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 20:56:53.063043 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 20:56:53.063077 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 20:56:53.063091 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 20:56:53.063103 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 20:56:53.063115 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 20:56:53.063128 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 20:56:53.063140 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 20:56:53.063152 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 20:56:53.063164 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 20:56:53.063176 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 20:56:53.063194 kernel: iommu: Default domain type: Translated
Mar 17 20:56:53.063207 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 20:56:53.063362 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 20:56:53.063519 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 20:56:53.063688 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 20:56:53.063707 kernel: vgaarb: loaded
Mar 17 20:56:53.063720 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 20:56:53.063732 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 20:56:53.063751 kernel: PTP clock support registered
Mar 17 20:56:53.063763 kernel: PCI: Using ACPI for IRQ routing
Mar 17 20:56:53.063775 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 20:56:53.063787 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 20:56:53.063799 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 17 20:56:53.063812 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 20:56:53.063824 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 20:56:53.063837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 20:56:53.063849 kernel: pnp: PnP ACPI init
Mar 17 20:56:53.072903 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 20:56:53.072936 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 20:56:53.072951 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 20:56:53.072963 kernel: NET: Registered PF_INET protocol family
Mar 17 20:56:53.072975 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 20:56:53.072988 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 20:56:53.073000 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 20:56:53.073021 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 20:56:53.073041 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 20:56:53.073053 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 20:56:53.073082 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 20:56:53.073095 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 20:56:53.073107 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 20:56:53.073120 kernel: NET: Registered PF_XDP protocol family
Mar 17 20:56:53.073299 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 17 20:56:53.073479 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 17 20:56:53.073653 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 17 20:56:53.073806 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 20:56:53.073958 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 20:56:53.074130 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 20:56:53.074287 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 20:56:53.074435 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Mar 17 20:56:53.074591 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Mar 17 20:56:53.074755 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Mar 17 20:56:53.074912 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Mar 17 20:56:53.075071 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Mar 17 20:56:53.075231 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Mar 17 20:56:53.075387 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Mar 17 20:56:53.075546 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 17 20:56:53.075735 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Mar 17 20:56:53.075902 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 17 20:56:53.076097 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.076268 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 17 20:56:53.076456 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Mar 17 20:56:53.076621 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 17 20:56:53.076778 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.076947 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 17 20:56:53.077114 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Mar 17 20:56:53.077263 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 17 20:56:53.077412 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 20:56:53.077566 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 17 20:56:53.077728 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Mar 17 20:56:53.077878 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 17 20:56:53.078028 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 20:56:53.078188 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 17 20:56:53.078349 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Mar 17 20:56:53.078508 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 17 20:56:53.078682 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 20:56:53.078835 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 17 20:56:53.078995 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Mar 17 20:56:53.079165 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 17 20:56:53.079321 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 20:56:53.079477 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 17 20:56:53.079643 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Mar 17 20:56:53.079805 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 17 20:56:53.079962 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 20:56:53.081236 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 17 20:56:53.081392 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Mar 17 20:56:53.081541 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 17 20:56:53.081709 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 17 20:56:53.081869 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 17 20:56:53.082017 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Mar 17 20:56:53.082185 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 17 20:56:53.082348 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 17 20:56:53.082502 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 20:56:53.082659 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 20:56:53.082804 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 20:56:53.082940 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 17 20:56:53.083096 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 20:56:53.083234 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 17 20:56:53.083391 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Mar 17 20:56:53.083536 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 17 20:56:53.083693 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.083851 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Mar 17 20:56:53.084004 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 17 20:56:53.084175 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 17 20:56:53.084333 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Mar 17 20:56:53.084477 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 17 20:56:53.084633 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 17 20:56:53.084793 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Mar 17 20:56:53.084937 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 17 20:56:53.085095 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 17 20:56:53.085259 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Mar 17 20:56:53.085403 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 17 20:56:53.085574 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 17 20:56:53.085770 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Mar 17 20:56:53.085961 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 17 20:56:53.086126 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 17 20:56:53.086284 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Mar 17 20:56:53.086438 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 17 20:56:53.086581 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 17 20:56:53.086749 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Mar 17 20:56:53.086895 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 17
20:56:53.087038 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 17 20:56:53.087212 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Mar 17 20:56:53.087374 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Mar 17 20:56:53.087521 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 17 20:56:53.087540 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 20:56:53.087554 kernel: PCI: CLS 0 bytes, default 64 Mar 17 20:56:53.087567 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 17 20:56:53.087581 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 17 20:56:53.087600 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 17 20:56:53.087630 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Mar 17 20:56:53.087644 kernel: Initialise system trusted keyrings Mar 17 20:56:53.087657 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 17 20:56:53.087669 kernel: Key type asymmetric registered Mar 17 20:56:53.087682 kernel: Asymmetric key parser 'x509' registered Mar 17 20:56:53.087694 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 20:56:53.087707 kernel: io scheduler mq-deadline registered Mar 17 20:56:53.087719 kernel: io scheduler kyber registered Mar 17 20:56:53.087732 kernel: io scheduler bfq registered Mar 17 20:56:53.087888 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 17 20:56:53.088040 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 17 20:56:53.097801 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.098026 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 17 20:56:53.098219 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 17 
20:56:53.098377 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.098545 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 17 20:56:53.098729 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 17 20:56:53.098890 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.099068 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 17 20:56:53.099248 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 17 20:56:53.099421 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.099583 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 17 20:56:53.099762 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 17 20:56:53.099916 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.100101 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 17 20:56:53.100266 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 17 20:56:53.100432 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.100623 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 17 20:56:53.100784 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 17 20:56:53.100935 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.101109 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 17 20:56:53.101260 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 17 20:56:53.101418 
kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 17 20:56:53.101438 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 20:56:53.101467 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 20:56:53.101482 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 20:56:53.101495 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 20:56:53.101508 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 20:56:53.101521 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 20:56:53.101534 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 20:56:53.101547 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 20:56:53.101569 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 20:56:53.101745 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 17 20:56:53.101898 kernel: rtc_cmos 00:03: registered as rtc0 Mar 17 20:56:53.102052 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T20:56:52 UTC (1742245012) Mar 17 20:56:53.102232 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 17 20:56:53.102251 kernel: intel_pstate: CPU model not supported Mar 17 20:56:53.102264 kernel: NET: Registered PF_INET6 protocol family Mar 17 20:56:53.102277 kernel: Segment Routing with IPv6 Mar 17 20:56:53.102290 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 20:56:53.102303 kernel: NET: Registered PF_PACKET protocol family Mar 17 20:56:53.102322 kernel: Key type dns_resolver registered Mar 17 20:56:53.102335 kernel: IPI shorthand broadcast: enabled Mar 17 20:56:53.102348 kernel: sched_clock: Marking stable (1114749386, 215970421)->(1618715971, -287996164) Mar 17 20:56:53.102361 kernel: registered taskstats version 1 Mar 17 20:56:53.102374 kernel: Loading compiled-in X.509 certificates Mar 17 20:56:53.102386 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220' Mar 17 20:56:53.102399 kernel: Key type .fscrypt registered Mar 17 20:56:53.102411 kernel: Key type fscrypt-provisioning registered Mar 17 20:56:53.102424 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 20:56:53.102441 kernel: ima: Allocated hash algorithm: sha1 Mar 17 20:56:53.102454 kernel: ima: No architecture policies found Mar 17 20:56:53.102467 kernel: clk: Disabling unused clocks Mar 17 20:56:53.102480 kernel: Freeing unused kernel image (initmem) memory: 47472K Mar 17 20:56:53.102493 kernel: Write protecting the kernel read-only data: 28672k Mar 17 20:56:53.102505 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Mar 17 20:56:53.102518 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Mar 17 20:56:53.102531 kernel: Run /init as init process Mar 17 20:56:53.102543 kernel: with arguments: Mar 17 20:56:53.102560 kernel: /init Mar 17 20:56:53.102573 kernel: with environment: Mar 17 20:56:53.102585 kernel: HOME=/ Mar 17 20:56:53.102598 kernel: TERM=linux Mar 17 20:56:53.102621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 20:56:53.102637 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 20:56:53.102658 systemd[1]: Detected virtualization kvm. Mar 17 20:56:53.102672 systemd[1]: Detected architecture x86-64. Mar 17 20:56:53.102689 systemd[1]: Running in initrd. Mar 17 20:56:53.102729 systemd[1]: No hostname configured, using default hostname. Mar 17 20:56:53.102743 systemd[1]: Hostname set to <localhost>. Mar 17 20:56:53.102757 systemd[1]: Initializing machine ID from VM UUID. 
Mar 17 20:56:53.102771 systemd[1]: Queued start job for default target initrd.target. Mar 17 20:56:53.102784 systemd[1]: Started systemd-ask-password-console.path. Mar 17 20:56:53.102797 systemd[1]: Reached target cryptsetup.target. Mar 17 20:56:53.102810 systemd[1]: Reached target paths.target. Mar 17 20:56:53.102829 systemd[1]: Reached target slices.target. Mar 17 20:56:53.102842 systemd[1]: Reached target swap.target. Mar 17 20:56:53.102860 systemd[1]: Reached target timers.target. Mar 17 20:56:53.102873 systemd[1]: Listening on iscsid.socket. Mar 17 20:56:53.102887 systemd[1]: Listening on iscsiuio.socket. Mar 17 20:56:53.102900 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 20:56:53.102919 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 20:56:53.102939 systemd[1]: Listening on systemd-journald.socket. Mar 17 20:56:53.102953 systemd[1]: Listening on systemd-networkd.socket. Mar 17 20:56:53.102966 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 20:56:53.102979 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 20:56:53.102993 systemd[1]: Reached target sockets.target. Mar 17 20:56:53.103013 systemd[1]: Starting kmod-static-nodes.service... Mar 17 20:56:53.103027 systemd[1]: Finished network-cleanup.service. Mar 17 20:56:53.103040 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 20:56:53.108493 systemd[1]: Starting systemd-journald.service... Mar 17 20:56:53.108548 systemd[1]: Starting systemd-modules-load.service... Mar 17 20:56:53.108562 systemd[1]: Starting systemd-resolved.service... Mar 17 20:56:53.108596 systemd[1]: Starting systemd-vconsole-setup.service... Mar 17 20:56:53.108621 systemd[1]: Finished kmod-static-nodes.service. Mar 17 20:56:53.108644 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 17 20:56:53.108658 kernel: Bridge firewalling registered Mar 17 20:56:53.108680 systemd-journald[202]: Journal started Mar 17 20:56:53.108767 systemd-journald[202]: Runtime Journal (/run/log/journal/a23f03c4c016416684a24056fda67c76) is 4.7M, max 38.1M, 33.3M free. Mar 17 20:56:53.019432 systemd-modules-load[203]: Inserted module 'overlay' Mar 17 20:56:53.116008 systemd[1]: Started systemd-resolved.service. Mar 17 20:56:53.116034 kernel: audit: type=1130 audit(1742245013.109:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.064799 systemd-resolved[204]: Positive Trust Anchors: Mar 17 20:56:53.122913 systemd[1]: Started systemd-journald.service. Mar 17 20:56:53.122939 kernel: audit: type=1130 audit(1742245013.116:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.064823 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:56:53.128767 kernel: audit: type=1130 audit(1742245013.123:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:53.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.064862 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 20:56:53.136349 kernel: audit: type=1130 audit(1742245013.129:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.068254 systemd-resolved[204]: Defaulting to hostname 'linux'. Mar 17 20:56:53.111127 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 17 20:56:53.123926 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 20:56:53.129614 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 20:56:53.140618 systemd[1]: Reached target nss-lookup.target. Mar 17 20:56:53.161341 kernel: audit: type=1130 audit(1742245013.140:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:53.161377 kernel: SCSI subsystem initialized Mar 17 20:56:53.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.147812 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 20:56:53.149536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 20:56:53.172588 kernel: audit: type=1130 audit(1742245013.166:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.172636 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 20:56:53.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.165908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 20:56:53.178929 kernel: device-mapper: uevent: version 1.0.3 Mar 17 20:56:53.178955 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 20:56:53.187097 kernel: audit: type=1130 audit(1742245013.181:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.180882 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 20:56:53.188469 systemd[1]: Starting dracut-cmdline.service... 
Mar 17 20:56:53.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.191867 systemd-modules-load[203]: Inserted module 'dm_multipath' Mar 17 20:56:53.214575 kernel: audit: type=1130 audit(1742245013.193:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.193385 systemd[1]: Finished systemd-modules-load.service. Mar 17 20:56:53.213353 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:56:53.222224 dracut-cmdline[221]: dracut-dracut-053 Mar 17 20:56:53.225808 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 20:56:53.226042 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:56:53.234086 kernel: audit: type=1130 audit(1742245013.228:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.310102 kernel: Loading iSCSI transport class v2.0-870. 
Mar 17 20:56:53.332101 kernel: iscsi: registered transport (tcp) Mar 17 20:56:53.360720 kernel: iscsi: registered transport (qla4xxx) Mar 17 20:56:53.360832 kernel: QLogic iSCSI HBA Driver Mar 17 20:56:53.409275 systemd[1]: Finished dracut-cmdline.service. Mar 17 20:56:53.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.411547 systemd[1]: Starting dracut-pre-udev.service... Mar 17 20:56:53.470122 kernel: raid6: sse2x4 gen() 7570 MB/s Mar 17 20:56:53.488146 kernel: raid6: sse2x4 xor() 4669 MB/s Mar 17 20:56:53.506207 kernel: raid6: sse2x2 gen() 5472 MB/s Mar 17 20:56:53.524129 kernel: raid6: sse2x2 xor() 8140 MB/s Mar 17 20:56:53.542136 kernel: raid6: sse2x1 gen() 5273 MB/s Mar 17 20:56:53.560754 kernel: raid6: sse2x1 xor() 7366 MB/s Mar 17 20:56:53.560861 kernel: raid6: using algorithm sse2x4 gen() 7570 MB/s Mar 17 20:56:53.560881 kernel: raid6: .... xor() 4669 MB/s, rmw enabled Mar 17 20:56:53.561982 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 20:56:53.578097 kernel: xor: automatically using best checksumming function avx Mar 17 20:56:53.694107 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 20:56:53.707128 systemd[1]: Finished dracut-pre-udev.service. Mar 17 20:56:53.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.708000 audit: BPF prog-id=7 op=LOAD Mar 17 20:56:53.708000 audit: BPF prog-id=8 op=LOAD Mar 17 20:56:53.709327 systemd[1]: Starting systemd-udevd.service... Mar 17 20:56:53.726537 systemd-udevd[401]: Using default interface naming scheme 'v252'. Mar 17 20:56:53.734197 systemd[1]: Started systemd-udevd.service. 
Mar 17 20:56:53.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.740544 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 20:56:53.759145 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Mar 17 20:56:53.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:53.802522 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 20:56:53.804605 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 20:56:53.897320 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 20:56:53.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:54.004128 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 20:56:54.017290 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 20:56:54.077867 kernel: ACPI: bus type USB registered Mar 17 20:56:54.077905 kernel: usbcore: registered new interface driver usbfs Mar 17 20:56:54.077934 kernel: usbcore: registered new interface driver hub Mar 17 20:56:54.077948 kernel: usbcore: registered new device driver usb Mar 17 20:56:54.077963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 20:56:54.078000 kernel: GPT:17805311 != 125829119 Mar 17 20:56:54.078014 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 20:56:54.078034 kernel: GPT:17805311 != 125829119 Mar 17 20:56:54.078047 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 17 20:56:54.078061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:56:54.078087 kernel: AVX version of gcm_enc/dec engaged. Mar 17 20:56:54.078117 kernel: AES CTR mode by8 optimization enabled Mar 17 20:56:54.078133 kernel: libata version 3.00 loaded. Mar 17 20:56:54.141426 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 20:56:54.249397 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 20:56:54.249782 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Mar 17 20:56:54.249805 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 20:56:54.250004 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 17 20:56:54.250216 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 17 20:56:54.250393 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 20:56:54.250414 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 20:56:54.250602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 20:56:54.250783 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 17 20:56:54.250975 kernel: scsi host0: ahci Mar 17 20:56:54.251207 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 17 20:56:54.251416 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 17 20:56:54.251604 kernel: hub 1-0:1.0: USB hub found Mar 17 20:56:54.251823 kernel: hub 1-0:1.0: 4 ports detected Mar 17 20:56:54.252035 kernel: scsi host1: ahci Mar 17 20:56:54.252253 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 17 20:56:54.252532 kernel: hub 2-0:1.0: USB hub found Mar 17 20:56:54.252767 kernel: hub 2-0:1.0: 4 ports detected Mar 17 20:56:54.252957 kernel: scsi host2: ahci Mar 17 20:56:54.253158 kernel: scsi host3: ahci Mar 17 20:56:54.253384 kernel: scsi host4: ahci Mar 17 20:56:54.253599 kernel: scsi host5: ahci Mar 17 20:56:54.253802 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 17 20:56:54.253822 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 17 20:56:54.253838 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 17 20:56:54.253854 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 17 20:56:54.253870 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 17 20:56:54.253892 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 17 20:56:54.249844 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 20:56:54.262039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 20:56:54.268155 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 20:56:54.273224 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 20:56:54.275326 systemd[1]: Starting disk-uuid.service... Mar 17 20:56:54.282596 disk-uuid[529]: Primary Header is updated. Mar 17 20:56:54.282596 disk-uuid[529]: Secondary Entries is updated. Mar 17 20:56:54.282596 disk-uuid[529]: Secondary Header is updated. 
Mar 17 20:56:54.287084 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:56:54.293081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:56:54.299142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:56:54.400226 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 17 20:56:54.483092 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.483187 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.487456 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.487498 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.491219 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.491268 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 20:56:54.540090 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 20:56:54.546931 kernel: usbcore: registered new interface driver usbhid Mar 17 20:56:54.547001 kernel: usbhid: USB HID core driver Mar 17 20:56:54.556408 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 17 20:56:54.556458 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 17 20:56:55.298086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:56:55.298474 disk-uuid[530]: The operation has completed successfully. Mar 17 20:56:55.362968 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 20:56:55.363140 systemd[1]: Finished disk-uuid.service. Mar 17 20:56:55.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:55.365472 systemd[1]: Starting verity-setup.service... Mar 17 20:56:55.386091 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 17 20:56:55.443141 systemd[1]: Found device dev-mapper-usr.device. Mar 17 20:56:55.444995 systemd[1]: Mounting sysusr-usr.mount... Mar 17 20:56:55.446637 systemd[1]: Finished verity-setup.service. Mar 17 20:56:55.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.544104 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 20:56:55.544634 systemd[1]: Mounted sysusr-usr.mount. Mar 17 20:56:55.545445 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 20:56:55.546480 systemd[1]: Starting ignition-setup.service... Mar 17 20:56:55.549521 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 20:56:55.566728 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:56:55.566791 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:56:55.566810 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:56:55.580914 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 20:56:55.587419 systemd[1]: Finished ignition-setup.service. Mar 17 20:56:55.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.589274 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 20:56:55.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:55.753000 audit: BPF prog-id=9 op=LOAD Mar 17 20:56:55.751375 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 20:56:55.754830 systemd[1]: Starting systemd-networkd.service... Mar 17 20:56:55.804972 systemd-networkd[706]: lo: Link UP Mar 17 20:56:55.804996 systemd-networkd[706]: lo: Gained carrier Mar 17 20:56:55.806409 systemd-networkd[706]: Enumeration completed Mar 17 20:56:55.807158 systemd-networkd[706]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 20:56:55.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.809044 systemd[1]: Started systemd-networkd.service. Mar 17 20:56:55.809821 systemd-networkd[706]: eth0: Link UP Mar 17 20:56:55.809827 systemd-networkd[706]: eth0: Gained carrier Mar 17 20:56:55.809974 systemd[1]: Reached target network.target. Mar 17 20:56:55.811894 systemd[1]: Starting iscsiuio.service... Mar 17 20:56:55.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.858265 systemd[1]: Started iscsiuio.service. Mar 17 20:56:55.860611 systemd[1]: Starting iscsid.service... Mar 17 20:56:55.867741 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:56:55.870092 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
Mar 17 20:56:55.870092 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 20:56:55.870092 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 20:56:55.870092 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:56:55.870092 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 20:56:55.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.872629 systemd[1]: Started iscsid.service. Mar 17 20:56:55.874555 systemd[1]: Starting dracut-initqueue.service... Mar 17 20:56:55.885264 systemd-networkd[706]: eth0: DHCPv4 address 10.243.78.42/30, gateway 10.243.78.41 acquired from 10.243.78.41 Mar 17 20:56:55.894000 systemd[1]: Finished dracut-initqueue.service. Mar 17 20:56:55.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.894866 systemd[1]: Reached target remote-fs-pre.target. Mar 17 20:56:55.896643 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 20:56:55.899464 systemd[1]: Reached target remote-fs.target. Mar 17 20:56:55.901973 systemd[1]: Starting dracut-pre-mount.service... Mar 17 20:56:55.920437 systemd[1]: Finished dracut-pre-mount.service. Mar 17 20:56:55.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:55.940951 ignition[630]: Ignition 2.14.0 Mar 17 20:56:55.940972 ignition[630]: Stage: fetch-offline Mar 17 20:56:55.941114 ignition[630]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:55.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:55.944839 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 20:56:55.941153 ignition[630]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:55.946813 systemd[1]: Starting ignition-fetch.service... Mar 17 20:56:55.942942 ignition[630]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:55.943110 ignition[630]: parsed url from cmdline: "" Mar 17 20:56:55.943117 ignition[630]: no config URL provided Mar 17 20:56:55.943127 ignition[630]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:56:55.943143 ignition[630]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:56:55.943176 ignition[630]: failed to fetch config: resource requires networking Mar 17 20:56:55.943532 ignition[630]: Ignition finished successfully Mar 17 20:56:55.966486 ignition[729]: Ignition 2.14.0 Mar 17 20:56:55.966507 ignition[729]: Stage: fetch Mar 17 20:56:55.966716 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:55.966752 ignition[729]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:55.969311 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:55.969492 ignition[729]: parsed url from cmdline: "" Mar 17 20:56:55.969500 ignition[729]: no config URL provided Mar 17 
20:56:55.969509 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:56:55.969525 ignition[729]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:56:55.971474 ignition[729]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 20:56:55.971520 ignition[729]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 20:56:55.971549 ignition[729]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 17 20:56:55.994135 ignition[729]: GET result: OK Mar 17 20:56:55.994333 ignition[729]: parsing config with SHA512: 18658bd12dbb1c00b1dd78bf4e8697e5e9c9e4cbdcc42ff25e863407d82cdd5b426fffb9255a51cde541b05916727fd219b3ec1eea9f529b4f86db299ac6f538 Mar 17 20:56:56.006031 unknown[729]: fetched base config from "system" Mar 17 20:56:56.006947 unknown[729]: fetched base config from "system" Mar 17 20:56:56.007689 unknown[729]: fetched user config from "openstack" Mar 17 20:56:56.009314 ignition[729]: fetch: fetch complete Mar 17 20:56:56.009991 ignition[729]: fetch: fetch passed Mar 17 20:56:56.010753 ignition[729]: Ignition finished successfully Mar 17 20:56:56.013166 systemd[1]: Finished ignition-fetch.service. Mar 17 20:56:56.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.015215 systemd[1]: Starting ignition-kargs.service... 
Mar 17 20:56:56.026329 ignition[735]: Ignition 2.14.0 Mar 17 20:56:56.026349 ignition[735]: Stage: kargs Mar 17 20:56:56.026505 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:56.026553 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:56.027749 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:56.029368 ignition[735]: kargs: kargs passed Mar 17 20:56:56.030485 systemd[1]: Finished ignition-kargs.service. Mar 17 20:56:56.029441 ignition[735]: Ignition finished successfully Mar 17 20:56:56.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.032917 systemd[1]: Starting ignition-disks.service... Mar 17 20:56:56.046680 ignition[741]: Ignition 2.14.0 Mar 17 20:56:56.046699 ignition[741]: Stage: disks Mar 17 20:56:56.046860 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:56.046894 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:56.048154 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:56.049834 ignition[741]: disks: disks passed Mar 17 20:56:56.049929 ignition[741]: Ignition finished successfully Mar 17 20:56:56.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.050910 systemd[1]: Finished ignition-disks.service. Mar 17 20:56:56.051905 systemd[1]: Reached target initrd-root-device.target. 
Mar 17 20:56:56.052567 systemd[1]: Reached target local-fs-pre.target. Mar 17 20:56:56.053774 systemd[1]: Reached target local-fs.target. Mar 17 20:56:56.054944 systemd[1]: Reached target sysinit.target. Mar 17 20:56:56.056186 systemd[1]: Reached target basic.target. Mar 17 20:56:56.058636 systemd[1]: Starting systemd-fsck-root.service... Mar 17 20:56:56.080021 systemd-fsck[748]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 20:56:56.084163 systemd[1]: Finished systemd-fsck-root.service. Mar 17 20:56:56.085884 systemd[1]: Mounting sysroot.mount... Mar 17 20:56:56.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.098089 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 20:56:56.098757 systemd[1]: Mounted sysroot.mount. Mar 17 20:56:56.099596 systemd[1]: Reached target initrd-root-fs.target. Mar 17 20:56:56.102192 systemd[1]: Mounting sysroot-usr.mount... Mar 17 20:56:56.103315 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 20:56:56.104298 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 17 20:56:56.107326 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 20:56:56.107378 systemd[1]: Reached target ignition-diskful.target. Mar 17 20:56:56.111203 systemd[1]: Mounted sysroot-usr.mount. Mar 17 20:56:56.113463 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 20:56:56.120459 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 20:56:56.137256 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Mar 17 20:56:56.149992 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 20:56:56.158160 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 20:56:56.218999 systemd[1]: Finished initrd-setup-root.service. Mar 17 20:56:56.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.220783 systemd[1]: Starting ignition-mount.service... Mar 17 20:56:56.226815 systemd[1]: Starting sysroot-boot.service... Mar 17 20:56:56.345804 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 20:56:56.349886 coreos-metadata[754]: Mar 17 20:56:56.349 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 20:56:56.363929 ignition[803]: INFO : Ignition 2.14.0 Mar 17 20:56:56.363929 ignition[803]: INFO : Stage: mount Mar 17 20:56:56.365466 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:56.365466 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:56.371002 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:56.371002 ignition[803]: INFO : mount: mount passed Mar 17 20:56:56.371002 ignition[803]: INFO : Ignition finished successfully Mar 17 20:56:56.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:56:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.389379 coreos-metadata[754]: Mar 17 20:56:56.367 INFO Fetch successful Mar 17 20:56:56.389379 coreos-metadata[754]: Mar 17 20:56:56.367 INFO wrote hostname srv-be9pf.gb1.brightbox.com to /sysroot/etc/hostname Mar 17 20:56:56.369864 systemd[1]: Finished ignition-mount.service. Mar 17 20:56:56.371886 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 20:56:56.372018 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 20:56:56.397718 systemd[1]: Finished sysroot-boot.service. Mar 17 20:56:56.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:56:56.465779 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 20:56:56.482100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (813) Mar 17 20:56:56.485757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:56:56.485793 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:56:56.485835 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:56:56.492767 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 20:56:56.494662 systemd[1]: Starting ignition-files.service... 
Mar 17 20:56:56.520962 ignition[833]: INFO : Ignition 2.14.0 Mar 17 20:56:56.520962 ignition[833]: INFO : Stage: files Mar 17 20:56:56.522648 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:56:56.522648 ignition[833]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:56:56.522648 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:56:56.525753 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Mar 17 20:56:56.525753 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 20:56:56.525753 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 20:56:56.528693 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 20:56:56.530161 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 20:56:56.531977 unknown[833]: wrote ssh authorized keys file for user: core Mar 17 20:56:56.533578 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 20:56:56.535847 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 20:56:56.536963 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 20:56:56.536963 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 20:56:56.536963 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 20:56:57.002381 systemd-networkd[706]: eth0: Gained IPv6LL 
Mar 17 20:56:58.217635 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 20:56:58.516714 systemd-networkd[706]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d38a:24:19ff:fef3:4e2a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d38a:24:19ff:fef3:4e2a/64 assigned by NDisc. Mar 17 20:56:58.516727 systemd-networkd[706]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 20:57:03.248544 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 20:57:03.250332 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:57:03.250332 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 20:57:03.837469 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 17 20:57:04.098739 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 20:57:04.098739 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 20:57:04.101094 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 20:57:04.115165 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 20:57:04.599915 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 17 20:57:05.665878 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 20:57:05.667718 
ignition[833]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 20:57:05.668678 ignition[833]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 20:57:05.669833 ignition[833]: INFO : files: op(e): [started] processing unit "containerd.service" Mar 17 20:57:05.671085 ignition[833]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(e): [finished] processing unit "containerd.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Mar 17 20:57:05.673433 ignition[833]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 20:57:05.697921 kernel: kauditd_printk_skb: 28 callbacks suppressed Mar 17 
20:57:05.697961 kernel: audit: type=1130 audit(1742245025.686:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.683907 systemd[1]: Finished ignition-files.service. Mar 17 20:57:05.699762 ignition[833]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:57:05.699762 ignition[833]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 20:57:05.699762 ignition[833]: INFO : files: files passed Mar 17 20:57:05.699762 ignition[833]: INFO : Ignition finished successfully Mar 17 20:57:05.713957 kernel: audit: type=1130 audit(1742245025.703:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.714029 kernel: audit: type=1131 audit(1742245025.703:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.688171 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Mar 17 20:57:05.720371 kernel: audit: type=1130 audit(1742245025.714:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.694459 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 20:57:05.722094 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 20:57:05.695467 systemd[1]: Starting ignition-quench.service... Mar 17 20:57:05.700925 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 20:57:05.701050 systemd[1]: Finished ignition-quench.service. Mar 17 20:57:05.713927 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 20:57:05.714837 systemd[1]: Reached target ignition-complete.target. Mar 17 20:57:05.721964 systemd[1]: Starting initrd-parse-etc.service... Mar 17 20:57:05.741747 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 20:57:05.752758 kernel: audit: type=1130 audit(1742245025.743:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.752814 kernel: audit: type=1131 audit(1742245025.743:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:57:05.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.741877 systemd[1]: Finished initrd-parse-etc.service. Mar 17 20:57:05.743428 systemd[1]: Reached target initrd-fs.target. Mar 17 20:57:05.753390 systemd[1]: Reached target initrd.target. Mar 17 20:57:05.754758 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 20:57:05.755822 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 20:57:05.773024 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 20:57:05.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.775102 systemd[1]: Starting initrd-cleanup.service... Mar 17 20:57:05.781541 kernel: audit: type=1130 audit(1742245025.773:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.788719 systemd[1]: Stopped target nss-lookup.target. Mar 17 20:57:05.790224 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 20:57:05.791766 systemd[1]: Stopped target timers.target. Mar 17 20:57:05.793126 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 20:57:05.794091 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 20:57:05.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 20:57:05.796372 systemd[1]: Stopped target initrd.target. Mar 17 20:57:05.802442 kernel: audit: type=1131 audit(1742245025.795:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.801033 systemd[1]: Stopped target basic.target. Mar 17 20:57:05.801823 systemd[1]: Stopped target ignition-complete.target. Mar 17 20:57:05.803180 systemd[1]: Stopped target ignition-diskful.target. Mar 17 20:57:05.804480 systemd[1]: Stopped target initrd-root-device.target. Mar 17 20:57:05.805752 systemd[1]: Stopped target remote-fs.target. Mar 17 20:57:05.806973 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 20:57:05.808265 systemd[1]: Stopped target sysinit.target. Mar 17 20:57:05.809424 systemd[1]: Stopped target local-fs.target. Mar 17 20:57:05.810598 systemd[1]: Stopped target local-fs-pre.target. Mar 17 20:57:05.811799 systemd[1]: Stopped target swap.target. Mar 17 20:57:05.819252 kernel: audit: type=1131 audit(1742245025.814:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.812890 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 20:57:05.813142 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 20:57:05.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:05.814401 systemd[1]: Stopped target cryptsetup.target. 
Mar 17 20:57:05.827834 kernel: audit: type=1131 audit(1742245025.821:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.820004 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 20:57:05.820268 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 20:57:05.821407 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 20:57:05.821639 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 20:57:05.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.831613 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 20:57:05.832654 systemd[1]: Stopped ignition-files.service.
Mar 17 20:57:05.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.839725 iscsid[715]: iscsid shutting down.
Mar 17 20:57:05.835167 systemd[1]: Stopping ignition-mount.service...
Mar 17 20:57:05.838747 systemd[1]: Stopping iscsid.service...
Mar 17 20:57:05.839547 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 20:57:05.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.839846 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 20:57:05.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.856953 ignition[871]: INFO : Ignition 2.14.0
Mar 17 20:57:05.856953 ignition[871]: INFO : Stage: umount
Mar 17 20:57:05.856953 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 20:57:05.856953 ignition[871]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 20:57:05.856953 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 20:57:05.842712 systemd[1]: Stopping sysroot-boot.service...
Mar 17 20:57:05.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.863552 ignition[871]: INFO : umount: umount passed
Mar 17 20:57:05.863552 ignition[871]: INFO : Ignition finished successfully
Mar 17 20:57:05.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.852168 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 20:57:05.852520 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 20:57:05.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.853326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 20:57:05.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.853485 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 20:57:05.859420 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 20:57:05.859640 systemd[1]: Stopped iscsid.service.
Mar 17 20:57:05.862741 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 20:57:05.862880 systemd[1]: Stopped ignition-mount.service.
Mar 17 20:57:05.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.865865 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 20:57:05.865987 systemd[1]: Finished initrd-cleanup.service.
Mar 17 20:57:05.868176 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 20:57:05.868249 systemd[1]: Stopped ignition-disks.service.
Mar 17 20:57:05.869898 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 20:57:05.869986 systemd[1]: Stopped ignition-kargs.service.
Mar 17 20:57:05.870617 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 20:57:05.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.870674 systemd[1]: Stopped ignition-fetch.service.
Mar 17 20:57:05.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.871294 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 20:57:05.871363 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 20:57:05.871986 systemd[1]: Stopped target paths.target.
Mar 17 20:57:05.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.872550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 20:57:05.876125 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 20:57:05.876740 systemd[1]: Stopped target slices.target.
Mar 17 20:57:05.877324 systemd[1]: Stopped target sockets.target.
Mar 17 20:57:05.877910 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 20:57:05.877967 systemd[1]: Closed iscsid.socket.
Mar 17 20:57:05.878525 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 20:57:05.878608 systemd[1]: Stopped ignition-setup.service.
Mar 17 20:57:05.879675 systemd[1]: Stopping iscsiuio.service...
Mar 17 20:57:05.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.884664 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 20:57:05.885286 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 20:57:05.885466 systemd[1]: Stopped iscsiuio.service.
Mar 17 20:57:05.886932 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 20:57:05.887060 systemd[1]: Stopped sysroot-boot.service.
Mar 17 20:57:05.888029 systemd[1]: Stopped target network.target.
Mar 17 20:57:05.889101 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 20:57:05.889173 systemd[1]: Closed iscsiuio.socket.
Mar 17 20:57:05.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.890491 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 20:57:05.910000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 20:57:05.890556 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 20:57:05.891910 systemd[1]: Stopping systemd-networkd.service...
Mar 17 20:57:05.893675 systemd[1]: Stopping systemd-resolved.service...
Mar 17 20:57:05.897158 systemd-networkd[706]: eth0: DHCPv6 lease lost
Mar 17 20:57:05.916000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 20:57:05.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.901418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 20:57:05.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.901606 systemd[1]: Stopped systemd-resolved.service.
Mar 17 20:57:05.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.903851 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 20:57:05.904012 systemd[1]: Stopped systemd-networkd.service.
Mar 17 20:57:05.910498 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 20:57:05.910569 systemd[1]: Closed systemd-networkd.socket.
Mar 17 20:57:05.912727 systemd[1]: Stopping network-cleanup.service...
Mar 17 20:57:05.913612 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 20:57:05.913698 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 20:57:05.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.917148 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 20:57:05.917215 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 20:57:05.918715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 20:57:05.918779 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 20:57:05.934011 systemd[1]: Stopping systemd-udevd.service...
Mar 17 20:57:05.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.937727 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 20:57:05.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.938491 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 20:57:05.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.938702 systemd[1]: Stopped systemd-udevd.service.
Mar 17 20:57:05.941573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 20:57:05.941668 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 20:57:05.945171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 20:57:05.945227 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 20:57:05.946650 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 20:57:05.946747 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 20:57:05.947868 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 20:57:05.947929 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 20:57:05.949016 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 20:57:05.949204 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 20:57:05.952004 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 20:57:05.960750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 20:57:05.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.960860 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 20:57:05.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.963771 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 20:57:05.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:05.963931 systemd[1]: Stopped network-cleanup.service.
Mar 17 20:57:05.964934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 20:57:05.965073 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 20:57:05.966050 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 20:57:05.968429 systemd[1]: Starting initrd-switch-root.service...
Mar 17 20:57:05.980329 systemd[1]: Switching root.
Mar 17 20:57:05.986000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 20:57:05.986000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 20:57:05.987000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 20:57:05.987000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 20:57:05.987000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 20:57:06.006465 systemd-journald[202]: Journal stopped
Mar 17 20:57:10.850499 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 17 20:57:10.850774 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 20:57:10.850834 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 20:57:10.850868 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 20:57:10.850918 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 20:57:10.850961 kernel: SELinux: policy capability open_perms=1
Mar 17 20:57:10.851005 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 20:57:10.851037 kernel: SELinux: policy capability always_check_network=0
Mar 17 20:57:10.851080 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 20:57:10.851108 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 20:57:10.851147 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 20:57:10.851190 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 20:57:10.851243 systemd[1]: Successfully loaded SELinux policy in 99.892ms.
Mar 17 20:57:10.851318 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.449ms.
Mar 17 20:57:10.851345 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 20:57:10.851373 systemd[1]: Detected virtualization kvm.
Mar 17 20:57:10.851414 systemd[1]: Detected architecture x86-64.
Mar 17 20:57:10.851447 systemd[1]: Detected first boot.
Mar 17 20:57:10.851476 systemd[1]: Hostname set to .
Mar 17 20:57:10.851510 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 20:57:10.851556 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 20:57:10.851611 systemd[1]: Populated /etc with preset unit settings.
Mar 17 20:57:10.851642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 20:57:10.851720 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 20:57:10.851744 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 20:57:10.851785 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 20:57:10.851830 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 20:57:10.851851 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 20:57:10.851877 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 20:57:10.851910 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 20:57:10.851952 systemd[1]: Created slice system-getty.slice.
Mar 17 20:57:10.851972 systemd[1]: Created slice system-modprobe.slice.
Mar 17 20:57:10.851991 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 20:57:10.852038 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 20:57:10.852081 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 20:57:10.852123 systemd[1]: Created slice user.slice.
Mar 17 20:57:10.852163 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 20:57:10.852200 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 20:57:10.852240 systemd[1]: Set up automount boot.automount.
Mar 17 20:57:10.852262 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 20:57:10.852290 systemd[1]: Reached target integritysetup.target.
Mar 17 20:57:10.852334 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 20:57:10.852364 systemd[1]: Reached target remote-fs.target.
Mar 17 20:57:10.852385 systemd[1]: Reached target slices.target.
Mar 17 20:57:10.852416 systemd[1]: Reached target swap.target.
Mar 17 20:57:10.852443 systemd[1]: Reached target torcx.target.
Mar 17 20:57:10.852466 systemd[1]: Reached target veritysetup.target.
Mar 17 20:57:10.852486 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 20:57:10.852531 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 20:57:10.852570 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 20:57:10.852592 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 20:57:10.852611 systemd[1]: Listening on systemd-journald.socket.
Mar 17 20:57:10.852639 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 20:57:10.852678 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 20:57:10.852701 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 20:57:10.852721 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 20:57:10.852741 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 20:57:10.852760 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 20:57:10.852780 systemd[1]: Mounting media.mount...
Mar 17 20:57:10.852813 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 20:57:10.852835 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 20:57:10.852855 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 20:57:10.852887 systemd[1]: Mounting tmp.mount...
Mar 17 20:57:10.852908 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 20:57:10.852949 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 20:57:10.852971 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 20:57:10.853011 systemd[1]: Starting modprobe@configfs.service...
Mar 17 20:57:10.853048 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 20:57:10.853082 systemd[1]: Starting modprobe@drm.service...
Mar 17 20:57:10.853112 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 20:57:10.853134 systemd[1]: Starting modprobe@fuse.service...
Mar 17 20:57:10.853154 systemd[1]: Starting modprobe@loop.service...
Mar 17 20:57:10.853193 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 20:57:10.853226 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 20:57:10.853248 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 20:57:10.853287 systemd[1]: Starting systemd-journald.service...
Mar 17 20:57:10.853322 systemd[1]: Starting systemd-modules-load.service...
Mar 17 20:57:10.853345 systemd[1]: Starting systemd-network-generator.service...
Mar 17 20:57:10.853364 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 20:57:10.853384 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 20:57:10.853411 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 20:57:10.853440 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 20:57:10.853461 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 20:57:10.853493 systemd[1]: Mounted media.mount.
Mar 17 20:57:10.853530 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 20:57:10.853566 kernel: loop: module loaded
Mar 17 20:57:10.853588 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 20:57:10.853629 kernel: kauditd_printk_skb: 50 callbacks suppressed
Mar 17 20:57:10.853651 kernel: audit: type=1305 audit(1742245030.835:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 20:57:10.853678 systemd[1]: Mounted tmp.mount.
Mar 17 20:57:10.853699 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 20:57:10.853737 kernel: audit: type=1300 audit(1742245030.835:92): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc26f1e20 a2=4000 a3=7ffdc26f1ebc items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 20:57:10.853759 kernel: audit: type=1327 audit(1742245030.835:92): proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 20:57:10.853794 systemd-journald[1028]: Journal started
Mar 17 20:57:10.853923 systemd-journald[1028]: Runtime Journal (/run/log/journal/a23f03c4c016416684a24056fda67c76) is 4.7M, max 38.1M, 33.3M free.
Mar 17 20:57:10.619000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 20:57:10.619000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 20:57:10.835000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 20:57:10.835000 audit[1028]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdc26f1e20 a2=4000 a3=7ffdc26f1ebc items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 20:57:10.835000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 20:57:10.876332 kernel: audit: type=1130 audit(1742245030.860:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.876445 systemd[1]: Started systemd-journald.service.
Mar 17 20:57:10.876484 kernel: audit: type=1130 audit(1742245030.869:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.876510 kernel: fuse: init (API version 7.34)
Mar 17 20:57:10.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.871910 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 20:57:10.877059 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 20:57:10.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.877370 systemd[1]: Finished modprobe@configfs.service.
Mar 17 20:57:10.883105 kernel: audit: type=1130 audit(1742245030.876:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.883346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 20:57:10.883589 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 20:57:10.889152 kernel: audit: type=1130 audit(1742245030.882:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.889469 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 20:57:10.889721 systemd[1]: Finished modprobe@drm.service.
Mar 17 20:57:10.890690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 20:57:10.890956 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 20:57:10.892004 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 20:57:10.892304 systemd[1]: Finished modprobe@fuse.service.
Mar 17 20:57:10.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.893711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 20:57:10.893958 systemd[1]: Finished modprobe@loop.service.
Mar 17 20:57:10.902235 kernel: audit: type=1131 audit(1742245030.882:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.902294 kernel: audit: type=1130 audit(1742245030.888:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.899668 systemd[1]: Finished systemd-modules-load.service.
Mar 17 20:57:10.900720 systemd[1]: Finished systemd-network-generator.service.
Mar 17 20:57:10.901797 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 20:57:10.908774 kernel: audit: type=1131 audit(1742245030.888:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.913756 systemd[1]: Reached target network-pre.target.
Mar 17 20:57:10.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.916561 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 20:57:10.923227 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 20:57:10.923898 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 20:57:10.928209 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 20:57:10.939466 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 20:57:10.945394 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 20:57:10.947528 systemd[1]: Starting systemd-random-seed.service...
Mar 17 20:57:10.948399 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 20:57:10.950628 systemd[1]: Starting systemd-sysctl.service...
Mar 17 20:57:10.953300 systemd[1]: Starting systemd-sysusers.service...
Mar 17 20:57:10.956465 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 20:57:10.957503 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 20:57:10.958774 systemd-journald[1028]: Time spent on flushing to /var/log/journal/a23f03c4c016416684a24056fda67c76 is 59.975ms for 1251 entries.
Mar 17 20:57:10.958774 systemd-journald[1028]: System Journal (/var/log/journal/a23f03c4c016416684a24056fda67c76) is 8.0M, max 584.8M, 576.8M free.
Mar 17 20:57:11.037902 systemd-journald[1028]: Received client request to flush runtime journal.
Mar 17 20:57:10.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:10.967428 systemd[1]: Finished systemd-random-seed.service.
Mar 17 20:57:10.968230 systemd[1]: Reached target first-boot-complete.target.
Mar 17 20:57:11.018914 systemd[1]: Finished systemd-sysctl.service.
Mar 17 20:57:11.030672 systemd[1]: Finished systemd-sysusers.service.
Mar 17 20:57:11.033854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 20:57:11.038899 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 20:57:11.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.112943 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 20:57:11.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.115700 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 20:57:11.153201 udevadm[1066]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 20:57:11.273964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 20:57:11.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.845438 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 20:57:11.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.848171 systemd[1]: Starting systemd-udevd.service...
Mar 17 20:57:11.875696 systemd-udevd[1069]: Using default interface naming scheme 'v252'.
Mar 17 20:57:11.907287 systemd[1]: Started systemd-udevd.service.
Mar 17 20:57:11.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:11.916169 systemd[1]: Starting systemd-networkd.service...
Mar 17 20:57:11.930410 systemd[1]: Starting systemd-userdbd.service...
Mar 17 20:57:12.002412 systemd[1]: Found device dev-ttyS0.device.
Mar 17 20:57:12.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.006815 systemd[1]: Started systemd-userdbd.service.
Mar 17 20:57:12.068602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 20:57:12.125131 systemd-networkd[1079]: lo: Link UP
Mar 17 20:57:12.125144 systemd-networkd[1079]: lo: Gained carrier
Mar 17 20:57:12.126016 systemd-networkd[1079]: Enumeration completed
Mar 17 20:57:12.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.126203 systemd[1]: Started systemd-networkd.service.
Mar 17 20:57:12.126205 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 20:57:12.130766 systemd-networkd[1079]: eth0: Link UP
Mar 17 20:57:12.130779 systemd-networkd[1079]: eth0: Gained carrier
Mar 17 20:57:12.146098 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 20:57:12.149302 systemd-networkd[1079]: eth0: DHCPv4 address 10.243.78.42/30, gateway 10.243.78.41 acquired from 10.243.78.41
Mar 17 20:57:12.160080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 20:57:12.180218 kernel: ACPI: button: Power Button [PWRF]
Mar 17 20:57:12.196000 audit[1075]: AVC avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 20:57:12.196000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55655fd686a0 a1=338ac a2=7f90f20b5bc5 a3=5 items=110 ppid=1069 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 20:57:12.196000 audit: CWD cwd="/"
Mar 17 20:57:12.196000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=1 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=2 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=3 name=(null) inode=15707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=4 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=5 name=(null) inode=15708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=6 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=7 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=8 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=9 name=(null) inode=15710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=10 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=11 name=(null) inode=15711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=12 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=13 name=(null) inode=15712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=14 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=15 name=(null) inode=15713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=16 name=(null) inode=15709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=17 name=(null) inode=15714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=18 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=19 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=20 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=21 name=(null) inode=15716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=22 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=23 name=(null) inode=15717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=24 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=25 name=(null) inode=15718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=26 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=27 name=(null) inode=15719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=28 name=(null) inode=15715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=29 name=(null) inode=15720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=30 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=31 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=32 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=33 name=(null) inode=15722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=34 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=35 name=(null) inode=15723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=36 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=37 name=(null) inode=15724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=38 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=39 name=(null) inode=15725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=40 name=(null) inode=15721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=41 name=(null) inode=15726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=42 name=(null) inode=15706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=43 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=44 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=45 name=(null) inode=15728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=46 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=47 name=(null) inode=15729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=48 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=49 name=(null) inode=15730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=50 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=51 name=(null) inode=15731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=52 name=(null) inode=15727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=53 name=(null) inode=15732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=55 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=56 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=57 name=(null) inode=15734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=58 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=59 name=(null) inode=15735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=60 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=61 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=62 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=63 name=(null) inode=15737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=64 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=65 name=(null) inode=15738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=66 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=67 name=(null) inode=15739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=68 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=69 name=(null) inode=15740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=70 name=(null) inode=15736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=71 name=(null) inode=15741 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=72 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=73 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=74 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=75 name=(null) inode=15743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=76 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=77 name=(null) inode=15744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=78 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=79 name=(null) inode=15745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=80 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=81 name=(null) inode=15746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=82 name=(null) inode=15742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=83 name=(null) inode=15747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=84 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=85 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=86 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=87 name=(null) inode=15749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=88 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=89 name=(null) inode=15750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=90 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=91 name=(null) inode=15751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=92 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=93 name=(null) inode=15752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=94 name=(null) inode=15748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=95 name=(null) inode=15753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=96 name=(null) inode=15733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=97 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=98 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=99 name=(null) inode=15755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=100 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=101 name=(null) inode=15756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=102 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=103 name=(null) inode=15757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=104 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=105 name=(null) inode=15758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=106 name=(null) inode=15754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=107 name=(null) inode=15759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PATH item=109 name=(null) inode=15762 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 20:57:12.196000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 20:57:12.269104 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 20:57:12.283303 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 20:57:12.283560 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 20:57:12.283751 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 20:57:12.420743 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 20:57:12.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.423389 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 20:57:12.451463 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 20:57:12.488966 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 20:57:12.489931 systemd[1]: Reached target cryptsetup.target.
Mar 17 20:57:12.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.492957 systemd[1]: Starting lvm2-activation.service...
Mar 17 20:57:12.500502 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 20:57:12.527641 systemd[1]: Finished lvm2-activation.service.
Mar 17 20:57:12.528499 systemd[1]: Reached target local-fs-pre.target.
Mar 17 20:57:12.529136 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 20:57:12.529183 systemd[1]: Reached target local-fs.target.
Mar 17 20:57:12.529757 systemd[1]: Reached target machines.target.
Mar 17 20:57:12.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.532258 systemd[1]: Starting ldconfig.service...
Mar 17 20:57:12.534286 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 20:57:12.534368 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 20:57:12.536169 systemd[1]: Starting systemd-boot-update.service...
Mar 17 20:57:12.538214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 20:57:12.541263 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 20:57:12.543845 systemd[1]: Starting systemd-sysext.service...
Mar 17 20:57:12.553750 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl)
Mar 17 20:57:12.556233 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 20:57:12.579456 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 20:57:12.585450 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 20:57:12.585777 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 20:57:12.706118 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 20:57:12.717362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 20:57:12.718214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 20:57:12.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.719435 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 20:57:12.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:57:12.744264 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 20:57:12.771016 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 20:57:12.796228 (sd-sysext)[1121]: Using extensions 'kubernetes'.
Mar 17 20:57:12.797715 (sd-sysext)[1121]: Merged extensions into '/usr'.
Mar 17 20:57:12.814882 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31)
Mar 17 20:57:12.814882 systemd-fsck[1118]: /dev/vda1: 789 files, 119299/258078 clusters
Mar 17 20:57:12.835869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 20:57:12.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.841866 systemd[1]: Mounting boot.mount... Mar 17 20:57:12.842631 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:57:12.845485 systemd[1]: Mounting usr-share-oem.mount... Mar 17 20:57:12.846577 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:57:12.856367 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:57:12.861653 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:57:12.866545 systemd[1]: Starting modprobe@loop.service... Mar 17 20:57:12.869472 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:57:12.869707 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:57:12.869936 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:57:12.877670 systemd[1]: Mounted usr-share-oem.mount. Mar 17 20:57:12.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.880267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 17 20:57:12.881125 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:57:12.886158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:57:12.886413 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:57:12.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.892576 systemd[1]: Mounted boot.mount. Mar 17 20:57:12.893820 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:57:12.894123 systemd[1]: Finished modprobe@loop.service. Mar 17 20:57:12.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.895395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:57:12.895548 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:57:12.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:12.902786 systemd[1]: Finished systemd-sysext.service. 
Mar 17 20:57:12.906540 systemd[1]: Starting ensure-sysext.service... Mar 17 20:57:12.911607 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 20:57:12.923849 systemd[1]: Reloading. Mar 17 20:57:12.939910 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 20:57:12.942405 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 20:57:12.956707 systemd-tmpfiles[1140]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 20:57:13.141418 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-03-17T20:57:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:57:13.143968 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-03-17T20:57:13Z" level=info msg="torcx already run" Mar 17 20:57:13.288160 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 20:57:13.338814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:57:13.338854 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:57:13.365136 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:57:13.462496 systemd[1]: Finished ldconfig.service. 
Mar 17 20:57:13.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.464037 systemd[1]: Finished systemd-boot-update.service. Mar 17 20:57:13.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.466453 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 20:57:13.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.470895 systemd[1]: Starting audit-rules.service... Mar 17 20:57:13.473732 systemd[1]: Starting clean-ca-certificates.service... Mar 17 20:57:13.491398 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 20:57:13.494500 systemd[1]: Starting systemd-resolved.service... Mar 17 20:57:13.498658 systemd[1]: Starting systemd-timesyncd.service... Mar 17 20:57:13.501373 systemd[1]: Starting systemd-update-utmp.service... Mar 17 20:57:13.506769 systemd[1]: Finished clean-ca-certificates.service. Mar 17 20:57:13.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.517964 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.522467 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:57:13.524759 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:57:13.529088 systemd[1]: Starting modprobe@loop.service... 
Mar 17 20:57:13.529840 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.530102 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:57:13.530327 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:57:13.531807 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:57:13.532037 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:57:13.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.537115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:57:13.537345 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:57:13.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.544773 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.547030 systemd[1]: Starting modprobe@dm_mod.service... 
Mar 17 20:57:13.549710 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:57:13.550000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.551085 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.551311 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:57:13.551541 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:57:13.553346 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:57:13.553585 systemd[1]: Finished modprobe@loop.service. Mar 17 20:57:13.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.556946 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:57:13.557206 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 20:57:13.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.561715 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 20:57:13.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.569178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.572885 systemd[1]: Starting systemd-update-done.service... Mar 17 20:57:13.574997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:57:13.575300 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:57:13.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.576787 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:57:13.591799 systemd[1]: Finished systemd-update-utmp.service. 
Mar 17 20:57:13.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.596259 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:57:13.596638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.598403 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:57:13.601010 systemd[1]: Starting modprobe@drm.service... Mar 17 20:57:13.603835 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:57:13.609326 systemd[1]: Starting modprobe@loop.service... Mar 17 20:57:13.610552 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.610769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:57:13.614478 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 20:57:13.615354 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:57:13.615455 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:57:13.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.616889 systemd[1]: Finished ensure-sysext.service. Mar 17 20:57:13.623845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:57:13.624385 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 20:57:13.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.626738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:57:13.627020 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:57:13.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.630212 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:57:13.630490 systemd[1]: Finished modprobe@loop.service. Mar 17 20:57:13.634223 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:57:13.634318 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.639343 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:57:13.639598 systemd[1]: Finished modprobe@drm.service. 
Mar 17 20:57:13.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.641424 systemd[1]: Finished systemd-update-done.service. Mar 17 20:57:13.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:57:13.679000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 20:57:13.679000 audit[1264]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7af5c2b0 a2=420 a3=0 items=0 ppid=1216 pid=1264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:57:13.679000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 20:57:13.680076 augenrules[1264]: No rules Mar 17 20:57:13.680310 systemd[1]: Finished audit-rules.service. Mar 17 20:57:13.710820 systemd[1]: Started systemd-timesyncd.service. Mar 17 20:57:13.711655 systemd[1]: Reached target time-set.target. 
Mar 17 20:57:13.723491 systemd-resolved[1221]: Positive Trust Anchors: Mar 17 20:57:13.724003 systemd-resolved[1221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:57:13.724184 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 20:57:13.731742 systemd-resolved[1221]: Using system hostname 'srv-be9pf.gb1.brightbox.com'. Mar 17 20:57:13.734598 systemd[1]: Started systemd-resolved.service. Mar 17 20:57:13.735442 systemd[1]: Reached target network.target. Mar 17 20:57:13.736078 systemd[1]: Reached target nss-lookup.target. Mar 17 20:57:13.736701 systemd[1]: Reached target sysinit.target. Mar 17 20:57:13.737385 systemd[1]: Started motdgen.path. Mar 17 20:57:13.737974 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 20:57:13.738934 systemd[1]: Started logrotate.timer. Mar 17 20:57:13.739686 systemd[1]: Started mdadm.timer. Mar 17 20:57:13.740267 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 20:57:13.740878 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 20:57:13.740917 systemd[1]: Reached target paths.target. Mar 17 20:57:13.741493 systemd[1]: Reached target timers.target. Mar 17 20:57:13.742474 systemd[1]: Listening on dbus.socket. Mar 17 20:57:13.745156 systemd[1]: Starting docker.socket... Mar 17 20:57:13.747733 systemd[1]: Listening on sshd.socket. Mar 17 20:57:13.748621 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 20:57:13.749009 systemd[1]: Listening on docker.socket. Mar 17 20:57:13.749859 systemd[1]: Reached target sockets.target. Mar 17 20:57:13.750565 systemd[1]: Reached target basic.target. Mar 17 20:57:13.751535 systemd[1]: System is tainted: cgroupsv1 Mar 17 20:57:13.751600 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.751639 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 20:57:13.753563 systemd[1]: Starting containerd.service... Mar 17 20:57:13.756382 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 20:57:13.759470 systemd[1]: Starting dbus.service... Mar 17 20:57:13.764645 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 20:57:13.768157 systemd[1]: Starting extend-filesystems.service... Mar 17 20:57:13.769010 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 20:57:13.770424 systemd-networkd[1079]: eth0: Gained IPv6LL Mar 17 20:57:13.772879 systemd[1]: Starting motdgen.service... Mar 17 20:57:13.775953 systemd[1]: Starting prepare-helm.service... Mar 17 20:57:13.785997 jq[1277]: false Mar 17 20:57:13.788717 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 20:57:13.796418 systemd[1]: Starting sshd-keygen.service... Mar 17 20:57:13.809044 systemd[1]: Starting systemd-logind.service... Mar 17 20:57:13.812379 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:57:13.812535 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 20:57:13.815046 systemd[1]: Starting update-engine.service... 
Mar 17 20:57:13.818351 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 20:57:13.830342 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 20:57:13.834623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 20:57:13.834983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 20:57:13.837229 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 20:57:13.837862 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 20:57:13.843753 systemd[1]: Reached target network-online.target. Mar 17 20:57:13.849526 systemd[1]: Starting kubelet.service... Mar 17 20:57:13.868501 dbus-daemon[1276]: [system] SELinux support is enabled Mar 17 20:57:13.869263 systemd[1]: Started dbus.service. Mar 17 20:57:13.871215 extend-filesystems[1278]: Found loop1 Mar 17 20:57:13.872299 extend-filesystems[1278]: Found vda Mar 17 20:57:13.872845 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 20:57:13.872895 systemd[1]: Reached target system-config.target. Mar 17 20:57:13.873624 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 20:57:13.873677 systemd[1]: Reached target user-config.target. 
Mar 17 20:57:13.874376 extend-filesystems[1278]: Found vda1 Mar 17 20:57:13.875181 extend-filesystems[1278]: Found vda2 Mar 17 20:57:13.876245 extend-filesystems[1278]: Found vda3 Mar 17 20:57:13.876245 extend-filesystems[1278]: Found usr Mar 17 20:57:13.876245 extend-filesystems[1278]: Found vda4 Mar 17 20:57:13.876245 extend-filesystems[1278]: Found vda6 Mar 17 20:57:13.876245 extend-filesystems[1278]: Found vda7 Mar 17 20:57:13.876245 extend-filesystems[1278]: Found vda9 Mar 17 20:57:13.902404 extend-filesystems[1278]: Checking size of /dev/vda9 Mar 17 20:57:13.894271 systemd[1]: Starting systemd-hostnamed.service... Mar 17 20:57:13.887093 dbus-daemon[1276]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1079 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 20:57:13.907018 jq[1296]: true Mar 17 20:57:13.907668 jq[1314]: true Mar 17 20:57:13.916894 tar[1301]: linux-amd64/helm Mar 17 20:57:13.930432 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 20:57:13.930822 systemd[1]: Finished motdgen.service. Mar 17 20:57:13.960089 extend-filesystems[1278]: Resized partition /dev/vda9 Mar 17 20:57:13.980645 extend-filesystems[1323]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 20:57:13.992927 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 17 20:57:14.030032 update_engine[1294]: I0317 20:57:14.029249 1294 main.cc:92] Flatcar Update Engine starting Mar 17 20:57:14.034341 systemd[1]: Started update-engine.service. Mar 17 20:57:14.037926 systemd[1]: Started locksmithd.service. Mar 17 20:57:14.040367 update_engine[1294]: I0317 20:57:14.040223 1294 update_check_scheduler.cc:74] Next update check in 2m33s Mar 17 20:57:14.237915 bash[1339]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:57:14.236982 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Mar 17 20:57:14.311488 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 17 20:57:14.330820 systemd-logind[1292]: Watching system buttons on /dev/input/event2 (Power Button) Mar 17 20:57:14.330860 systemd-logind[1292]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 20:57:14.336527 extend-filesystems[1323]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 20:57:14.336527 extend-filesystems[1323]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 17 20:57:14.336527 extend-filesystems[1323]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 17 20:57:14.335328 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 20:57:14.348926 extend-filesystems[1278]: Resized filesystem in /dev/vda9 Mar 17 20:57:14.335747 systemd[1]: Finished extend-filesystems.service. Mar 17 20:57:14.340732 systemd-logind[1292]: New seat seat0. Mar 17 20:57:14.347789 systemd[1]: Started systemd-logind.service. Mar 17 20:57:14.408464 env[1302]: time="2025-03-17T20:57:14.408209132Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 20:57:14.471940 dbus-daemon[1276]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 20:57:14.472160 systemd[1]: Started systemd-hostnamed.service. Mar 17 20:57:14.476739 dbus-daemon[1276]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1315 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 20:57:14.482044 systemd[1]: Starting polkit.service... Mar 17 20:57:14.488925 env[1302]: time="2025-03-17T20:57:14.488830698Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 
Mar 17 20:57:14.489955 env[1302]: time="2025-03-17T20:57:14.489925579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:57:14.493445 env[1302]: time="2025-03-17T20:57:14.493394529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:57:14.493514 env[1302]: time="2025-03-17T20:57:14.493442127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:57:14.493808 env[1302]: time="2025-03-17T20:57:14.493771698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:57:14.493808 env[1302]: time="2025-03-17T20:57:14.493805791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 20:57:14.493933 env[1302]: time="2025-03-17T20:57:14.493828194Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 20:57:14.493933 env[1302]: time="2025-03-17T20:57:14.493857921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 20:57:14.494035 env[1302]: time="2025-03-17T20:57:14.494008781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:57:14.498189 env[1302]: time="2025-03-17T20:57:14.498149174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 
Mar 17 20:57:14.498435 env[1302]: time="2025-03-17T20:57:14.498393062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:57:14.498435 env[1302]: time="2025-03-17T20:57:14.498429572Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 20:57:14.498550 env[1302]: time="2025-03-17T20:57:14.498527787Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 20:57:14.498638 env[1302]: time="2025-03-17T20:57:14.498551710Z" level=info msg="metadata content store policy set" policy=shared Mar 17 20:57:14.510318 env[1302]: time="2025-03-17T20:57:14.510279522Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 20:57:14.510443 env[1302]: time="2025-03-17T20:57:14.510333198Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 20:57:14.510443 env[1302]: time="2025-03-17T20:57:14.510357704Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 20:57:14.510554 env[1302]: time="2025-03-17T20:57:14.510530959Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510601 env[1302]: time="2025-03-17T20:57:14.510560123Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510601 env[1302]: time="2025-03-17T20:57:14.510581893Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 
Mar 17 20:57:14.510726 env[1302]: time="2025-03-17T20:57:14.510603067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510726 env[1302]: time="2025-03-17T20:57:14.510626742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510726 env[1302]: time="2025-03-17T20:57:14.510656858Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510726 env[1302]: time="2025-03-17T20:57:14.510676982Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510885 env[1302]: time="2025-03-17T20:57:14.510761706Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.510885 env[1302]: time="2025-03-17T20:57:14.510810777Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 20:57:14.514926 polkitd[1351]: Started polkitd version 121 Mar 17 20:57:14.516444 env[1302]: time="2025-03-17T20:57:14.511044570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 20:57:14.516444 env[1302]: time="2025-03-17T20:57:14.516029049Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 20:57:14.516800 env[1302]: time="2025-03-17T20:57:14.516745870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 20:57:14.516923 env[1302]: time="2025-03-17T20:57:14.516899249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 
Mar 17 20:57:14.516984 env[1302]: time="2025-03-17T20:57:14.516932235Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 20:57:14.531769 env[1302]: time="2025-03-17T20:57:14.531704112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541322 env[1302]: time="2025-03-17T20:57:14.531773894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541409 env[1302]: time="2025-03-17T20:57:14.541364501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541467 env[1302]: time="2025-03-17T20:57:14.541417922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541467 env[1302]: time="2025-03-17T20:57:14.541446177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541576 env[1302]: time="2025-03-17T20:57:14.541467661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541576 env[1302]: time="2025-03-17T20:57:14.541488343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541576 env[1302]: time="2025-03-17T20:57:14.541508463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541576 env[1302]: time="2025-03-17T20:57:14.541537690Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 20:57:14.541892 env[1302]: time="2025-03-17T20:57:14.541863071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.541950 env[1302]: time="2025-03-17T20:57:14.541900940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 17 20:57:14.541950 env[1302]: time="2025-03-17T20:57:14.541925096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 20:57:14.542048 env[1302]: time="2025-03-17T20:57:14.541953350Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 20:57:14.542048 env[1302]: time="2025-03-17T20:57:14.541978718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 20:57:14.542048 env[1302]: time="2025-03-17T20:57:14.541997629Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 20:57:14.546493 env[1302]: time="2025-03-17T20:57:14.546452806Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 20:57:14.546599 env[1302]: time="2025-03-17T20:57:14.546569341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 20:57:14.547746 env[1302]: time="2025-03-17T20:57:14.547651771Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 20:57:14.550744 env[1302]: time="2025-03-17T20:57:14.547768526Z" level=info msg="Connect containerd service" Mar 17 20:57:14.553231 env[1302]: time="2025-03-17T20:57:14.553191933Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 20:57:14.555617 env[1302]: time="2025-03-17T20:57:14.555576337Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:57:14.560302 env[1302]: time="2025-03-17T20:57:14.560271422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 20:57:14.560383 env[1302]: time="2025-03-17T20:57:14.560355093Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 20:57:14.561849 env[1302]: time="2025-03-17T20:57:14.560592527Z" level=info msg="containerd successfully booted in 0.172562s" Mar 17 20:57:14.560752 systemd[1]: Started containerd.service. 
Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564051758Z" level=info msg="Start subscribing containerd event" Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564434984Z" level=info msg="Start recovering state" Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564724387Z" level=info msg="Start event monitor" Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564801460Z" level=info msg="Start snapshots syncer" Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564853340Z" level=info msg="Start cni network conf syncer for default" Mar 17 20:57:14.566080 env[1302]: time="2025-03-17T20:57:14.564876083Z" level=info msg="Start streaming server" Mar 17 20:57:14.577721 polkitd[1351]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 20:57:14.577847 polkitd[1351]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 20:57:14.581439 polkitd[1351]: Finished loading, compiling and executing 2 rules Mar 17 20:57:14.584178 dbus-daemon[1276]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 20:57:14.584445 systemd[1]: Started polkit.service. Mar 17 20:57:14.585697 polkitd[1351]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 20:57:14.602560 systemd-hostnamed[1315]: Hostname set to (static) Mar 17 20:57:14.611182 systemd-networkd[1079]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d38a:24:19ff:fef3:4e2a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d38a:24:19ff:fef3:4e2a/64 assigned by NDisc. Mar 17 20:57:14.611193 systemd-networkd[1079]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 17 20:57:15.467805 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 20:57:15.518907 systemd[1]: Finished sshd-keygen.service. Mar 17 20:57:15.522552 systemd[1]: Starting issuegen.service... 
Mar 17 20:57:15.538328 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 20:57:15.538688 systemd[1]: Finished issuegen.service. Mar 17 20:57:15.541783 systemd[1]: Starting systemd-user-sessions.service... Mar 17 20:57:15.552930 locksmithd[1335]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 20:57:15.560523 systemd[1]: Finished systemd-user-sessions.service. Mar 17 20:57:15.563370 systemd[1]: Started getty@tty1.service. Mar 17 20:57:15.567829 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 20:57:15.569656 systemd[1]: Reached target getty.target. Mar 17 20:57:15.743109 tar[1301]: linux-amd64/LICENSE Mar 17 20:57:15.743852 tar[1301]: linux-amd64/README.md Mar 17 20:57:15.750938 systemd[1]: Finished prepare-helm.service. Mar 17 20:57:16.012512 systemd[1]: Started kubelet.service. Mar 17 20:57:16.801701 kubelet[1392]: E0317 20:57:16.801637 1392 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:57:16.804119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:57:16.804407 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 20:57:20.919572 coreos-metadata[1275]: Mar 17 20:57:20.919 WARN failed to locate config-drive, using the metadata service API instead Mar 17 20:57:20.970530 coreos-metadata[1275]: Mar 17 20:57:20.970 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 20:57:20.990624 coreos-metadata[1275]: Mar 17 20:57:20.990 INFO Fetch successful Mar 17 20:57:20.990808 coreos-metadata[1275]: Mar 17 20:57:20.990 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 20:57:21.017321 coreos-metadata[1275]: Mar 17 20:57:21.017 INFO Fetch successful Mar 17 20:57:21.019131 unknown[1275]: wrote ssh authorized keys file for user: core Mar 17 20:57:21.030902 update-ssh-keys[1403]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:57:21.031460 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 20:57:21.031923 systemd[1]: Reached target multi-user.target. Mar 17 20:57:21.034106 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 20:57:21.044570 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 20:57:21.044913 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 20:57:21.045357 systemd[1]: Startup finished in 14.765s (kernel) + 14.878s (userspace) = 29.644s. Mar 17 20:57:23.962545 systemd[1]: Created slice system-sshd.slice. Mar 17 20:57:23.965127 systemd[1]: Started sshd@0-10.243.78.42:22-139.178.89.65:40958.service. Mar 17 20:57:24.246974 systemd-timesyncd[1224]: Timed out waiting for reply from 91.109.118.94:123 (0.flatcar.pool.ntp.org). Mar 17 20:57:24.257539 systemd-timesyncd[1224]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org). Mar 17 20:57:24.257671 systemd-timesyncd[1224]: Initial clock synchronization to Mon 2025-03-17 20:57:24.425191 UTC. 
Mar 17 20:57:24.875124 sshd[1408]: Accepted publickey for core from 139.178.89.65 port 40958 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 20:57:24.877934 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:24.894461 systemd[1]: Created slice user-500.slice. Mar 17 20:57:24.896247 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 20:57:24.906569 systemd-logind[1292]: New session 1 of user core. Mar 17 20:57:24.916382 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 20:57:24.918518 systemd[1]: Starting user@500.service... Mar 17 20:57:24.927744 (systemd)[1413]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:25.028907 systemd[1413]: Queued start job for default target default.target. Mar 17 20:57:25.029276 systemd[1413]: Reached target paths.target. Mar 17 20:57:25.029305 systemd[1413]: Reached target sockets.target. Mar 17 20:57:25.029325 systemd[1413]: Reached target timers.target. Mar 17 20:57:25.029343 systemd[1413]: Reached target basic.target. Mar 17 20:57:25.029533 systemd[1]: Started user@500.service. Mar 17 20:57:25.030969 systemd[1]: Started session-1.scope. Mar 17 20:57:25.031637 systemd[1413]: Reached target default.target. Mar 17 20:57:25.031910 systemd[1413]: Startup finished in 94ms. Mar 17 20:57:25.672781 systemd[1]: Started sshd@1-10.243.78.42:22-139.178.89.65:40960.service. Mar 17 20:57:26.575437 sshd[1422]: Accepted publickey for core from 139.178.89.65 port 40960 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 20:57:26.577365 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:26.584609 systemd[1]: Started session-2.scope. Mar 17 20:57:26.586139 systemd-logind[1292]: New session 2 of user core. Mar 17 20:57:27.055685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 20:57:27.055959 systemd[1]: Stopped kubelet.service. 
Mar 17 20:57:27.058751 systemd[1]: Starting kubelet.service... Mar 17 20:57:27.206383 sshd[1422]: pam_unix(sshd:session): session closed for user core Mar 17 20:57:27.210235 systemd[1]: sshd@1-10.243.78.42:22-139.178.89.65:40960.service: Deactivated successfully. Mar 17 20:57:27.211782 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 20:57:27.211817 systemd-logind[1292]: Session 2 logged out. Waiting for processes to exit. Mar 17 20:57:27.213677 systemd-logind[1292]: Removed session 2. Mar 17 20:57:27.248387 systemd[1]: Started kubelet.service. Mar 17 20:57:27.355370 systemd[1]: Started sshd@2-10.243.78.42:22-139.178.89.65:40974.service. Mar 17 20:57:27.358192 kubelet[1436]: E0317 20:57:27.358147 1436 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:57:27.362891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:57:27.363214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:57:28.261295 sshd[1444]: Accepted publickey for core from 139.178.89.65 port 40974 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 20:57:28.263649 sshd[1444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:28.272794 systemd[1]: Started session-3.scope. Mar 17 20:57:28.273130 systemd-logind[1292]: New session 3 of user core. Mar 17 20:57:28.882647 sshd[1444]: pam_unix(sshd:session): session closed for user core Mar 17 20:57:28.886403 systemd-logind[1292]: Session 3 logged out. Waiting for processes to exit. Mar 17 20:57:28.886739 systemd[1]: sshd@2-10.243.78.42:22-139.178.89.65:40974.service: Deactivated successfully. 
Mar 17 20:57:28.887918 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 20:57:28.888595 systemd-logind[1292]: Removed session 3. Mar 17 20:57:29.027695 systemd[1]: Started sshd@3-10.243.78.42:22-139.178.89.65:40990.service. Mar 17 20:57:29.928048 sshd[1452]: Accepted publickey for core from 139.178.89.65 port 40990 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 20:57:29.932627 sshd[1452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:29.942064 systemd[1]: Started session-4.scope. Mar 17 20:57:29.943168 systemd-logind[1292]: New session 4 of user core. Mar 17 20:57:30.555389 sshd[1452]: pam_unix(sshd:session): session closed for user core Mar 17 20:57:30.560024 systemd[1]: sshd@3-10.243.78.42:22-139.178.89.65:40990.service: Deactivated successfully. Mar 17 20:57:30.561320 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 20:57:30.563606 systemd-logind[1292]: Session 4 logged out. Waiting for processes to exit. Mar 17 20:57:30.565349 systemd-logind[1292]: Removed session 4. Mar 17 20:57:30.702937 systemd[1]: Started sshd@4-10.243.78.42:22-139.178.89.65:40992.service. Mar 17 20:57:31.600325 sshd[1459]: Accepted publickey for core from 139.178.89.65 port 40992 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 20:57:31.603259 sshd[1459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:57:31.611009 systemd-logind[1292]: New session 5 of user core. Mar 17 20:57:31.612141 systemd[1]: Started session-5.scope. Mar 17 20:57:32.112176 sudo[1463]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 20:57:32.113365 sudo[1463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 20:57:32.204280 systemd[1]: Starting docker.service... 
Mar 17 20:57:32.403512 env[1473]: time="2025-03-17T20:57:32.403099606Z" level=info msg="Starting up" Mar 17 20:57:32.408714 env[1473]: time="2025-03-17T20:57:32.408663731Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 20:57:32.408930 env[1473]: time="2025-03-17T20:57:32.408889055Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 20:57:32.409123 env[1473]: time="2025-03-17T20:57:32.409050560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 20:57:32.409253 env[1473]: time="2025-03-17T20:57:32.409225972Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 20:57:32.422659 env[1473]: time="2025-03-17T20:57:32.422552017Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 20:57:32.422659 env[1473]: time="2025-03-17T20:57:32.422599366Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 20:57:32.422659 env[1473]: time="2025-03-17T20:57:32.422636460Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 20:57:32.422659 env[1473]: time="2025-03-17T20:57:32.422652126Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 20:57:32.435236 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1503241237-merged.mount: Deactivated successfully. Mar 17 20:57:32.603140 env[1473]: time="2025-03-17T20:57:32.603047619Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 20:57:32.603456 env[1473]: time="2025-03-17T20:57:32.603424051Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 20:57:32.604268 env[1473]: time="2025-03-17T20:57:32.604238326Z" level=info msg="Loading containers: start." 
Mar 17 20:57:32.790153 kernel: Initializing XFRM netlink socket Mar 17 20:57:32.839991 env[1473]: time="2025-03-17T20:57:32.839920112Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 20:57:32.935474 systemd-networkd[1079]: docker0: Link UP Mar 17 20:57:32.953442 env[1473]: time="2025-03-17T20:57:32.953381787Z" level=info msg="Loading containers: done." Mar 17 20:57:32.986118 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3406396075-merged.mount: Deactivated successfully. Mar 17 20:57:32.991709 env[1473]: time="2025-03-17T20:57:32.991602175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 20:57:32.992446 env[1473]: time="2025-03-17T20:57:32.992399617Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 20:57:32.992832 env[1473]: time="2025-03-17T20:57:32.992789424Z" level=info msg="Daemon has completed initialization" Mar 17 20:57:33.023270 systemd[1]: Started docker.service. Mar 17 20:57:33.043587 env[1473]: time="2025-03-17T20:57:33.043431241Z" level=info msg="API listen on /run/docker.sock" Mar 17 20:57:34.699114 env[1302]: time="2025-03-17T20:57:34.696594314Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 20:57:35.574150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078632264.mount: Deactivated successfully. Mar 17 20:57:37.616984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 20:57:37.617781 systemd[1]: Stopped kubelet.service. Mar 17 20:57:37.626821 systemd[1]: Starting kubelet.service... Mar 17 20:57:37.974649 systemd[1]: Started kubelet.service. 
Mar 17 20:57:38.117470 kubelet[1612]: E0317 20:57:38.117393 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:57:38.120544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:57:38.120899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:57:38.835658 env[1302]: time="2025-03-17T20:57:38.835503885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:38.840275 env[1302]: time="2025-03-17T20:57:38.840228895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:38.845643 env[1302]: time="2025-03-17T20:57:38.845547622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:38.848559 env[1302]: time="2025-03-17T20:57:38.848505309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:38.850515 env[1302]: time="2025-03-17T20:57:38.850440253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 20:57:38.881745 env[1302]: time="2025-03-17T20:57:38.881686891Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 20:57:41.880017 env[1302]: time="2025-03-17T20:57:41.879763128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:41.883435 env[1302]: time="2025-03-17T20:57:41.883383622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:41.886130 env[1302]: time="2025-03-17T20:57:41.886076047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:41.888988 env[1302]: time="2025-03-17T20:57:41.888949863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:41.890577 env[1302]: time="2025-03-17T20:57:41.890514443Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 20:57:41.908774 env[1302]: time="2025-03-17T20:57:41.908714755Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 20:57:44.402118 env[1302]: time="2025-03-17T20:57:44.401835721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:44.405125 env[1302]: time="2025-03-17T20:57:44.405087253Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:44.407896 env[1302]: time="2025-03-17T20:57:44.407849772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:44.410635 env[1302]: time="2025-03-17T20:57:44.410596569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:44.412102 env[1302]: time="2025-03-17T20:57:44.411991296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 20:57:44.441433 env[1302]: time="2025-03-17T20:57:44.441384166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 20:57:44.658438 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 20:57:46.184608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196297933.mount: Deactivated successfully. 
Mar 17 20:57:47.271936 env[1302]: time="2025-03-17T20:57:47.271870020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:47.274693 env[1302]: time="2025-03-17T20:57:47.274654023Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:47.276462 env[1302]: time="2025-03-17T20:57:47.276425480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:47.278879 env[1302]: time="2025-03-17T20:57:47.278846068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:47.279862 env[1302]: time="2025-03-17T20:57:47.279802427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 20:57:47.304401 env[1302]: time="2025-03-17T20:57:47.304338458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 20:57:47.987915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636092999.mount: Deactivated successfully. Mar 17 20:57:48.373267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 20:57:48.374326 systemd[1]: Stopped kubelet.service. Mar 17 20:57:48.383377 systemd[1]: Starting kubelet.service... Mar 17 20:57:48.822475 systemd[1]: Started kubelet.service. 
Mar 17 20:57:49.011655 kubelet[1652]: E0317 20:57:49.011531 1652 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:57:49.013529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:57:49.013879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:57:49.906728 env[1302]: time="2025-03-17T20:57:49.906471386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:49.910330 env[1302]: time="2025-03-17T20:57:49.910289233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:49.913909 env[1302]: time="2025-03-17T20:57:49.913857784Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:49.920476 env[1302]: time="2025-03-17T20:57:49.920439436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:49.921226 env[1302]: time="2025-03-17T20:57:49.921086521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 20:57:49.952537 env[1302]: time="2025-03-17T20:57:49.952472061Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.9\"" Mar 17 20:57:50.609483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985452597.mount: Deactivated successfully. Mar 17 20:57:50.616661 env[1302]: time="2025-03-17T20:57:50.616601070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:50.620116 env[1302]: time="2025-03-17T20:57:50.620053471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:50.621667 env[1302]: time="2025-03-17T20:57:50.621628666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:50.624960 env[1302]: time="2025-03-17T20:57:50.624920834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 20:57:50.625146 env[1302]: time="2025-03-17T20:57:50.624026344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:50.644983 env[1302]: time="2025-03-17T20:57:50.644922633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 20:57:51.420867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808888364.mount: Deactivated successfully. 
Mar 17 20:57:55.583520 env[1302]: time="2025-03-17T20:57:55.583199553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:55.588536 env[1302]: time="2025-03-17T20:57:55.588490183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:55.591374 env[1302]: time="2025-03-17T20:57:55.591327807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:55.594029 env[1302]: time="2025-03-17T20:57:55.593984686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:57:55.595550 env[1302]: time="2025-03-17T20:57:55.595454040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 20:57:59.182804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 20:57:59.183920 systemd[1]: Stopped kubelet.service. Mar 17 20:57:59.190823 systemd[1]: Starting kubelet.service... Mar 17 20:57:59.416438 update_engine[1294]: I0317 20:57:59.416293 1294 update_attempter.cc:509] Updating boot flags... Mar 17 20:57:59.747264 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 20:57:59.747410 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 20:57:59.747856 systemd[1]: Stopped kubelet.service. Mar 17 20:57:59.752187 systemd[1]: Starting kubelet.service... Mar 17 20:57:59.780490 systemd[1]: Reloading. 
Mar 17 20:57:59.925351 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2025-03-17T20:57:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:57:59.926181 /usr/lib/systemd/system-generators/torcx-generator[1776]: time="2025-03-17T20:57:59Z" level=info msg="torcx already run" Mar 17 20:58:00.054781 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:58:00.055633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:58:00.085258 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:58:00.223655 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 20:58:00.223794 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 20:58:00.224293 systemd[1]: Stopped kubelet.service. Mar 17 20:58:00.227330 systemd[1]: Starting kubelet.service... Mar 17 20:58:00.503137 systemd[1]: Started kubelet.service. Mar 17 20:58:00.601657 kubelet[1843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:58:00.601657 kubelet[1843]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 20:58:00.601657 kubelet[1843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:58:00.603404 kubelet[1843]: I0317 20:58:00.603337 1843 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:58:01.174089 kubelet[1843]: I0317 20:58:01.174019 1843 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 20:58:01.174089 kubelet[1843]: I0317 20:58:01.174091 1843 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:58:01.174411 kubelet[1843]: I0317 20:58:01.174379 1843 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 20:58:01.202176 kubelet[1843]: I0317 20:58:01.202134 1843 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:58:01.204415 kubelet[1843]: E0317 20:58:01.203571 1843 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.78.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.227593 kubelet[1843]: I0317 20:58:01.227548 1843 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:58:01.229857 kubelet[1843]: I0317 20:58:01.229739 1843 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:58:01.230091 kubelet[1843]: I0317 20:58:01.229804 1843 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-be9pf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 20:58:01.230793 kubelet[1843]: I0317 20:58:01.230751 1843 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 20:58:01.230793 kubelet[1843]: I0317 20:58:01.230780 1843 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 20:58:01.231010 kubelet[1843]: I0317 20:58:01.230992 1843 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:58:01.232129 kubelet[1843]: I0317 20:58:01.232095 1843 kubelet.go:400] "Attempting to sync node with API server" Mar 17 20:58:01.232222 kubelet[1843]: I0317 20:58:01.232139 1843 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:58:01.232222 kubelet[1843]: I0317 20:58:01.232205 1843 kubelet.go:312] "Adding apiserver pod source" Mar 17 20:58:01.232391 kubelet[1843]: I0317 20:58:01.232256 1843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:58:01.253948 kubelet[1843]: W0317 20:58:01.253772 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.254540 kubelet[1843]: E0317 20:58:01.254505 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.255039 kubelet[1843]: W0317 20:58:01.254986 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.78.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.255201 kubelet[1843]: E0317 20:58:01.255177 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.243.78.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.255940 kubelet[1843]: I0317 20:58:01.255913 1843 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:58:01.257781 kubelet[1843]: I0317 20:58:01.257754 1843 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:58:01.258165 kubelet[1843]: W0317 20:58:01.258142 1843 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 20:58:01.259695 kubelet[1843]: I0317 20:58:01.259671 1843 server.go:1264] "Started kubelet" Mar 17 20:58:01.264704 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 20:58:01.265090 kubelet[1843]: I0317 20:58:01.265067 1843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:58:01.271897 kubelet[1843]: E0317 20:58:01.269590 1843 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.78.42:6443/api/v1/namespaces/default/events\": dial tcp 10.243.78.42:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-be9pf.gb1.brightbox.com.182db2abac265114 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-be9pf.gb1.brightbox.com,UID:srv-be9pf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-be9pf.gb1.brightbox.com,},FirstTimestamp:2025-03-17 20:58:01.259594004 +0000 UTC m=+0.744115266,LastTimestamp:2025-03-17 20:58:01.259594004 +0000 UTC m=+0.744115266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-be9pf.gb1.brightbox.com,}" Mar 17 
20:58:01.271897 kubelet[1843]: I0317 20:58:01.269879 1843 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:58:01.272414 kubelet[1843]: I0317 20:58:01.272385 1843 server.go:455] "Adding debug handlers to kubelet server" Mar 17 20:58:01.273967 kubelet[1843]: I0317 20:58:01.273877 1843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:58:01.274585 kubelet[1843]: I0317 20:58:01.274557 1843 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:58:01.276755 kubelet[1843]: I0317 20:58:01.276731 1843 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 20:58:01.278476 kubelet[1843]: E0317 20:58:01.277817 1843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.78.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-be9pf.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.78.42:6443: connect: connection refused" interval="200ms" Mar 17 20:58:01.280596 kubelet[1843]: I0317 20:58:01.280564 1843 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:58:01.280764 kubelet[1843]: I0317 20:58:01.280736 1843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:58:01.283157 kubelet[1843]: I0317 20:58:01.283132 1843 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:58:01.283403 kubelet[1843]: I0317 20:58:01.283381 1843 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:58:01.284148 kubelet[1843]: I0317 20:58:01.284122 1843 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:58:01.302275 kubelet[1843]: W0317 20:58:01.302198 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.243.78.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.302556 kubelet[1843]: E0317 20:58:01.302530 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.78.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.302812 kubelet[1843]: I0317 20:58:01.302767 1843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:58:01.310494 kubelet[1843]: I0317 20:58:01.310466 1843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 20:58:01.310673 kubelet[1843]: I0317 20:58:01.310649 1843 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 20:58:01.310832 kubelet[1843]: I0317 20:58:01.310808 1843 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 20:58:01.311092 kubelet[1843]: E0317 20:58:01.311036 1843 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:58:01.325423 kubelet[1843]: W0317 20:58:01.325360 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.78.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.325642 kubelet[1843]: E0317 20:58:01.325616 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.78.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:01.326874 kubelet[1843]: I0317 20:58:01.326847 1843 
cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 20:58:01.326997 kubelet[1843]: I0317 20:58:01.326974 1843 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 20:58:01.327192 kubelet[1843]: I0317 20:58:01.327164 1843 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:58:01.366715 kubelet[1843]: I0317 20:58:01.366660 1843 policy_none.go:49] "None policy: Start" Mar 17 20:58:01.368123 kubelet[1843]: I0317 20:58:01.368100 1843 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 20:58:01.368316 kubelet[1843]: I0317 20:58:01.368294 1843 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:58:01.380335 kubelet[1843]: I0317 20:58:01.380285 1843 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.380994 kubelet[1843]: E0317 20:58:01.380960 1843 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.78.42:6443/api/v1/nodes\": dial tcp 10.243.78.42:6443: connect: connection refused" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.402172 kubelet[1843]: I0317 20:58:01.402131 1843 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:58:01.410791 kubelet[1843]: I0317 20:58:01.410693 1843 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:58:01.411003 kubelet[1843]: I0317 20:58:01.410928 1843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:58:01.411677 kubelet[1843]: I0317 20:58:01.411452 1843 topology_manager.go:215] "Topology Admit Handler" podUID="ef3f78f185cbbd49d47302fd89106b73" podNamespace="kube-system" podName="kube-scheduler-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.414511 kubelet[1843]: I0317 20:58:01.414473 1843 topology_manager.go:215] "Topology Admit Handler" podUID="f0b78320d261f45f8b5a41263f3ad26a" podNamespace="kube-system" 
podName="kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.417612 kubelet[1843]: I0317 20:58:01.417496 1843 topology_manager.go:215] "Topology Admit Handler" podUID="0746a9f4bbb9cd41dd140a6e34047c58" podNamespace="kube-system" podName="kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.418295 kubelet[1843]: E0317 20:58:01.418265 1843 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:01.479023 kubelet[1843]: E0317 20:58:01.478961 1843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.78.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-be9pf.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.78.42:6443: connect: connection refused" interval="400ms" Mar 17 20:58:01.484822 kubelet[1843]: I0317 20:58:01.484784 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-k8s-certs\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.485124 kubelet[1843]: I0317 20:58:01.485091 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-kubeconfig\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.485344 kubelet[1843]: I0317 20:58:01.485304 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.485541 kubelet[1843]: I0317 20:58:01.485512 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef3f78f185cbbd49d47302fd89106b73-kubeconfig\") pod \"kube-scheduler-srv-be9pf.gb1.brightbox.com\" (UID: \"ef3f78f185cbbd49d47302fd89106b73\") " pod="kube-system/kube-scheduler-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.485722 kubelet[1843]: I0317 20:58:01.485696 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-ca-certs\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.485920 kubelet[1843]: I0317 20:58:01.485894 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.486138 kubelet[1843]: I0317 20:58:01.486112 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-ca-certs\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 
20:58:01.486308 kubelet[1843]: I0317 20:58:01.486283 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-flexvolume-dir\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.486490 kubelet[1843]: I0317 20:58:01.486465 1843 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-k8s-certs\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.585249 kubelet[1843]: I0317 20:58:01.585185 1843 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.585739 kubelet[1843]: E0317 20:58:01.585696 1843 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.78.42:6443/api/v1/nodes\": dial tcp 10.243.78.42:6443: connect: connection refused" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.725622 env[1302]: time="2025-03-17T20:58:01.725487650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-be9pf.gb1.brightbox.com,Uid:f0b78320d261f45f8b5a41263f3ad26a,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:01.726765 env[1302]: time="2025-03-17T20:58:01.725450970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-be9pf.gb1.brightbox.com,Uid:ef3f78f185cbbd49d47302fd89106b73,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:01.736196 env[1302]: time="2025-03-17T20:58:01.735621867Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-be9pf.gb1.brightbox.com,Uid:0746a9f4bbb9cd41dd140a6e34047c58,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:01.880174 kubelet[1843]: E0317 20:58:01.880105 1843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.78.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-be9pf.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.78.42:6443: connect: connection refused" interval="800ms" Mar 17 20:58:01.989675 kubelet[1843]: I0317 20:58:01.989385 1843 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:01.989996 kubelet[1843]: E0317 20:58:01.989950 1843 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.78.42:6443/api/v1/nodes\": dial tcp 10.243.78.42:6443: connect: connection refused" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:02.092483 kubelet[1843]: W0317 20:58:02.092321 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.78.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.092483 kubelet[1843]: E0317 20:58:02.092430 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.243.78.42:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.129304 kubelet[1843]: W0317 20:58:02.129188 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.78.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.129304 kubelet[1843]: E0317 20:58:02.129268 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.78.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.192330 kubelet[1843]: W0317 20:58:02.192190 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.78.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.192330 kubelet[1843]: E0317 20:58:02.192293 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.78.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.341099 kubelet[1843]: W0317 20:58:02.340466 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.341099 kubelet[1843]: E0317 20:58:02.340587 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:02.384043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622934969.mount: Deactivated successfully. 
Mar 17 20:58:02.391408 env[1302]: time="2025-03-17T20:58:02.391360365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.395859 env[1302]: time="2025-03-17T20:58:02.395812948Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.397952 env[1302]: time="2025-03-17T20:58:02.397919337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.400721 env[1302]: time="2025-03-17T20:58:02.400639354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.402571 env[1302]: time="2025-03-17T20:58:02.402538622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.404847 env[1302]: time="2025-03-17T20:58:02.404813870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.406706 env[1302]: time="2025-03-17T20:58:02.406616460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.410375 env[1302]: time="2025-03-17T20:58:02.410332627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.412241 env[1302]: time="2025-03-17T20:58:02.412208007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.419604 env[1302]: time="2025-03-17T20:58:02.419529708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.426597 env[1302]: time="2025-03-17T20:58:02.426541573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.440393 env[1302]: time="2025-03-17T20:58:02.440333204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:02.516360 env[1302]: time="2025-03-17T20:58:02.516127502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:02.516632 env[1302]: time="2025-03-17T20:58:02.516344118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:02.516632 env[1302]: time="2025-03-17T20:58:02.516365471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:02.516881 env[1302]: time="2025-03-17T20:58:02.516214398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:02.517078 env[1302]: time="2025-03-17T20:58:02.517017269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:02.517309 env[1302]: time="2025-03-17T20:58:02.517240132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:02.517852 env[1302]: time="2025-03-17T20:58:02.517801326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0524e90d143473e7105ebafff676337462ead2ae93089f524e602cccdec2fa75 pid=1894 runtime=io.containerd.runc.v2 Mar 17 20:58:02.518366 env[1302]: time="2025-03-17T20:58:02.518287918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a14b8fbeed487001d0cd832a9b7208fb4daa565392d3d9c793782e5a440a70f4 pid=1895 runtime=io.containerd.runc.v2 Mar 17 20:58:02.525512 env[1302]: time="2025-03-17T20:58:02.525426534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:02.525669 env[1302]: time="2025-03-17T20:58:02.525487253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:02.525669 env[1302]: time="2025-03-17T20:58:02.525503835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:02.529528 env[1302]: time="2025-03-17T20:58:02.529465151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c858e1e7032f0e0fb5f63ef1d5dc57ca5807d0e33d65c0b7042f1a1429eb174 pid=1901 runtime=io.containerd.runc.v2 Mar 17 20:58:02.682499 kubelet[1843]: E0317 20:58:02.681500 1843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.78.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-be9pf.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.78.42:6443: connect: connection refused" interval="1.6s" Mar 17 20:58:02.695862 env[1302]: time="2025-03-17T20:58:02.695779922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-be9pf.gb1.brightbox.com,Uid:f0b78320d261f45f8b5a41263f3ad26a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a14b8fbeed487001d0cd832a9b7208fb4daa565392d3d9c793782e5a440a70f4\"" Mar 17 20:58:02.696766 env[1302]: time="2025-03-17T20:58:02.696463792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-be9pf.gb1.brightbox.com,Uid:0746a9f4bbb9cd41dd140a6e34047c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"0524e90d143473e7105ebafff676337462ead2ae93089f524e602cccdec2fa75\"" Mar 17 20:58:02.703623 env[1302]: time="2025-03-17T20:58:02.703582882Z" level=info msg="CreateContainer within sandbox \"a14b8fbeed487001d0cd832a9b7208fb4daa565392d3d9c793782e5a440a70f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 20:58:02.706636 env[1302]: time="2025-03-17T20:58:02.706597489Z" level=info msg="CreateContainer within sandbox \"0524e90d143473e7105ebafff676337462ead2ae93089f524e602cccdec2fa75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 20:58:02.740479 env[1302]: time="2025-03-17T20:58:02.740410101Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-be9pf.gb1.brightbox.com,Uid:ef3f78f185cbbd49d47302fd89106b73,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c858e1e7032f0e0fb5f63ef1d5dc57ca5807d0e33d65c0b7042f1a1429eb174\"" Mar 17 20:58:02.744146 env[1302]: time="2025-03-17T20:58:02.744107797Z" level=info msg="CreateContainer within sandbox \"2c858e1e7032f0e0fb5f63ef1d5dc57ca5807d0e33d65c0b7042f1a1429eb174\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 20:58:02.770982 env[1302]: time="2025-03-17T20:58:02.770915290Z" level=info msg="CreateContainer within sandbox \"0524e90d143473e7105ebafff676337462ead2ae93089f524e602cccdec2fa75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"469cda02bd8a961c6a0fa1f8196920474370fe6d8f22b0697b40d0ec3833a8d2\"" Mar 17 20:58:02.772644 env[1302]: time="2025-03-17T20:58:02.772590429Z" level=info msg="StartContainer for \"469cda02bd8a961c6a0fa1f8196920474370fe6d8f22b0697b40d0ec3833a8d2\"" Mar 17 20:58:02.777309 env[1302]: time="2025-03-17T20:58:02.777262611Z" level=info msg="CreateContainer within sandbox \"2c858e1e7032f0e0fb5f63ef1d5dc57ca5807d0e33d65c0b7042f1a1429eb174\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12c6ddf7ae92666f09f02d5be2d552da5f2b857107c0502b498463c55f6f8e12\"" Mar 17 20:58:02.779120 env[1302]: time="2025-03-17T20:58:02.779048715Z" level=info msg="CreateContainer within sandbox \"a14b8fbeed487001d0cd832a9b7208fb4daa565392d3d9c793782e5a440a70f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82d2d34c2b4ab5961fb5aade4fb014f977d04589ad9b9700d6f9adf1dce8a4c2\"" Mar 17 20:58:02.779560 env[1302]: time="2025-03-17T20:58:02.779523064Z" level=info msg="StartContainer for \"12c6ddf7ae92666f09f02d5be2d552da5f2b857107c0502b498463c55f6f8e12\"" Mar 17 20:58:02.779954 env[1302]: time="2025-03-17T20:58:02.779915315Z" level=info msg="StartContainer for 
\"82d2d34c2b4ab5961fb5aade4fb014f977d04589ad9b9700d6f9adf1dce8a4c2\"" Mar 17 20:58:02.795142 kubelet[1843]: I0317 20:58:02.794617 1843 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:02.795646 kubelet[1843]: E0317 20:58:02.795577 1843 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.78.42:6443/api/v1/nodes\": dial tcp 10.243.78.42:6443: connect: connection refused" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:02.933481 env[1302]: time="2025-03-17T20:58:02.932107139Z" level=info msg="StartContainer for \"82d2d34c2b4ab5961fb5aade4fb014f977d04589ad9b9700d6f9adf1dce8a4c2\" returns successfully" Mar 17 20:58:02.989115 env[1302]: time="2025-03-17T20:58:02.989034848Z" level=info msg="StartContainer for \"469cda02bd8a961c6a0fa1f8196920474370fe6d8f22b0697b40d0ec3833a8d2\" returns successfully" Mar 17 20:58:02.999853 env[1302]: time="2025-03-17T20:58:02.999736410Z" level=info msg="StartContainer for \"12c6ddf7ae92666f09f02d5be2d552da5f2b857107c0502b498463c55f6f8e12\" returns successfully" Mar 17 20:58:03.368925 kubelet[1843]: E0317 20:58:03.368761 1843 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.78.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:03.959852 kubelet[1843]: W0317 20:58:03.959791 1843 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:03.960101 kubelet[1843]: E0317 20:58:03.959868 1843 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get "https://10.243.78.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-be9pf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.78.42:6443: connect: connection refused Mar 17 20:58:04.399613 kubelet[1843]: I0317 20:58:04.399230 1843 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:06.213987 kubelet[1843]: E0317 20:58:06.213870 1843 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-be9pf.gb1.brightbox.com\" not found" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:06.292516 kubelet[1843]: I0317 20:58:06.292465 1843 kubelet_node_status.go:76] "Successfully registered node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:06.342559 kubelet[1843]: E0317 20:58:06.342513 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.443755 kubelet[1843]: E0317 20:58:06.443620 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.545450 kubelet[1843]: E0317 20:58:06.545133 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.646670 kubelet[1843]: E0317 20:58:06.646604 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.747684 kubelet[1843]: E0317 20:58:06.747569 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.848724 kubelet[1843]: E0317 20:58:06.848567 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:06.949360 kubelet[1843]: E0317 20:58:06.949303 1843 kubelet_node_status.go:462] 
"Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.050121 kubelet[1843]: E0317 20:58:07.050036 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.150893 kubelet[1843]: E0317 20:58:07.150718 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.251689 kubelet[1843]: E0317 20:58:07.251581 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.352652 kubelet[1843]: E0317 20:58:07.352599 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.453605 kubelet[1843]: E0317 20:58:07.453529 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.556300 kubelet[1843]: E0317 20:58:07.556205 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.657269 kubelet[1843]: E0317 20:58:07.657186 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.758542 kubelet[1843]: E0317 20:58:07.758370 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.859311 kubelet[1843]: E0317 20:58:07.859229 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:07.960147 kubelet[1843]: E0317 20:58:07.960034 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" 
not found" Mar 17 20:58:08.060945 kubelet[1843]: E0317 20:58:08.060735 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.161642 kubelet[1843]: E0317 20:58:08.161549 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.262070 kubelet[1843]: E0317 20:58:08.261955 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.362663 kubelet[1843]: E0317 20:58:08.362515 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.464087 kubelet[1843]: E0317 20:58:08.463964 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.516667 systemd[1]: Reloading. 
Mar 17 20:58:08.564312 kubelet[1843]: E0317 20:58:08.564245 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.645053 /usr/lib/systemd/system-generators/torcx-generator[2138]: time="2025-03-17T20:58:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:58:08.645137 /usr/lib/systemd/system-generators/torcx-generator[2138]: time="2025-03-17T20:58:08Z" level=info msg="torcx already run" Mar 17 20:58:08.665474 kubelet[1843]: E0317 20:58:08.665410 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.763616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:58:08.764091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:58:08.766433 kubelet[1843]: E0317 20:58:08.766379 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.793785 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:58:08.867134 kubelet[1843]: E0317 20:58:08.867069 1843 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-be9pf.gb1.brightbox.com\" not found" Mar 17 20:58:08.941156 systemd[1]: Stopping kubelet.service... 
Mar 17 20:58:08.941800 kubelet[1843]: E0317 20:58:08.940774 1843 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{srv-be9pf.gb1.brightbox.com.182db2abac265114 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-be9pf.gb1.brightbox.com,UID:srv-be9pf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-be9pf.gb1.brightbox.com,},FirstTimestamp:2025-03-17 20:58:01.259594004 +0000 UTC m=+0.744115266,LastTimestamp:2025-03-17 20:58:01.259594004 +0000 UTC m=+0.744115266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-be9pf.gb1.brightbox.com,}" Mar 17 20:58:08.964571 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:58:08.965477 systemd[1]: Stopped kubelet.service. Mar 17 20:58:08.973310 systemd[1]: Starting kubelet.service... Mar 17 20:58:09.647749 systemd[1]: Started sshd@5-10.243.78.42:22-103.218.122.171:49484.service. Mar 17 20:58:10.352185 systemd[1]: Started kubelet.service. Mar 17 20:58:10.498048 sudo[2214]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 20:58:10.499356 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 20:58:10.546647 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:58:10.546647 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 20:58:10.546647 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:58:10.548564 kubelet[2203]: I0317 20:58:10.548008 2203 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:58:10.570738 kubelet[2203]: I0317 20:58:10.570679 2203 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 20:58:10.570738 kubelet[2203]: I0317 20:58:10.570729 2203 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:58:10.571316 kubelet[2203]: I0317 20:58:10.571291 2203 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 20:58:10.583805 kubelet[2203]: I0317 20:58:10.583770 2203 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 20:58:10.592290 kubelet[2203]: I0317 20:58:10.592233 2203 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:58:10.631207 kubelet[2203]: I0317 20:58:10.630441 2203 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:58:10.633417 kubelet[2203]: I0317 20:58:10.633354 2203 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:58:10.633843 kubelet[2203]: I0317 20:58:10.633538 2203 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-be9pf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 20:58:10.634128 kubelet[2203]: I0317 20:58:10.634102 2203 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 20:58:10.634323 kubelet[2203]: I0317 20:58:10.634301 2203 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 20:58:10.634620 kubelet[2203]: I0317 20:58:10.634595 2203 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:58:10.636225 kubelet[2203]: I0317 20:58:10.636199 2203 kubelet.go:400] "Attempting to sync node with API server" Mar 17 20:58:10.636411 kubelet[2203]: I0317 20:58:10.636385 2203 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:58:10.636587 kubelet[2203]: I0317 20:58:10.636563 2203 kubelet.go:312] "Adding apiserver pod source" Mar 17 20:58:10.636769 kubelet[2203]: I0317 20:58:10.636743 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:58:10.639865 kubelet[2203]: I0317 20:58:10.639838 2203 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:58:10.641392 kubelet[2203]: I0317 20:58:10.641366 2203 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:58:10.643098 kubelet[2203]: I0317 20:58:10.643076 2203 server.go:1264] "Started kubelet" Mar 17 20:58:10.667313 kubelet[2203]: I0317 20:58:10.667279 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:58:10.668128 kubelet[2203]: E0317 20:58:10.668099 2203 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 20:58:10.668631 kubelet[2203]: I0317 20:58:10.668587 2203 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:58:10.670445 kubelet[2203]: I0317 20:58:10.670419 2203 server.go:455] "Adding debug handlers to kubelet server" Mar 17 20:58:10.673192 kubelet[2203]: I0317 20:58:10.673102 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:58:10.698594 kubelet[2203]: I0317 20:58:10.678244 2203 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 20:58:10.699105 kubelet[2203]: I0317 20:58:10.678333 2203 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:58:10.710630 kubelet[2203]: I0317 20:58:10.710598 2203 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:58:10.711131 kubelet[2203]: I0317 20:58:10.711106 2203 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:58:10.712179 kubelet[2203]: I0317 20:58:10.693739 2203 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:58:10.712351 kubelet[2203]: I0317 20:58:10.712315 2203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:58:10.720037 kubelet[2203]: I0317 20:58:10.720010 2203 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:58:10.738171 kubelet[2203]: I0317 20:58:10.738124 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:58:10.739640 kubelet[2203]: I0317 20:58:10.739616 2203 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 20:58:10.739866 kubelet[2203]: I0317 20:58:10.739840 2203 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 20:58:10.740043 kubelet[2203]: I0317 20:58:10.740017 2203 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 20:58:10.740287 kubelet[2203]: E0317 20:58:10.740233 2203 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:58:10.824923 kubelet[2203]: I0317 20:58:10.824862 2203 kubelet_node_status.go:73] "Attempting to register node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:10.835440 sshd[2196]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.218.122.171 user=root Mar 17 20:58:10.841937 kubelet[2203]: I0317 20:58:10.841861 2203 kubelet_node_status.go:112] "Node was previously registered" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:10.842367 kubelet[2203]: I0317 20:58:10.842294 2203 kubelet_node_status.go:76] "Successfully registered node" node="srv-be9pf.gb1.brightbox.com" Mar 17 20:58:10.843434 kubelet[2203]: E0317 20:58:10.842502 2203 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903558 2203 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903593 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903647 2203 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903888 2203 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903908 2203 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 20:58:10.904857 kubelet[2203]: I0317 20:58:10.903953 2203 policy_none.go:49] "None 
policy: Start" Mar 17 20:58:10.905869 kubelet[2203]: I0317 20:58:10.905833 2203 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 20:58:10.905962 kubelet[2203]: I0317 20:58:10.905891 2203 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:58:10.906407 kubelet[2203]: I0317 20:58:10.906369 2203 state_mem.go:75] "Updated machine memory state" Mar 17 20:58:10.911816 kubelet[2203]: I0317 20:58:10.910619 2203 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:58:10.911816 kubelet[2203]: I0317 20:58:10.910926 2203 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:58:10.911816 kubelet[2203]: I0317 20:58:10.911551 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:58:11.044494 kubelet[2203]: I0317 20:58:11.044362 2203 topology_manager.go:215] "Topology Admit Handler" podUID="0746a9f4bbb9cd41dd140a6e34047c58" podNamespace="kube-system" podName="kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.045144 kubelet[2203]: I0317 20:58:11.045097 2203 topology_manager.go:215] "Topology Admit Handler" podUID="ef3f78f185cbbd49d47302fd89106b73" podNamespace="kube-system" podName="kube-scheduler-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.045418 kubelet[2203]: I0317 20:58:11.045387 2203 topology_manager.go:215] "Topology Admit Handler" podUID="f0b78320d261f45f8b5a41263f3ad26a" podNamespace="kube-system" podName="kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.063456 kubelet[2203]: W0317 20:58:11.063385 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:58:11.065937 kubelet[2203]: W0317 20:58:11.065880 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Mar 17 20:58:11.066311 kubelet[2203]: W0317 20:58:11.066244 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:58:11.125797 kubelet[2203]: I0317 20:58:11.125745 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-ca-certs\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126003 kubelet[2203]: I0317 20:58:11.125807 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-k8s-certs\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126003 kubelet[2203]: I0317 20:58:11.125843 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-kubeconfig\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126003 kubelet[2203]: I0317 20:58:11.125874 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " 
pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126003 kubelet[2203]: I0317 20:58:11.125904 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef3f78f185cbbd49d47302fd89106b73-kubeconfig\") pod \"kube-scheduler-srv-be9pf.gb1.brightbox.com\" (UID: \"ef3f78f185cbbd49d47302fd89106b73\") " pod="kube-system/kube-scheduler-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126003 kubelet[2203]: I0317 20:58:11.125976 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-k8s-certs\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126366 kubelet[2203]: I0317 20:58:11.126007 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0746a9f4bbb9cd41dd140a6e34047c58-flexvolume-dir\") pod \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" (UID: \"0746a9f4bbb9cd41dd140a6e34047c58\") " pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126366 kubelet[2203]: I0317 20:58:11.126033 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-ca-certs\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.126366 kubelet[2203]: I0317 20:58:11.126114 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f0b78320d261f45f8b5a41263f3ad26a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-be9pf.gb1.brightbox.com\" (UID: \"f0b78320d261f45f8b5a41263f3ad26a\") " pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.436173 sudo[2214]: pam_unix(sudo:session): session closed for user root Mar 17 20:58:11.659878 kubelet[2203]: I0317 20:58:11.659792 2203 apiserver.go:52] "Watching apiserver" Mar 17 20:58:11.699928 kubelet[2203]: I0317 20:58:11.699616 2203 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:58:11.803016 kubelet[2203]: W0317 20:58:11.802976 2203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:58:11.803349 kubelet[2203]: E0317 20:58:11.803311 2203 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-be9pf.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" Mar 17 20:58:11.837088 kubelet[2203]: I0317 20:58:11.836992 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-be9pf.gb1.brightbox.com" podStartSLOduration=0.836962077 podStartE2EDuration="836.962077ms" podCreationTimestamp="2025-03-17 20:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:11.832331418 +0000 UTC m=+1.438072976" watchObservedRunningTime="2025-03-17 20:58:11.836962077 +0000 UTC m=+1.442703629" Mar 17 20:58:11.848485 kubelet[2203]: I0317 20:58:11.848398 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-be9pf.gb1.brightbox.com" podStartSLOduration=0.848380589 podStartE2EDuration="848.380589ms" podCreationTimestamp="2025-03-17 20:58:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:11.846676433 +0000 UTC m=+1.452418004" watchObservedRunningTime="2025-03-17 20:58:11.848380589 +0000 UTC m=+1.454122147" Mar 17 20:58:11.876250 kubelet[2203]: I0317 20:58:11.876186 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-be9pf.gb1.brightbox.com" podStartSLOduration=0.87616931 podStartE2EDuration="876.16931ms" podCreationTimestamp="2025-03-17 20:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:11.862004404 +0000 UTC m=+1.467745956" watchObservedRunningTime="2025-03-17 20:58:11.87616931 +0000 UTC m=+1.481910870" Mar 17 20:58:13.248771 sshd[2196]: Failed password for root from 103.218.122.171 port 49484 ssh2 Mar 17 20:58:13.658954 sudo[1463]: pam_unix(sudo:session): session closed for user root Mar 17 20:58:13.805270 sshd[1459]: pam_unix(sshd:session): session closed for user core Mar 17 20:58:13.811535 systemd-logind[1292]: Session 5 logged out. Waiting for processes to exit. Mar 17 20:58:13.813646 systemd[1]: sshd@4-10.243.78.42:22-139.178.89.65:40992.service: Deactivated successfully. Mar 17 20:58:13.816403 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 20:58:13.819125 systemd-logind[1292]: Removed session 5. Mar 17 20:58:14.416034 sshd[2196]: Connection closed by authenticating user root 103.218.122.171 port 49484 [preauth] Mar 17 20:58:14.418435 systemd[1]: sshd@5-10.243.78.42:22-103.218.122.171:49484.service: Deactivated successfully. 
Mar 17 20:58:23.346751 kubelet[2203]: I0317 20:58:23.346673 2203 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 20:58:23.349269 env[1302]: time="2025-03-17T20:58:23.349130356Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 20:58:23.350024 kubelet[2203]: I0317 20:58:23.349593 2203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 20:58:23.849457 kubelet[2203]: I0317 20:58:23.849348 2203 topology_manager.go:215] "Topology Admit Handler" podUID="872b91af-2026-4436-b1a3-f039a7948d29" podNamespace="kube-system" podName="kube-proxy-crc8f" Mar 17 20:58:23.862950 kubelet[2203]: I0317 20:58:23.862891 2203 topology_manager.go:215] "Topology Admit Handler" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" podNamespace="kube-system" podName="cilium-79gvr" Mar 17 20:58:23.903771 kubelet[2203]: I0317 20:58:23.903686 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-bpf-maps\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.903771 kubelet[2203]: I0317 20:58:23.903771 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-kernel\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.903819 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-etc-cni-netd\") pod \"cilium-79gvr\" (UID: 
\"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.903873 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-lib-modules\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.903903 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/872b91af-2026-4436-b1a3-f039a7948d29-xtables-lock\") pod \"kube-proxy-crc8f\" (UID: \"872b91af-2026-4436-b1a3-f039a7948d29\") " pod="kube-system/kube-proxy-crc8f" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.903945 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhxdm\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-kube-api-access-hhxdm\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.904008 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-hostproc\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904165 kubelet[2203]: I0317 20:58:23.904054 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-net\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 
20:58:23.904146 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-hubble-tls\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 20:58:23.904192 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-run\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 20:58:23.904235 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-xtables-lock\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 20:58:23.904267 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-config-path\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 20:58:23.904302 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f89049d1-0772-473c-a5f5-ad0d957f2056-clustermesh-secrets\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904597 kubelet[2203]: I0317 20:58:23.904343 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/872b91af-2026-4436-b1a3-f039a7948d29-kube-proxy\") pod \"kube-proxy-crc8f\" (UID: \"872b91af-2026-4436-b1a3-f039a7948d29\") " pod="kube-system/kube-proxy-crc8f" Mar 17 20:58:23.904997 kubelet[2203]: I0317 20:58:23.904413 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/872b91af-2026-4436-b1a3-f039a7948d29-lib-modules\") pod \"kube-proxy-crc8f\" (UID: \"872b91af-2026-4436-b1a3-f039a7948d29\") " pod="kube-system/kube-proxy-crc8f" Mar 17 20:58:23.904997 kubelet[2203]: I0317 20:58:23.904498 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n597j\" (UniqueName: \"kubernetes.io/projected/872b91af-2026-4436-b1a3-f039a7948d29-kube-api-access-n597j\") pod \"kube-proxy-crc8f\" (UID: \"872b91af-2026-4436-b1a3-f039a7948d29\") " pod="kube-system/kube-proxy-crc8f" Mar 17 20:58:23.904997 kubelet[2203]: I0317 20:58:23.904526 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-cgroup\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:23.904997 kubelet[2203]: I0317 20:58:23.904559 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cni-path\") pod \"cilium-79gvr\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " pod="kube-system/cilium-79gvr" Mar 17 20:58:24.160677 env[1302]: time="2025-03-17T20:58:24.158775481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crc8f,Uid:872b91af-2026-4436-b1a3-f039a7948d29,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:24.171904 env[1302]: 
time="2025-03-17T20:58:24.171840963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79gvr,Uid:f89049d1-0772-473c-a5f5-ad0d957f2056,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:24.232723 env[1302]: time="2025-03-17T20:58:24.232408281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:24.233043 env[1302]: time="2025-03-17T20:58:24.232940723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:24.233043 env[1302]: time="2025-03-17T20:58:24.232982554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:24.233803 env[1302]: time="2025-03-17T20:58:24.233741798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4 pid=2288 runtime=io.containerd.runc.v2 Mar 17 20:58:24.239374 env[1302]: time="2025-03-17T20:58:24.239284556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:24.239529 env[1302]: time="2025-03-17T20:58:24.239480857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:24.239627 env[1302]: time="2025-03-17T20:58:24.239550844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:24.239897 env[1302]: time="2025-03-17T20:58:24.239833352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5cb4869c5cbfb2af5beb290193d988663fdf58e367e20516c2fb1f82a97c6f0 pid=2304 runtime=io.containerd.runc.v2 Mar 17 20:58:24.372523 env[1302]: time="2025-03-17T20:58:24.372454881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crc8f,Uid:872b91af-2026-4436-b1a3-f039a7948d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5cb4869c5cbfb2af5beb290193d988663fdf58e367e20516c2fb1f82a97c6f0\"" Mar 17 20:58:24.380354 env[1302]: time="2025-03-17T20:58:24.380304982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-79gvr,Uid:f89049d1-0772-473c-a5f5-ad0d957f2056,Namespace:kube-system,Attempt:0,} returns sandbox id \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\"" Mar 17 20:58:24.383534 env[1302]: time="2025-03-17T20:58:24.383198112Z" level=info msg="CreateContainer within sandbox \"e5cb4869c5cbfb2af5beb290193d988663fdf58e367e20516c2fb1f82a97c6f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 20:58:24.399418 env[1302]: time="2025-03-17T20:58:24.399344361Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 20:58:24.442584 env[1302]: time="2025-03-17T20:58:24.442530191Z" level=info msg="CreateContainer within sandbox \"e5cb4869c5cbfb2af5beb290193d988663fdf58e367e20516c2fb1f82a97c6f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8cf16c6a426cd1130dff22573abcd89b855bc9d38aeb6632c387f667060606fd\"" Mar 17 20:58:24.443896 env[1302]: time="2025-03-17T20:58:24.443851458Z" level=info msg="StartContainer for \"8cf16c6a426cd1130dff22573abcd89b855bc9d38aeb6632c387f667060606fd\"" Mar 17 20:58:24.447987 kubelet[2203]: I0317 20:58:24.447944 2203 
topology_manager.go:215] "Topology Admit Handler" podUID="f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" podNamespace="kube-system" podName="cilium-operator-599987898-gptvh" Mar 17 20:58:24.510611 kubelet[2203]: I0317 20:58:24.510556 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzj7b\" (UniqueName: \"kubernetes.io/projected/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-kube-api-access-xzj7b\") pod \"cilium-operator-599987898-gptvh\" (UID: \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\") " pod="kube-system/cilium-operator-599987898-gptvh" Mar 17 20:58:24.510802 kubelet[2203]: I0317 20:58:24.510619 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-cilium-config-path\") pod \"cilium-operator-599987898-gptvh\" (UID: \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\") " pod="kube-system/cilium-operator-599987898-gptvh" Mar 17 20:58:24.580937 env[1302]: time="2025-03-17T20:58:24.580883042Z" level=info msg="StartContainer for \"8cf16c6a426cd1130dff22573abcd89b855bc9d38aeb6632c387f667060606fd\" returns successfully" Mar 17 20:58:24.761897 env[1302]: time="2025-03-17T20:58:24.760998380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gptvh,Uid:f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc,Namespace:kube-system,Attempt:0,}" Mar 17 20:58:24.792808 env[1302]: time="2025-03-17T20:58:24.792571447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:58:24.793532 env[1302]: time="2025-03-17T20:58:24.793489314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:58:24.793715 env[1302]: time="2025-03-17T20:58:24.793674357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:58:24.794945 env[1302]: time="2025-03-17T20:58:24.794745640Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae pid=2406 runtime=io.containerd.runc.v2 Mar 17 20:58:24.950530 env[1302]: time="2025-03-17T20:58:24.950397700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gptvh,Uid:f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\"" Mar 17 20:58:30.778695 kubelet[2203]: I0317 20:58:30.778293 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crc8f" podStartSLOduration=7.778196436 podStartE2EDuration="7.778196436s" podCreationTimestamp="2025-03-17 20:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:24.844656711 +0000 UTC m=+14.450398289" watchObservedRunningTime="2025-03-17 20:58:30.778196436 +0000 UTC m=+20.383937990" Mar 17 20:58:33.785023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869971552.mount: Deactivated successfully. 
Mar 17 20:58:38.520304 env[1302]: time="2025-03-17T20:58:38.520075205Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:38.526579 env[1302]: time="2025-03-17T20:58:38.525626468Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:38.530093 env[1302]: time="2025-03-17T20:58:38.529971933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 20:58:38.531145 env[1302]: time="2025-03-17T20:58:38.531107524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:38.533754 env[1302]: time="2025-03-17T20:58:38.533716162Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 20:58:38.538664 env[1302]: time="2025-03-17T20:58:38.538626813Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:58:38.570018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060747475.mount: Deactivated successfully. 
Mar 17 20:58:38.576106 env[1302]: time="2025-03-17T20:58:38.575988215Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\"" Mar 17 20:58:38.579342 env[1302]: time="2025-03-17T20:58:38.579292477Z" level=info msg="StartContainer for \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\"" Mar 17 20:58:38.788182 env[1302]: time="2025-03-17T20:58:38.786933003Z" level=info msg="StartContainer for \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\" returns successfully" Mar 17 20:58:38.838653 env[1302]: time="2025-03-17T20:58:38.838579037Z" level=info msg="shim disconnected" id=8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11 Mar 17 20:58:38.838996 env[1302]: time="2025-03-17T20:58:38.838951727Z" level=warning msg="cleaning up after shim disconnected" id=8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11 namespace=k8s.io Mar 17 20:58:38.839181 env[1302]: time="2025-03-17T20:58:38.839153190Z" level=info msg="cleaning up dead shim" Mar 17 20:58:38.850525 env[1302]: time="2025-03-17T20:58:38.850447592Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:58:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2611 runtime=io.containerd.runc.v2\n" Mar 17 20:58:38.999102 env[1302]: time="2025-03-17T20:58:38.992251514Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:58:39.016566 env[1302]: time="2025-03-17T20:58:39.016483209Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\"" Mar 17 20:58:39.027485 env[1302]: time="2025-03-17T20:58:39.027433907Z" level=info msg="StartContainer for \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\"" Mar 17 20:58:39.107623 env[1302]: time="2025-03-17T20:58:39.107490660Z" level=info msg="StartContainer for \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\" returns successfully" Mar 17 20:58:39.123357 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 20:58:39.126119 systemd[1]: Stopped systemd-sysctl.service. Mar 17 20:58:39.126649 systemd[1]: Stopping systemd-sysctl.service... Mar 17 20:58:39.130271 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:58:39.146352 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:58:39.163892 env[1302]: time="2025-03-17T20:58:39.163826147Z" level=info msg="shim disconnected" id=d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8 Mar 17 20:58:39.164376 env[1302]: time="2025-03-17T20:58:39.164347245Z" level=warning msg="cleaning up after shim disconnected" id=d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8 namespace=k8s.io Mar 17 20:58:39.164546 env[1302]: time="2025-03-17T20:58:39.164496279Z" level=info msg="cleaning up dead shim" Mar 17 20:58:39.175476 env[1302]: time="2025-03-17T20:58:39.175440627Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:58:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2678 runtime=io.containerd.runc.v2\n" Mar 17 20:58:39.564717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11-rootfs.mount: Deactivated successfully. 
Mar 17 20:58:39.998453 env[1302]: time="2025-03-17T20:58:39.998288289Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:58:40.022318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321424440.mount: Deactivated successfully. Mar 17 20:58:40.037113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903097434.mount: Deactivated successfully. Mar 17 20:58:40.046515 env[1302]: time="2025-03-17T20:58:40.046430957Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\"" Mar 17 20:58:40.049184 env[1302]: time="2025-03-17T20:58:40.049135426Z" level=info msg="StartContainer for \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\"" Mar 17 20:58:40.155629 env[1302]: time="2025-03-17T20:58:40.155570208Z" level=info msg="StartContainer for \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\" returns successfully" Mar 17 20:58:40.184902 env[1302]: time="2025-03-17T20:58:40.184843694Z" level=info msg="shim disconnected" id=00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622 Mar 17 20:58:40.185358 env[1302]: time="2025-03-17T20:58:40.185321730Z" level=warning msg="cleaning up after shim disconnected" id=00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622 namespace=k8s.io Mar 17 20:58:40.185492 env[1302]: time="2025-03-17T20:58:40.185453312Z" level=info msg="cleaning up dead shim" Mar 17 20:58:40.199860 env[1302]: time="2025-03-17T20:58:40.199808509Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:58:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2736 runtime=io.containerd.runc.v2\n" Mar 17 20:58:40.812366 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2289801910.mount: Deactivated successfully. Mar 17 20:58:41.023403 env[1302]: time="2025-03-17T20:58:41.022881814Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 20:58:41.076635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629053297.mount: Deactivated successfully. Mar 17 20:58:41.083196 env[1302]: time="2025-03-17T20:58:41.083127799Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\"" Mar 17 20:58:41.085814 env[1302]: time="2025-03-17T20:58:41.085775886Z" level=info msg="StartContainer for \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\"" Mar 17 20:58:41.187469 env[1302]: time="2025-03-17T20:58:41.183525581Z" level=info msg="StartContainer for \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\" returns successfully" Mar 17 20:58:41.264174 env[1302]: time="2025-03-17T20:58:41.264092673Z" level=info msg="shim disconnected" id=2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be Mar 17 20:58:41.264656 env[1302]: time="2025-03-17T20:58:41.264617438Z" level=warning msg="cleaning up after shim disconnected" id=2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be namespace=k8s.io Mar 17 20:58:41.264941 env[1302]: time="2025-03-17T20:58:41.264907737Z" level=info msg="cleaning up dead shim" Mar 17 20:58:41.301462 env[1302]: time="2025-03-17T20:58:41.301386382Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:58:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2791 runtime=io.containerd.runc.v2\n" Mar 17 20:58:42.008197 env[1302]: time="2025-03-17T20:58:42.007836785Z" 
level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 20:58:42.028567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458646897.mount: Deactivated successfully. Mar 17 20:58:42.047258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249703841.mount: Deactivated successfully. Mar 17 20:58:42.062525 env[1302]: time="2025-03-17T20:58:42.062425000Z" level=info msg="CreateContainer within sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\"" Mar 17 20:58:42.065770 env[1302]: time="2025-03-17T20:58:42.065732021Z" level=info msg="StartContainer for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\"" Mar 17 20:58:42.190090 env[1302]: time="2025-03-17T20:58:42.183260896Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:42.190090 env[1302]: time="2025-03-17T20:58:42.186385216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:42.190090 env[1302]: time="2025-03-17T20:58:42.190007659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:58:42.195095 env[1302]: time="2025-03-17T20:58:42.191517590Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 20:58:42.198878 env[1302]: time="2025-03-17T20:58:42.198821056Z" level=info msg="StartContainer for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" returns successfully" Mar 17 20:58:42.211907 env[1302]: time="2025-03-17T20:58:42.211846379Z" level=info msg="CreateContainer within sandbox \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 20:58:42.245416 env[1302]: time="2025-03-17T20:58:42.245338925Z" level=info msg="CreateContainer within sandbox \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\"" Mar 17 20:58:42.248475 env[1302]: time="2025-03-17T20:58:42.248437023Z" level=info msg="StartContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\"" Mar 17 20:58:42.417790 env[1302]: time="2025-03-17T20:58:42.417641999Z" level=info msg="StartContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" returns successfully" Mar 17 20:58:42.531666 kubelet[2203]: I0317 20:58:42.531590 2203 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 20:58:42.613566 kubelet[2203]: I0317 20:58:42.613478 2203 topology_manager.go:215] "Topology Admit Handler" podUID="c629e809-777a-494c-b1ad-48026db01ac3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j6fvt" Mar 17 20:58:42.636944 kubelet[2203]: I0317 20:58:42.636896 2203 topology_manager.go:215] "Topology Admit Handler" podUID="d1943e01-48d1-499f-8a6f-befbe4188597" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wr5sx" Mar 17 20:58:42.762621 
kubelet[2203]: I0317 20:58:42.762548 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97q6\" (UniqueName: \"kubernetes.io/projected/c629e809-777a-494c-b1ad-48026db01ac3-kube-api-access-s97q6\") pod \"coredns-7db6d8ff4d-j6fvt\" (UID: \"c629e809-777a-494c-b1ad-48026db01ac3\") " pod="kube-system/coredns-7db6d8ff4d-j6fvt"
Mar 17 20:58:42.762885 kubelet[2203]: I0317 20:58:42.762671 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1943e01-48d1-499f-8a6f-befbe4188597-config-volume\") pod \"coredns-7db6d8ff4d-wr5sx\" (UID: \"d1943e01-48d1-499f-8a6f-befbe4188597\") " pod="kube-system/coredns-7db6d8ff4d-wr5sx"
Mar 17 20:58:42.762885 kubelet[2203]: I0317 20:58:42.762722 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c629e809-777a-494c-b1ad-48026db01ac3-config-volume\") pod \"coredns-7db6d8ff4d-j6fvt\" (UID: \"c629e809-777a-494c-b1ad-48026db01ac3\") " pod="kube-system/coredns-7db6d8ff4d-j6fvt"
Mar 17 20:58:42.762885 kubelet[2203]: I0317 20:58:42.762788 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g4b2\" (UniqueName: \"kubernetes.io/projected/d1943e01-48d1-499f-8a6f-befbe4188597-kube-api-access-6g4b2\") pod \"coredns-7db6d8ff4d-wr5sx\" (UID: \"d1943e01-48d1-499f-8a6f-befbe4188597\") " pod="kube-system/coredns-7db6d8ff4d-wr5sx"
Mar 17 20:58:43.159304 kubelet[2203]: I0317 20:58:43.159111 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-79gvr" podStartSLOduration=6.020671595 podStartE2EDuration="20.159042531s" podCreationTimestamp="2025-03-17 20:58:23 +0000 UTC" firstStartedPulling="2025-03-17 20:58:24.394778941 +0000 UTC m=+14.000520482" lastFinishedPulling="2025-03-17 20:58:38.533149864 +0000 UTC m=+28.138891418" observedRunningTime="2025-03-17 20:58:43.159032787 +0000 UTC m=+32.764774345" watchObservedRunningTime="2025-03-17 20:58:43.159042531 +0000 UTC m=+32.764784089"
Mar 17 20:58:43.223663 env[1302]: time="2025-03-17T20:58:43.223027870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j6fvt,Uid:c629e809-777a-494c-b1ad-48026db01ac3,Namespace:kube-system,Attempt:0,}"
Mar 17 20:58:43.242633 env[1302]: time="2025-03-17T20:58:43.242562816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wr5sx,Uid:d1943e01-48d1-499f-8a6f-befbe4188597,Namespace:kube-system,Attempt:0,}"
Mar 17 20:58:46.556210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 20:58:46.567398 systemd-networkd[1079]: cilium_host: Link UP
Mar 17 20:58:46.570810 systemd-networkd[1079]: cilium_net: Link UP
Mar 17 20:58:46.570974 systemd-networkd[1079]: cilium_net: Gained carrier
Mar 17 20:58:46.576112 systemd-networkd[1079]: cilium_host: Gained carrier
Mar 17 20:58:46.577165 systemd-networkd[1079]: cilium_host: Gained IPv6LL
Mar 17 20:58:46.757147 systemd-networkd[1079]: cilium_vxlan: Link UP
Mar 17 20:58:46.757164 systemd-networkd[1079]: cilium_vxlan: Gained carrier
Mar 17 20:58:46.890378 systemd-networkd[1079]: cilium_net: Gained IPv6LL
Mar 17 20:58:47.328113 kernel: NET: Registered PF_ALG protocol family
Mar 17 20:58:47.914584 systemd-networkd[1079]: cilium_vxlan: Gained IPv6LL
Mar 17 20:58:48.431272 systemd-networkd[1079]: lxc_health: Link UP
Mar 17 20:58:48.438198 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 20:58:48.437890 systemd-networkd[1079]: lxc_health: Gained carrier
Mar 17 20:58:48.896157 systemd-networkd[1079]: lxcde043c5909f7: Link UP
Mar 17 20:58:48.907475 systemd-networkd[1079]: lxcc95e6887fa88: Link UP
Mar 17 20:58:48.918109 kernel: eth0: renamed from tmp90422
Mar 17 20:58:48.924178 kernel: eth0: renamed from tmp83e81
Mar 17 20:58:48.935739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcde043c5909f7: link becomes ready
Mar 17 20:58:48.935863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc95e6887fa88: link becomes ready
Mar 17 20:58:48.934773 systemd-networkd[1079]: lxcde043c5909f7: Gained carrier
Mar 17 20:58:48.936637 systemd-networkd[1079]: lxcc95e6887fa88: Gained carrier
Mar 17 20:58:49.898398 systemd-networkd[1079]: lxc_health: Gained IPv6LL
Mar 17 20:58:50.154419 systemd-networkd[1079]: lxcde043c5909f7: Gained IPv6LL
Mar 17 20:58:50.236197 kubelet[2203]: I0317 20:58:50.235991 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gptvh" podStartSLOduration=8.991236492 podStartE2EDuration="26.235810134s" podCreationTimestamp="2025-03-17 20:58:24 +0000 UTC" firstStartedPulling="2025-03-17 20:58:24.952722037 +0000 UTC m=+14.558463579" lastFinishedPulling="2025-03-17 20:58:42.197295668 +0000 UTC m=+31.803037221" observedRunningTime="2025-03-17 20:58:43.273988722 +0000 UTC m=+32.879730272" watchObservedRunningTime="2025-03-17 20:58:50.235810134 +0000 UTC m=+39.841551683"
Mar 17 20:58:50.474404 systemd-networkd[1079]: lxcc95e6887fa88: Gained IPv6LL
Mar 17 20:58:54.419252 env[1302]: time="2025-03-17T20:58:54.419030702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 20:58:54.420873 env[1302]: time="2025-03-17T20:58:54.420806880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 20:58:54.421120 env[1302]: time="2025-03-17T20:58:54.421077464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 20:58:54.421853 env[1302]: time="2025-03-17T20:58:54.421790725Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83e81f9570406bedf17d77ac85d4cf532c9fe828b47b959148bec70bb18ebf2e pid=3375 runtime=io.containerd.runc.v2
Mar 17 20:58:54.548097 systemd[1]: run-containerd-runc-k8s.io-83e81f9570406bedf17d77ac85d4cf532c9fe828b47b959148bec70bb18ebf2e-runc.qw7yMd.mount: Deactivated successfully.
Mar 17 20:58:54.556366 env[1302]: time="2025-03-17T20:58:54.556280599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 20:58:54.556603 env[1302]: time="2025-03-17T20:58:54.556558817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 20:58:54.556792 env[1302]: time="2025-03-17T20:58:54.556749123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 20:58:54.559610 env[1302]: time="2025-03-17T20:58:54.559560766Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90422dd063e9b30793b1b52145e79d16f5622781a562055844b7cf92d37d2693 pid=3399 runtime=io.containerd.runc.v2
Mar 17 20:58:54.708387 env[1302]: time="2025-03-17T20:58:54.708257795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j6fvt,Uid:c629e809-777a-494c-b1ad-48026db01ac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"83e81f9570406bedf17d77ac85d4cf532c9fe828b47b959148bec70bb18ebf2e\""
Mar 17 20:58:54.712414 env[1302]: time="2025-03-17T20:58:54.712328904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wr5sx,Uid:d1943e01-48d1-499f-8a6f-befbe4188597,Namespace:kube-system,Attempt:0,} returns sandbox id \"90422dd063e9b30793b1b52145e79d16f5622781a562055844b7cf92d37d2693\""
Mar 17 20:58:54.721520 env[1302]: time="2025-03-17T20:58:54.721472955Z" level=info msg="CreateContainer within sandbox \"83e81f9570406bedf17d77ac85d4cf532c9fe828b47b959148bec70bb18ebf2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 20:58:54.721737 env[1302]: time="2025-03-17T20:58:54.721473412Z" level=info msg="CreateContainer within sandbox \"90422dd063e9b30793b1b52145e79d16f5622781a562055844b7cf92d37d2693\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 20:58:54.751038 env[1302]: time="2025-03-17T20:58:54.750940942Z" level=info msg="CreateContainer within sandbox \"83e81f9570406bedf17d77ac85d4cf532c9fe828b47b959148bec70bb18ebf2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20322b8ed7dd3d13e9a6c915aa2b95108bb454f3be8517d96fa64c64b0e8bfda\""
Mar 17 20:58:54.753097 env[1302]: time="2025-03-17T20:58:54.752233283Z" level=info msg="StartContainer for \"20322b8ed7dd3d13e9a6c915aa2b95108bb454f3be8517d96fa64c64b0e8bfda\""
Mar 17 20:58:54.754025 env[1302]: time="2025-03-17T20:58:54.753966424Z" level=info msg="CreateContainer within sandbox \"90422dd063e9b30793b1b52145e79d16f5622781a562055844b7cf92d37d2693\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"279a69cb43545747fe67cf5a6cc372a8fe5adb2063d6e5a69f933ae0a8ae90cf\""
Mar 17 20:58:54.754878 env[1302]: time="2025-03-17T20:58:54.754843985Z" level=info msg="StartContainer for \"279a69cb43545747fe67cf5a6cc372a8fe5adb2063d6e5a69f933ae0a8ae90cf\""
Mar 17 20:58:54.873987 env[1302]: time="2025-03-17T20:58:54.873888553Z" level=info msg="StartContainer for \"20322b8ed7dd3d13e9a6c915aa2b95108bb454f3be8517d96fa64c64b0e8bfda\" returns successfully"
Mar 17 20:58:54.885335 env[1302]: time="2025-03-17T20:58:54.885277207Z" level=info msg="StartContainer for \"279a69cb43545747fe67cf5a6cc372a8fe5adb2063d6e5a69f933ae0a8ae90cf\" returns successfully"
Mar 17 20:58:55.121887 kubelet[2203]: I0317 20:58:55.121678 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wr5sx" podStartSLOduration=31.121649622 podStartE2EDuration="31.121649622s" podCreationTimestamp="2025-03-17 20:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:55.0938259 +0000 UTC m=+44.699567461" watchObservedRunningTime="2025-03-17 20:58:55.121649622 +0000 UTC m=+44.727391172"
Mar 17 20:58:56.089899 kubelet[2203]: I0317 20:58:56.089756 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j6fvt" podStartSLOduration=32.089616479 podStartE2EDuration="32.089616479s" podCreationTimestamp="2025-03-17 20:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:58:55.122752302 +0000 UTC m=+44.728493850" watchObservedRunningTime="2025-03-17 20:58:56.089616479 +0000 UTC m=+45.695358037"
Mar 17 20:59:22.234261 systemd[1]: Started sshd@6-10.243.78.42:22-139.178.89.65:43672.service.
Mar 17 20:59:23.150528 sshd[3544]: Accepted publickey for core from 139.178.89.65 port 43672 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:23.152233 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:23.165111 systemd-logind[1292]: New session 6 of user core.
Mar 17 20:59:23.167036 systemd[1]: Started session-6.scope.
Mar 17 20:59:23.993534 sshd[3544]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:23.999110 systemd-logind[1292]: Session 6 logged out. Waiting for processes to exit.
Mar 17 20:59:24.000458 systemd[1]: sshd@6-10.243.78.42:22-139.178.89.65:43672.service: Deactivated successfully.
Mar 17 20:59:24.002556 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 20:59:24.005460 systemd-logind[1292]: Removed session 6.
Mar 17 20:59:29.142583 systemd[1]: Started sshd@7-10.243.78.42:22-139.178.89.65:43678.service.
Mar 17 20:59:30.045022 sshd[3560]: Accepted publickey for core from 139.178.89.65 port 43678 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:30.048374 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:30.055452 systemd-logind[1292]: New session 7 of user core.
Mar 17 20:59:30.056321 systemd[1]: Started session-7.scope.
Mar 17 20:59:30.776904 sshd[3560]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:30.780520 systemd[1]: sshd@7-10.243.78.42:22-139.178.89.65:43678.service: Deactivated successfully.
Mar 17 20:59:30.781890 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 20:59:30.784341 systemd-logind[1292]: Session 7 logged out. Waiting for processes to exit.
Mar 17 20:59:30.785993 systemd-logind[1292]: Removed session 7.
Mar 17 20:59:35.922488 systemd[1]: Started sshd@8-10.243.78.42:22-139.178.89.65:38222.service.
Mar 17 20:59:36.815220 sshd[3574]: Accepted publickey for core from 139.178.89.65 port 38222 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:36.817207 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:36.824569 systemd-logind[1292]: New session 8 of user core.
Mar 17 20:59:36.825719 systemd[1]: Started session-8.scope.
Mar 17 20:59:37.550933 sshd[3574]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:37.554999 systemd[1]: sshd@8-10.243.78.42:22-139.178.89.65:38222.service: Deactivated successfully.
Mar 17 20:59:37.556360 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 20:59:37.557400 systemd-logind[1292]: Session 8 logged out. Waiting for processes to exit.
Mar 17 20:59:37.558917 systemd-logind[1292]: Removed session 8.
Mar 17 20:59:42.693984 systemd[1]: Started sshd@9-10.243.78.42:22-139.178.89.65:35092.service.
Mar 17 20:59:43.585586 sshd[3588]: Accepted publickey for core from 139.178.89.65 port 35092 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:43.587696 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:43.596020 systemd[1]: Started session-9.scope.
Mar 17 20:59:43.596635 systemd-logind[1292]: New session 9 of user core.
Mar 17 20:59:44.310124 sshd[3588]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:44.314576 systemd[1]: sshd@9-10.243.78.42:22-139.178.89.65:35092.service: Deactivated successfully.
Mar 17 20:59:44.316127 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 20:59:44.317151 systemd-logind[1292]: Session 9 logged out. Waiting for processes to exit.
Mar 17 20:59:44.318599 systemd-logind[1292]: Removed session 9.
Mar 17 20:59:44.456396 systemd[1]: Started sshd@10-10.243.78.42:22-139.178.89.65:35104.service.
Mar 17 20:59:45.349846 sshd[3601]: Accepted publickey for core from 139.178.89.65 port 35104 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:45.352803 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:45.361412 systemd[1]: Started session-10.scope.
Mar 17 20:59:45.362224 systemd-logind[1292]: New session 10 of user core.
Mar 17 20:59:46.154867 sshd[3601]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:46.165176 systemd[1]: sshd@10-10.243.78.42:22-139.178.89.65:35104.service: Deactivated successfully.
Mar 17 20:59:46.166465 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 20:59:46.168138 systemd-logind[1292]: Session 10 logged out. Waiting for processes to exit.
Mar 17 20:59:46.169826 systemd-logind[1292]: Removed session 10.
Mar 17 20:59:46.301746 systemd[1]: Started sshd@11-10.243.78.42:22-139.178.89.65:35118.service.
Mar 17 20:59:47.200891 sshd[3612]: Accepted publickey for core from 139.178.89.65 port 35118 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:47.203714 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:47.212478 systemd-logind[1292]: New session 11 of user core.
Mar 17 20:59:47.213167 systemd[1]: Started session-11.scope.
Mar 17 20:59:47.407620 update_engine[1294]: I0317 20:59:47.407394 1294 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 20:59:47.407620 update_engine[1294]: I0317 20:59:47.407511 1294 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 20:59:47.411487 update_engine[1294]: I0317 20:59:47.410336 1294 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 20:59:47.411487 update_engine[1294]: I0317 20:59:47.411386 1294 omaha_request_params.cc:62] Current group set to lts
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414719 1294 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414749 1294 update_attempter.cc:643] Scheduling an action processor start.
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414784 1294 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414846 1294 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414981 1294 omaha_request_action.cc:270] Posting an Omaha request to disabled
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.414994 1294 omaha_request_action.cc:271] Request:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]:
Mar 17 20:59:47.415112 update_engine[1294]: I0317 20:59:47.415001 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 20:59:47.425209 update_engine[1294]: I0317 20:59:47.424435 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 20:59:47.425992 update_engine[1294]: I0317 20:59:47.425681 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 20:59:47.434946 update_engine[1294]: E0317 20:59:47.434449 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 20:59:47.434946 update_engine[1294]: I0317 20:59:47.434689 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 17 20:59:47.443697 locksmithd[1335]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 20:59:47.927485 sshd[3612]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:47.932340 systemd-logind[1292]: Session 11 logged out. Waiting for processes to exit.
Mar 17 20:59:47.933178 systemd[1]: sshd@11-10.243.78.42:22-139.178.89.65:35118.service: Deactivated successfully.
Mar 17 20:59:47.934375 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 20:59:47.935022 systemd-logind[1292]: Removed session 11.
Mar 17 20:59:53.076486 systemd[1]: Started sshd@12-10.243.78.42:22-139.178.89.65:53506.service.
Mar 17 20:59:53.985586 sshd[3625]: Accepted publickey for core from 139.178.89.65 port 53506 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 20:59:53.988223 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:59:53.996953 systemd-logind[1292]: New session 12 of user core.
Mar 17 20:59:53.998006 systemd[1]: Started session-12.scope.
Mar 17 20:59:54.711736 sshd[3625]: pam_unix(sshd:session): session closed for user core
Mar 17 20:59:54.716247 systemd-logind[1292]: Session 12 logged out. Waiting for processes to exit.
Mar 17 20:59:54.716892 systemd[1]: sshd@12-10.243.78.42:22-139.178.89.65:53506.service: Deactivated successfully.
Mar 17 20:59:54.718634 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 20:59:54.719352 systemd-logind[1292]: Removed session 12.
Mar 17 20:59:57.404470 update_engine[1294]: I0317 20:59:57.404261 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 20:59:57.405380 update_engine[1294]: I0317 20:59:57.404932 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 20:59:57.405507 update_engine[1294]: I0317 20:59:57.405418 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 20:59:57.406076 update_engine[1294]: E0317 20:59:57.406018 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 20:59:57.406205 update_engine[1294]: I0317 20:59:57.406181 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 17 20:59:59.854484 systemd[1]: Started sshd@13-10.243.78.42:22-139.178.89.65:53516.service.
Mar 17 21:00:00.739906 sshd[3639]: Accepted publickey for core from 139.178.89.65 port 53516 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:00.741937 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:00.749627 systemd-logind[1292]: New session 13 of user core.
Mar 17 21:00:00.750513 systemd[1]: Started session-13.scope.
Mar 17 21:00:01.463436 sshd[3639]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:01.468051 systemd[1]: sshd@13-10.243.78.42:22-139.178.89.65:53516.service: Deactivated successfully.
Mar 17 21:00:01.469159 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 21:00:01.470508 systemd-logind[1292]: Session 13 logged out. Waiting for processes to exit.
Mar 17 21:00:01.471577 systemd-logind[1292]: Removed session 13.
Mar 17 21:00:01.607029 systemd[1]: Started sshd@14-10.243.78.42:22-139.178.89.65:54028.service.
Mar 17 21:00:02.492193 sshd[3653]: Accepted publickey for core from 139.178.89.65 port 54028 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:02.492993 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:02.499614 systemd-logind[1292]: New session 14 of user core.
Mar 17 21:00:02.500535 systemd[1]: Started session-14.scope.
Mar 17 21:00:03.586590 sshd[3653]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:03.594983 systemd[1]: sshd@14-10.243.78.42:22-139.178.89.65:54028.service: Deactivated successfully.
Mar 17 21:00:03.596097 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 21:00:03.596606 systemd-logind[1292]: Session 14 logged out. Waiting for processes to exit.
Mar 17 21:00:03.598894 systemd-logind[1292]: Removed session 14.
Mar 17 21:00:03.734533 systemd[1]: Started sshd@15-10.243.78.42:22-139.178.89.65:54040.service.
Mar 17 21:00:04.634682 sshd[3663]: Accepted publickey for core from 139.178.89.65 port 54040 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:04.636846 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:04.644857 systemd-logind[1292]: New session 15 of user core.
Mar 17 21:00:04.645543 systemd[1]: Started session-15.scope.
Mar 17 21:00:07.404251 update_engine[1294]: I0317 21:00:07.404142 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 21:00:07.404962 update_engine[1294]: I0317 21:00:07.404616 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 21:00:07.405028 update_engine[1294]: I0317 21:00:07.404959 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 21:00:07.405533 update_engine[1294]: E0317 21:00:07.405494 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 21:00:07.405652 update_engine[1294]: I0317 21:00:07.405598 1294 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 17 21:00:07.755123 sshd[3663]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:07.759107 systemd[1]: sshd@15-10.243.78.42:22-139.178.89.65:54040.service: Deactivated successfully.
Mar 17 21:00:07.760572 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 21:00:07.760596 systemd-logind[1292]: Session 15 logged out. Waiting for processes to exit.
Mar 17 21:00:07.762289 systemd-logind[1292]: Removed session 15.
Mar 17 21:00:07.900026 systemd[1]: Started sshd@16-10.243.78.42:22-139.178.89.65:54044.service.
Mar 17 21:00:08.784601 sshd[3680]: Accepted publickey for core from 139.178.89.65 port 54044 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:08.787227 sshd[3680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:08.794712 systemd[1]: Started session-16.scope.
Mar 17 21:00:08.795009 systemd-logind[1292]: New session 16 of user core.
Mar 17 21:00:09.752375 sshd[3680]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:09.756425 systemd[1]: sshd@16-10.243.78.42:22-139.178.89.65:54044.service: Deactivated successfully.
Mar 17 21:00:09.757883 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 21:00:09.757913 systemd-logind[1292]: Session 16 logged out. Waiting for processes to exit.
Mar 17 21:00:09.759652 systemd-logind[1292]: Removed session 16.
Mar 17 21:00:09.909413 systemd[1]: Started sshd@17-10.243.78.42:22-139.178.89.65:54054.service.
Mar 17 21:00:10.804793 sshd[3690]: Accepted publickey for core from 139.178.89.65 port 54054 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:10.807537 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:10.814818 systemd-logind[1292]: New session 17 of user core.
Mar 17 21:00:10.815574 systemd[1]: Started session-17.scope.
Mar 17 21:00:11.509823 sshd[3690]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:11.514473 systemd[1]: sshd@17-10.243.78.42:22-139.178.89.65:54054.service: Deactivated successfully.
Mar 17 21:00:11.515815 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 21:00:11.519138 systemd-logind[1292]: Session 17 logged out. Waiting for processes to exit.
Mar 17 21:00:11.521026 systemd-logind[1292]: Removed session 17.
Mar 17 21:00:16.658011 systemd[1]: Started sshd@18-10.243.78.42:22-139.178.89.65:60414.service.
Mar 17 21:00:17.407125 update_engine[1294]: I0317 21:00:17.406684 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 21:00:17.407125 update_engine[1294]: I0317 21:00:17.407089 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 21:00:17.407963 update_engine[1294]: I0317 21:00:17.407341 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 21:00:17.407963 update_engine[1294]: E0317 21:00:17.407711 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 21:00:17.407963 update_engine[1294]: I0317 21:00:17.407795 1294 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 21:00:17.407963 update_engine[1294]: I0317 21:00:17.407837 1294 omaha_request_action.cc:621] Omaha request response:
Mar 17 21:00:17.408356 update_engine[1294]: E0317 21:00:17.408095 1294 omaha_request_action.cc:640] Omaha request network transfer failed.
Mar 17 21:00:17.408843 update_engine[1294]: I0317 21:00:17.408782 1294 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 17 21:00:17.408843 update_engine[1294]: I0317 21:00:17.408803 1294 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 21:00:17.408843 update_engine[1294]: I0317 21:00:17.408810 1294 update_attempter.cc:306] Processing Done.
Mar 17 21:00:17.409125 update_engine[1294]: E0317 21:00:17.408851 1294 update_attempter.cc:619] Update failed.
Mar 17 21:00:17.409125 update_engine[1294]: I0317 21:00:17.408866 1294 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 17 21:00:17.409125 update_engine[1294]: I0317 21:00:17.408874 1294 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 17 21:00:17.409125 update_engine[1294]: I0317 21:00:17.408880 1294 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 17 21:00:17.409125 update_engine[1294]: I0317 21:00:17.409021 1294 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 21:00:17.409381 update_engine[1294]: I0317 21:00:17.409122 1294 omaha_request_action.cc:270] Posting an Omaha request to disabled
Mar 17 21:00:17.409381 update_engine[1294]: I0317 21:00:17.409137 1294 omaha_request_action.cc:271] Request:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]:
Mar 17 21:00:17.409381 update_engine[1294]: I0317 21:00:17.409142 1294 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 21:00:17.409381 update_engine[1294]: I0317 21:00:17.409341 1294 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 21:00:17.409992 update_engine[1294]: I0317 21:00:17.409528 1294 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 21:00:17.410565 update_engine[1294]: E0317 21:00:17.410328 1294 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410418 1294 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410429 1294 omaha_request_action.cc:621] Omaha request response:
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410437 1294 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410444 1294 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410449 1294 update_attempter.cc:306] Processing Done.
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410454 1294 update_attempter.cc:310] Error event sent.
Mar 17 21:00:17.410565 update_engine[1294]: I0317 21:00:17.410465 1294 update_check_scheduler.cc:74] Next update check in 46m33s
Mar 17 21:00:17.411101 locksmithd[1335]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 17 21:00:17.411101 locksmithd[1335]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 17 21:00:17.553723 sshd[3704]: Accepted publickey for core from 139.178.89.65 port 60414 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:17.555817 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:17.564189 systemd[1]: Started session-18.scope.
Mar 17 21:00:17.565640 systemd-logind[1292]: New session 18 of user core.
Mar 17 21:00:18.253559 sshd[3704]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:18.257577 systemd[1]: sshd@18-10.243.78.42:22-139.178.89.65:60414.service: Deactivated successfully.
Mar 17 21:00:18.259421 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 21:00:18.259985 systemd-logind[1292]: Session 18 logged out. Waiting for processes to exit.
Mar 17 21:00:18.262800 systemd-logind[1292]: Removed session 18.
Mar 17 21:00:23.400360 systemd[1]: Started sshd@19-10.243.78.42:22-139.178.89.65:55894.service.
Mar 17 21:00:24.286053 sshd[3719]: Accepted publickey for core from 139.178.89.65 port 55894 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:24.288122 sshd[3719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:24.296099 systemd-logind[1292]: New session 19 of user core.
Mar 17 21:00:24.297842 systemd[1]: Started session-19.scope.
Mar 17 21:00:24.992391 sshd[3719]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:24.996402 systemd-logind[1292]: Session 19 logged out. Waiting for processes to exit.
Mar 17 21:00:24.996787 systemd[1]: sshd@19-10.243.78.42:22-139.178.89.65:55894.service: Deactivated successfully.
Mar 17 21:00:24.997940 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 21:00:24.999120 systemd-logind[1292]: Removed session 19.
Mar 17 21:00:30.139277 systemd[1]: Started sshd@20-10.243.78.42:22-139.178.89.65:55902.service.
Mar 17 21:00:31.025688 sshd[3734]: Accepted publickey for core from 139.178.89.65 port 55902 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:31.028121 sshd[3734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:31.036128 systemd-logind[1292]: New session 20 of user core.
Mar 17 21:00:31.037530 systemd[1]: Started session-20.scope.
Mar 17 21:00:31.749544 sshd[3734]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:31.753158 systemd-logind[1292]: Session 20 logged out. Waiting for processes to exit.
Mar 17 21:00:31.753656 systemd[1]: sshd@20-10.243.78.42:22-139.178.89.65:55902.service: Deactivated successfully.
Mar 17 21:00:31.754760 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 21:00:31.756483 systemd-logind[1292]: Removed session 20.
Mar 17 21:00:31.895648 systemd[1]: Started sshd@21-10.243.78.42:22-139.178.89.65:37330.service.
Mar 17 21:00:32.782767 sshd[3747]: Accepted publickey for core from 139.178.89.65 port 37330 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:32.784721 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:32.791812 systemd-logind[1292]: New session 21 of user core.
Mar 17 21:00:32.792738 systemd[1]: Started session-21.scope.
Mar 17 21:00:34.877006 env[1302]: time="2025-03-17T21:00:34.876893743Z" level=info msg="StopContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" with timeout 30 (s)"
Mar 17 21:00:34.879438 env[1302]: time="2025-03-17T21:00:34.879376286Z" level=info msg="Stop container \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" with signal terminated"
Mar 17 21:00:34.932254 systemd[1]: run-containerd-runc-k8s.io-f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05-runc.OCGw6L.mount: Deactivated successfully.
Mar 17 21:00:34.951959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465-rootfs.mount: Deactivated successfully.
Mar 17 21:00:34.961536 env[1302]: time="2025-03-17T21:00:34.961435270Z" level=info msg="shim disconnected" id=fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465
Mar 17 21:00:34.961876 env[1302]: time="2025-03-17T21:00:34.961806492Z" level=warning msg="cleaning up after shim disconnected" id=fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465 namespace=k8s.io
Mar 17 21:00:34.962048 env[1302]: time="2025-03-17T21:00:34.962019548Z" level=info msg="cleaning up dead shim"
Mar 17 21:00:34.983430 env[1302]: time="2025-03-17T21:00:34.983342718Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 21:00:34.988017 env[1302]: time="2025-03-17T21:00:34.987956717Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3784 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T21:00:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Mar 17 21:00:34.990818 env[1302]: time="2025-03-17T21:00:34.990777909Z" level=info msg="StopContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" returns successfully"
Mar 17 21:00:34.992903 env[1302]: time="2025-03-17T21:00:34.992852583Z" level=info msg="StopPodSandbox for \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\""
Mar 17 21:00:34.993470 env[1302]: time="2025-03-17T21:00:34.993418221Z" level=info msg="Container to stop \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 21:00:34.996652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae-shm.mount: Deactivated successfully.
Mar 17 21:00:34.999392 env[1302]: time="2025-03-17T21:00:34.999356238Z" level=info msg="StopContainer for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" with timeout 2 (s)"
Mar 17 21:00:35.000001 env[1302]: time="2025-03-17T21:00:34.999960227Z" level=info msg="Stop container \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" with signal terminated"
Mar 17 21:00:35.017472 systemd-networkd[1079]: lxc_health: Link DOWN
Mar 17 21:00:35.017484 systemd-networkd[1079]: lxc_health: Lost carrier
Mar 17 21:00:35.080826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae-rootfs.mount: Deactivated successfully.
Mar 17 21:00:35.089763 env[1302]: time="2025-03-17T21:00:35.087571377Z" level=info msg="shim disconnected" id=853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae Mar 17 21:00:35.089763 env[1302]: time="2025-03-17T21:00:35.088022081Z" level=warning msg="cleaning up after shim disconnected" id=853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae namespace=k8s.io Mar 17 21:00:35.089763 env[1302]: time="2025-03-17T21:00:35.088044861Z" level=info msg="cleaning up dead shim" Mar 17 21:00:35.109701 env[1302]: time="2025-03-17T21:00:35.109619069Z" level=info msg="shim disconnected" id=f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05 Mar 17 21:00:35.110326 env[1302]: time="2025-03-17T21:00:35.110026929Z" level=warning msg="cleaning up after shim disconnected" id=f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05 namespace=k8s.io Mar 17 21:00:35.110476 env[1302]: time="2025-03-17T21:00:35.110447394Z" level=info msg="cleaning up dead shim" Mar 17 21:00:35.114519 env[1302]: time="2025-03-17T21:00:35.114451658Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3844 runtime=io.containerd.runc.v2\n" Mar 17 21:00:35.118049 env[1302]: time="2025-03-17T21:00:35.117998705Z" level=info msg="TearDown network for sandbox \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\" successfully" Mar 17 21:00:35.118215 env[1302]: time="2025-03-17T21:00:35.118095687Z" level=info msg="StopPodSandbox for \"853c795d1dabec4c541b11ab8d96ddc967b48014335eb4882a91447bb0cf24ae\" returns successfully" Mar 17 21:00:35.131236 env[1302]: time="2025-03-17T21:00:35.130326473Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3859 runtime=io.containerd.runc.v2\n" Mar 17 21:00:35.133712 env[1302]: time="2025-03-17T21:00:35.133675555Z" level=info msg="StopContainer for 
\"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" returns successfully" Mar 17 21:00:35.134504 env[1302]: time="2025-03-17T21:00:35.134448250Z" level=info msg="StopPodSandbox for \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\"" Mar 17 21:00:35.134908 env[1302]: time="2025-03-17T21:00:35.134872125Z" level=info msg="Container to stop \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:35.135117 env[1302]: time="2025-03-17T21:00:35.135032589Z" level=info msg="Container to stop \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:35.135285 env[1302]: time="2025-03-17T21:00:35.135240029Z" level=info msg="Container to stop \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:35.135461 env[1302]: time="2025-03-17T21:00:35.135419720Z" level=info msg="Container to stop \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:35.135630 env[1302]: time="2025-03-17T21:00:35.135600894Z" level=info msg="Container to stop \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:35.174951 env[1302]: time="2025-03-17T21:00:35.174878917Z" level=info msg="shim disconnected" id=e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4 Mar 17 21:00:35.174951 env[1302]: time="2025-03-17T21:00:35.174950453Z" level=warning msg="cleaning up after shim disconnected" id=e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4 namespace=k8s.io Mar 17 21:00:35.175310 env[1302]: time="2025-03-17T21:00:35.174975190Z" level=info msg="cleaning up 
dead shim" Mar 17 21:00:35.187122 env[1302]: time="2025-03-17T21:00:35.186958001Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3896 runtime=io.containerd.runc.v2\n" Mar 17 21:00:35.187829 env[1302]: time="2025-03-17T21:00:35.187780146Z" level=info msg="TearDown network for sandbox \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" successfully" Mar 17 21:00:35.187829 env[1302]: time="2025-03-17T21:00:35.187823844Z" level=info msg="StopPodSandbox for \"e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4\" returns successfully" Mar 17 21:00:35.238248 kubelet[2203]: I0317 21:00:35.238181 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-cilium-config-path\") pod \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\" (UID: \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\") " Mar 17 21:00:35.239167 kubelet[2203]: I0317 21:00:35.239138 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-config-path\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.239331 kubelet[2203]: I0317 21:00:35.239302 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzj7b\" (UniqueName: \"kubernetes.io/projected/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-kube-api-access-xzj7b\") pod \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\" (UID: \"f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc\") " Mar 17 21:00:35.239550 kubelet[2203]: I0317 21:00:35.239516 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhxdm\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-kube-api-access-hhxdm\") 
pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.247538 kubelet[2203]: I0317 21:00:35.247487 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-hubble-tls\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.247798 kubelet[2203]: I0317 21:00:35.247743 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-hostproc\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.251918 kubelet[2203]: I0317 21:00:35.251878 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-cgroup\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.252035 kubelet[2203]: I0317 21:00:35.251942 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f89049d1-0772-473c-a5f5-ad0d957f2056-clustermesh-secrets\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.252622 kubelet[2203]: I0317 21:00:35.248173 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-hostproc" (OuterVolumeSpecName: "hostproc") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.254553 kubelet[2203]: I0317 21:00:35.254511 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.256770 kubelet[2203]: I0317 21:00:35.256739 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" (UID: "f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 21:00:35.258968 kubelet[2203]: I0317 21:00:35.258921 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-kube-api-access-xzj7b" (OuterVolumeSpecName: "kube-api-access-xzj7b") pod "f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" (UID: "f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc"). InnerVolumeSpecName "kube-api-access-xzj7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 21:00:35.259633 kubelet[2203]: I0317 21:00:35.259601 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-kube-api-access-hhxdm" (OuterVolumeSpecName: "kube-api-access-hhxdm") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "kube-api-access-hhxdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 21:00:35.260594 kubelet[2203]: I0317 21:00:35.260553 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 21:00:35.262611 kubelet[2203]: I0317 21:00:35.262575 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89049d1-0772-473c-a5f5-ad0d957f2056-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 21:00:35.263758 kubelet[2203]: I0317 21:00:35.263726 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 21:00:35.352741 kubelet[2203]: I0317 21:00:35.352677 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cni-path\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.352741 kubelet[2203]: I0317 21:00:35.352752 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-bpf-maps\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.352795 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-kernel\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.352841 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-lib-modules\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.352880 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-net\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.352921 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-etc-cni-netd\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.352954 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-run\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353094 kubelet[2203]: I0317 21:00:35.353007 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-xtables-lock\") pod \"f89049d1-0772-473c-a5f5-ad0d957f2056\" (UID: \"f89049d1-0772-473c-a5f5-ad0d957f2056\") " Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353109 2203 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hhxdm\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-kube-api-access-hhxdm\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353132 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-config-path\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353151 2203 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xzj7b\" (UniqueName: \"kubernetes.io/projected/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-kube-api-access-xzj7b\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353167 2203 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-hostproc\") on node 
\"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353181 2203 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f89049d1-0772-473c-a5f5-ad0d957f2056-hubble-tls\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353204 2203 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f89049d1-0772-473c-a5f5-ad0d957f2056-clustermesh-secrets\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353440 kubelet[2203]: I0317 21:00:35.353223 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-cgroup\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353857 kubelet[2203]: I0317 21:00:35.353238 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc-cilium-config-path\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.353857 kubelet[2203]: I0317 21:00:35.353295 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.353857 kubelet[2203]: I0317 21:00:35.353340 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cni-path" (OuterVolumeSpecName: "cni-path") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.353857 kubelet[2203]: I0317 21:00:35.353377 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.353857 kubelet[2203]: I0317 21:00:35.353409 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.354215 kubelet[2203]: I0317 21:00:35.353452 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.354215 kubelet[2203]: I0317 21:00:35.353479 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.354215 kubelet[2203]: I0317 21:00:35.353517 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.354215 kubelet[2203]: I0317 21:00:35.353567 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f89049d1-0772-473c-a5f5-ad0d957f2056" (UID: "f89049d1-0772-473c-a5f5-ad0d957f2056"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:35.384015 kubelet[2203]: I0317 21:00:35.382151 2203 scope.go:117] "RemoveContainer" containerID="f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05" Mar 17 21:00:35.397995 env[1302]: time="2025-03-17T21:00:35.397931051Z" level=info msg="RemoveContainer for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\"" Mar 17 21:00:35.405810 env[1302]: time="2025-03-17T21:00:35.405761967Z" level=info msg="RemoveContainer for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" returns successfully" Mar 17 21:00:35.406672 kubelet[2203]: I0317 21:00:35.406592 2203 scope.go:117] "RemoveContainer" containerID="2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be" Mar 17 21:00:35.408258 env[1302]: time="2025-03-17T21:00:35.408221356Z" level=info msg="RemoveContainer for \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\"" Mar 17 21:00:35.420410 env[1302]: time="2025-03-17T21:00:35.420355882Z" level=info msg="RemoveContainer for \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\" 
returns successfully" Mar 17 21:00:35.420904 kubelet[2203]: I0317 21:00:35.420876 2203 scope.go:117] "RemoveContainer" containerID="00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622" Mar 17 21:00:35.426635 env[1302]: time="2025-03-17T21:00:35.426591988Z" level=info msg="RemoveContainer for \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\"" Mar 17 21:00:35.435971 env[1302]: time="2025-03-17T21:00:35.435788674Z" level=info msg="RemoveContainer for \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\" returns successfully" Mar 17 21:00:35.436634 kubelet[2203]: I0317 21:00:35.436582 2203 scope.go:117] "RemoveContainer" containerID="d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8" Mar 17 21:00:35.440116 env[1302]: time="2025-03-17T21:00:35.439668015Z" level=info msg="RemoveContainer for \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\"" Mar 17 21:00:35.443795 env[1302]: time="2025-03-17T21:00:35.443674925Z" level=info msg="RemoveContainer for \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\" returns successfully" Mar 17 21:00:35.447522 kubelet[2203]: I0317 21:00:35.447490 2203 scope.go:117] "RemoveContainer" containerID="8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11" Mar 17 21:00:35.451319 env[1302]: time="2025-03-17T21:00:35.451265207Z" level=info msg="RemoveContainer for \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\"" Mar 17 21:00:35.455591 kubelet[2203]: I0317 21:00:35.455542 2203 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-kernel\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.455798 kubelet[2203]: I0317 21:00:35.455762 2203 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-lib-modules\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.455968 kubelet[2203]: I0317 21:00:35.455944 2203 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-host-proc-sys-net\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.456172 kubelet[2203]: I0317 21:00:35.456130 2203 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cni-path\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.456310 kubelet[2203]: I0317 21:00:35.456287 2203 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-bpf-maps\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.456432 kubelet[2203]: I0317 21:00:35.456410 2203 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-xtables-lock\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.456565 kubelet[2203]: I0317 21:00:35.456542 2203 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-etc-cni-netd\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.456707 env[1302]: time="2025-03-17T21:00:35.456543901Z" level=info msg="RemoveContainer for \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\" returns successfully" Mar 17 21:00:35.457313 kubelet[2203]: I0317 21:00:35.457123 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f89049d1-0772-473c-a5f5-ad0d957f2056-cilium-run\") on node 
\"srv-be9pf.gb1.brightbox.com\" DevicePath \"\"" Mar 17 21:00:35.457451 kubelet[2203]: I0317 21:00:35.457185 2203 scope.go:117] "RemoveContainer" containerID="f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05" Mar 17 21:00:35.458098 env[1302]: time="2025-03-17T21:00:35.457832441Z" level=error msg="ContainerStatus for \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\": not found" Mar 17 21:00:35.459333 kubelet[2203]: E0317 21:00:35.459271 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\": not found" containerID="f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05" Mar 17 21:00:35.461027 kubelet[2203]: I0317 21:00:35.460845 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05"} err="failed to get container status \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05\": not found" Mar 17 21:00:35.461188 kubelet[2203]: I0317 21:00:35.461025 2203 scope.go:117] "RemoveContainer" containerID="2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be" Mar 17 21:00:35.461557 env[1302]: time="2025-03-17T21:00:35.461446542Z" level=error msg="ContainerStatus for \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\": not found" Mar 17 
21:00:35.461861 kubelet[2203]: E0317 21:00:35.461831 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\": not found" containerID="2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be" Mar 17 21:00:35.462011 kubelet[2203]: I0317 21:00:35.461977 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be"} err="failed to get container status \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e17b3090eb050ee4bee58b7960bba37ad5d4c9436e7b08e96079593eebd79be\": not found" Mar 17 21:00:35.462152 kubelet[2203]: I0317 21:00:35.462128 2203 scope.go:117] "RemoveContainer" containerID="00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622" Mar 17 21:00:35.462765 env[1302]: time="2025-03-17T21:00:35.462548823Z" level=error msg="ContainerStatus for \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\": not found" Mar 17 21:00:35.462953 kubelet[2203]: E0317 21:00:35.462925 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\": not found" containerID="00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622" Mar 17 21:00:35.463116 kubelet[2203]: I0317 21:00:35.463083 2203 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622"} err="failed to get container status \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\": rpc error: code = NotFound desc = an error occurred when try to find container \"00fa54380829b13aaa5154290e47a000b90aa16620a3d9f97539be1c5ffd7622\": not found" Mar 17 21:00:35.463238 kubelet[2203]: I0317 21:00:35.463214 2203 scope.go:117] "RemoveContainer" containerID="d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8" Mar 17 21:00:35.463743 env[1302]: time="2025-03-17T21:00:35.463621159Z" level=error msg="ContainerStatus for \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\": not found" Mar 17 21:00:35.463965 kubelet[2203]: E0317 21:00:35.463927 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\": not found" containerID="d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8" Mar 17 21:00:35.464153 kubelet[2203]: I0317 21:00:35.464122 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8"} err="failed to get container status \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3d096557385ce350bd825a07d9502051fc00b725cdbf6d915cedec361116fd8\": not found" Mar 17 21:00:35.464284 kubelet[2203]: I0317 21:00:35.464261 2203 scope.go:117] "RemoveContainer" containerID="8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11" Mar 17 21:00:35.464779 env[1302]: 
time="2025-03-17T21:00:35.464647902Z" level=error msg="ContainerStatus for \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\": not found" Mar 17 21:00:35.464954 kubelet[2203]: E0317 21:00:35.464929 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\": not found" containerID="8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11" Mar 17 21:00:35.465158 kubelet[2203]: I0317 21:00:35.465125 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11"} err="failed to get container status \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fa933d3eae0faee8da1b21263e6f9152a6ef25c65f8b6e8abada0266a21aa11\": not found" Mar 17 21:00:35.465293 kubelet[2203]: I0317 21:00:35.465269 2203 scope.go:117] "RemoveContainer" containerID="fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465" Mar 17 21:00:35.467194 env[1302]: time="2025-03-17T21:00:35.466714061Z" level=info msg="RemoveContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\"" Mar 17 21:00:35.470361 env[1302]: time="2025-03-17T21:00:35.470328381Z" level=info msg="RemoveContainer for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" returns successfully" Mar 17 21:00:35.470671 kubelet[2203]: I0317 21:00:35.470646 2203 scope.go:117] "RemoveContainer" containerID="fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465" Mar 17 21:00:35.471011 env[1302]: time="2025-03-17T21:00:35.470953699Z" 
level=error msg="ContainerStatus for \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\": not found" Mar 17 21:00:35.471334 kubelet[2203]: E0317 21:00:35.471305 2203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\": not found" containerID="fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465" Mar 17 21:00:35.471480 kubelet[2203]: I0317 21:00:35.471448 2203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465"} err="failed to get container status \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc8db661cc47125798765ffe313243add08060de7bf8350ae5d75187dc4e9465\": not found" Mar 17 21:00:35.923940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9e36708f1a1783a8fc540cac1f3bee4b704ff666f477793fc80ee217461af05-rootfs.mount: Deactivated successfully. Mar 17 21:00:35.924206 systemd[1]: var-lib-kubelet-pods-f74c92c7\x2dcb8e\x2d42eb\x2daf32\x2d5e8a32bc6dfc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzj7b.mount: Deactivated successfully. Mar 17 21:00:35.924414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4-rootfs.mount: Deactivated successfully. Mar 17 21:00:35.924663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e18f8db86fcfee7fdd4142667c9b9a5c0b6e4cf14cee3484d52e60f771ba21f4-shm.mount: Deactivated successfully. 
Mar 17 21:00:35.924848 systemd[1]: var-lib-kubelet-pods-f89049d1\x2d0772\x2d473c\x2da5f5\x2dad0d957f2056-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:00:35.925042 systemd[1]: var-lib-kubelet-pods-f89049d1\x2d0772\x2d473c\x2da5f5\x2dad0d957f2056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhhxdm.mount: Deactivated successfully. Mar 17 21:00:35.925231 systemd[1]: var-lib-kubelet-pods-f89049d1\x2d0772\x2d473c\x2da5f5\x2dad0d957f2056-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 21:00:35.959507 kubelet[2203]: E0317 21:00:35.959423 2203 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:00:36.745034 kubelet[2203]: I0317 21:00:36.744865 2203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" path="/var/lib/kubelet/pods/f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc/volumes" Mar 17 21:00:36.745895 kubelet[2203]: I0317 21:00:36.745843 2203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" path="/var/lib/kubelet/pods/f89049d1-0772-473c-a5f5-ad0d957f2056/volumes" Mar 17 21:00:36.917796 sshd[3747]: pam_unix(sshd:session): session closed for user core Mar 17 21:00:36.921275 systemd[1]: sshd@21-10.243.78.42:22-139.178.89.65:37330.service: Deactivated successfully. Mar 17 21:00:36.922722 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 21:00:36.922754 systemd-logind[1292]: Session 21 logged out. Waiting for processes to exit. Mar 17 21:00:36.924815 systemd-logind[1292]: Removed session 21. Mar 17 21:00:37.062293 systemd[1]: Started sshd@22-10.243.78.42:22-139.178.89.65:37336.service. 
Mar 17 21:00:37.958241 sshd[3915]: Accepted publickey for core from 139.178.89.65 port 37336 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:00:37.960800 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:00:37.968036 systemd[1]: Started session-22.scope. Mar 17 21:00:37.968360 systemd-logind[1292]: New session 22 of user core. Mar 17 21:00:39.213330 kubelet[2203]: I0317 21:00:39.212370 2203 topology_manager.go:215] "Topology Admit Handler" podUID="84cdd7ce-7437-443c-8bcc-49f3578a2d5a" podNamespace="kube-system" podName="cilium-p5wtk" Mar 17 21:00:39.214961 kubelet[2203]: E0317 21:00:39.214926 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="clean-cilium-state" Mar 17 21:00:39.215086 kubelet[2203]: E0317 21:00:39.214962 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="cilium-agent" Mar 17 21:00:39.215086 kubelet[2203]: E0317 21:00:39.214995 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="mount-cgroup" Mar 17 21:00:39.215086 kubelet[2203]: E0317 21:00:39.215007 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="apply-sysctl-overwrites" Mar 17 21:00:39.215086 kubelet[2203]: E0317 21:00:39.215020 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="mount-bpf-fs" Mar 17 21:00:39.215086 kubelet[2203]: E0317 21:00:39.215031 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" containerName="cilium-operator" Mar 17 21:00:39.215396 kubelet[2203]: I0317 21:00:39.215168 2203 memory_manager.go:354] "RemoveStaleState removing state" podUID="f74c92c7-cb8e-42eb-af32-5e8a32bc6dfc" 
containerName="cilium-operator" Mar 17 21:00:39.215396 kubelet[2203]: I0317 21:00:39.215192 2203 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89049d1-0772-473c-a5f5-ad0d957f2056" containerName="cilium-agent" Mar 17 21:00:39.287672 kubelet[2203]: I0317 21:00:39.287569 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57j9w\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-kube-api-access-57j9w\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.287900 kubelet[2203]: I0317 21:00:39.287695 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-cgroup\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.287900 kubelet[2203]: I0317 21:00:39.287728 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-kernel\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.287900 kubelet[2203]: I0317 21:00:39.287812 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-clustermesh-secrets\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288117 kubelet[2203]: I0317 21:00:39.287905 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-bpf-maps\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288117 kubelet[2203]: I0317 21:00:39.288003 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-xtables-lock\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288117 kubelet[2203]: I0317 21:00:39.288036 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-etc-cni-netd\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288310 kubelet[2203]: I0317 21:00:39.288204 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hostproc\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288310 kubelet[2203]: I0317 21:00:39.288238 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cni-path\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288438 kubelet[2203]: I0317 21:00:39.288354 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-net\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " 
pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288514 kubelet[2203]: I0317 21:00:39.288435 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hubble-tls\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288589 kubelet[2203]: I0317 21:00:39.288528 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-run\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288647 kubelet[2203]: I0317 21:00:39.288597 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-config-path\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288724 kubelet[2203]: I0317 21:00:39.288687 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-ipsec-secrets\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.288778 kubelet[2203]: I0317 21:00:39.288751 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-lib-modules\") pod \"cilium-p5wtk\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " pod="kube-system/cilium-p5wtk" Mar 17 21:00:39.352037 sshd[3915]: pam_unix(sshd:session): session closed for user core Mar 17 
21:00:39.356240 systemd-logind[1292]: Session 22 logged out. Waiting for processes to exit. Mar 17 21:00:39.357514 systemd[1]: sshd@22-10.243.78.42:22-139.178.89.65:37336.service: Deactivated successfully. Mar 17 21:00:39.358503 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 21:00:39.360589 systemd-logind[1292]: Removed session 22. Mar 17 21:00:39.498702 systemd[1]: Started sshd@23-10.243.78.42:22-139.178.89.65:37342.service. Mar 17 21:00:39.536741 env[1302]: time="2025-03-17T21:00:39.536665372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5wtk,Uid:84cdd7ce-7437-443c-8bcc-49f3578a2d5a,Namespace:kube-system,Attempt:0,}" Mar 17 21:00:39.566574 env[1302]: time="2025-03-17T21:00:39.566450286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 21:00:39.566881 env[1302]: time="2025-03-17T21:00:39.566539572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 21:00:39.566881 env[1302]: time="2025-03-17T21:00:39.566575728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 21:00:39.567211 env[1302]: time="2025-03-17T21:00:39.566916122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2 pid=3940 runtime=io.containerd.runc.v2 Mar 17 21:00:39.621246 env[1302]: time="2025-03-17T21:00:39.621187042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p5wtk,Uid:84cdd7ce-7437-443c-8bcc-49f3578a2d5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\"" Mar 17 21:00:39.626781 env[1302]: time="2025-03-17T21:00:39.626382419Z" level=info msg="CreateContainer within sandbox \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 21:00:39.640045 env[1302]: time="2025-03-17T21:00:39.639990078Z" level=info msg="CreateContainer within sandbox \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\"" Mar 17 21:00:39.640950 env[1302]: time="2025-03-17T21:00:39.640914190Z" level=info msg="StartContainer for \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\"" Mar 17 21:00:39.717951 env[1302]: time="2025-03-17T21:00:39.717845459Z" level=info msg="StartContainer for \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\" returns successfully" Mar 17 21:00:39.762419 env[1302]: time="2025-03-17T21:00:39.762272527Z" level=info msg="shim disconnected" id=f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6 Mar 17 21:00:39.762419 env[1302]: time="2025-03-17T21:00:39.762333502Z" level=warning msg="cleaning up after shim disconnected" id=f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6 
namespace=k8s.io Mar 17 21:00:39.762419 env[1302]: time="2025-03-17T21:00:39.762349913Z" level=info msg="cleaning up dead shim" Mar 17 21:00:39.772647 env[1302]: time="2025-03-17T21:00:39.772577209Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4022 runtime=io.containerd.runc.v2\n" Mar 17 21:00:40.386950 sshd[3930]: Accepted publickey for core from 139.178.89.65 port 37342 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY Mar 17 21:00:40.389781 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 21:00:40.396949 systemd[1]: Started session-23.scope. Mar 17 21:00:40.398189 systemd-logind[1292]: New session 23 of user core. Mar 17 21:00:40.430428 env[1302]: time="2025-03-17T21:00:40.428293297Z" level=info msg="CreateContainer within sandbox \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 21:00:40.451656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634087854.mount: Deactivated successfully. Mar 17 21:00:40.465393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448242199.mount: Deactivated successfully. 
Mar 17 21:00:40.468917 env[1302]: time="2025-03-17T21:00:40.468870921Z" level=info msg="CreateContainer within sandbox \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\"" Mar 17 21:00:40.470060 env[1302]: time="2025-03-17T21:00:40.470026791Z" level=info msg="StartContainer for \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\"" Mar 17 21:00:40.539483 env[1302]: time="2025-03-17T21:00:40.539427345Z" level=info msg="StartContainer for \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\" returns successfully" Mar 17 21:00:40.571457 env[1302]: time="2025-03-17T21:00:40.571396443Z" level=info msg="shim disconnected" id=11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df Mar 17 21:00:40.571854 env[1302]: time="2025-03-17T21:00:40.571823407Z" level=warning msg="cleaning up after shim disconnected" id=11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df namespace=k8s.io Mar 17 21:00:40.571999 env[1302]: time="2025-03-17T21:00:40.571971003Z" level=info msg="cleaning up dead shim" Mar 17 21:00:40.585195 env[1302]: time="2025-03-17T21:00:40.585123113Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4084 runtime=io.containerd.runc.v2\n" Mar 17 21:00:40.960210 kubelet[2203]: E0317 21:00:40.960147 2203 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 21:00:41.134657 sshd[3930]: pam_unix(sshd:session): session closed for user core Mar 17 21:00:41.138666 systemd-logind[1292]: Session 23 logged out. Waiting for processes to exit. 
Mar 17 21:00:41.139288 systemd[1]: sshd@23-10.243.78.42:22-139.178.89.65:37342.service: Deactivated successfully. Mar 17 21:00:41.140389 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 21:00:41.140991 systemd-logind[1292]: Removed session 23. Mar 17 21:00:41.278658 systemd[1]: Started sshd@24-10.243.78.42:22-139.178.89.65:33190.service. Mar 17 21:00:41.429328 env[1302]: time="2025-03-17T21:00:41.429273085Z" level=info msg="StopPodSandbox for \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\"" Mar 17 21:00:41.429714 env[1302]: time="2025-03-17T21:00:41.429677428Z" level=info msg="Container to stop \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:41.429849 env[1302]: time="2025-03-17T21:00:41.429815817Z" level=info msg="Container to stop \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 21:00:41.432633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2-shm.mount: Deactivated successfully. Mar 17 21:00:41.471395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2-rootfs.mount: Deactivated successfully. 
Mar 17 21:00:41.479624 env[1302]: time="2025-03-17T21:00:41.479567469Z" level=info msg="shim disconnected" id=25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2 Mar 17 21:00:41.479834 env[1302]: time="2025-03-17T21:00:41.479627654Z" level=warning msg="cleaning up after shim disconnected" id=25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2 namespace=k8s.io Mar 17 21:00:41.479834 env[1302]: time="2025-03-17T21:00:41.479645107Z" level=info msg="cleaning up dead shim" Mar 17 21:00:41.489772 env[1302]: time="2025-03-17T21:00:41.489699160Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n" Mar 17 21:00:41.490868 env[1302]: time="2025-03-17T21:00:41.490829006Z" level=info msg="TearDown network for sandbox \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" successfully" Mar 17 21:00:41.491017 env[1302]: time="2025-03-17T21:00:41.490984203Z" level=info msg="StopPodSandbox for \"25fe432b45b5c0b18db80cbb039c233a633725a25c8eaaf9aa074d16aa73eba2\" returns successfully" Mar 17 21:00:41.619859 kubelet[2203]: I0317 21:00:41.619253 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-cgroup\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620047 kubelet[2203]: I0317 21:00:41.620019 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hostproc\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620168 kubelet[2203]: I0317 21:00:41.620093 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-run\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620168 kubelet[2203]: I0317 21:00:41.620133 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-bpf-maps\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620168 kubelet[2203]: I0317 21:00:41.620161 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-xtables-lock\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620190 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-kernel\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620224 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-clustermesh-secrets\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620247 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-etc-cni-netd\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620273 2203 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hubble-tls\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620298 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cni-path\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620415 kubelet[2203]: I0317 21:00:41.620321 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-lib-modules\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620769 kubelet[2203]: I0317 21:00:41.620346 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57j9w\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-kube-api-access-57j9w\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620769 kubelet[2203]: I0317 21:00:41.620368 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-net\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620769 kubelet[2203]: I0317 21:00:41.620396 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-config-path\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: 
\"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.620769 kubelet[2203]: I0317 21:00:41.620435 2203 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-ipsec-secrets\") pod \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\" (UID: \"84cdd7ce-7437-443c-8bcc-49f3578a2d5a\") " Mar 17 21:00:41.626264 systemd[1]: var-lib-kubelet-pods-84cdd7ce\x2d7437\x2d443c\x2d8bcc\x2d49f3578a2d5a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 21:00:41.629957 systemd[1]: var-lib-kubelet-pods-84cdd7ce\x2d7437\x2d443c\x2d8bcc\x2d49f3578a2d5a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 21:00:41.632601 kubelet[2203]: I0317 21:00:41.619363 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:41.632784 kubelet[2203]: I0317 21:00:41.627641 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 21:00:41.632953 kubelet[2203]: I0317 21:00:41.630804 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:41.633153 kubelet[2203]: I0317 21:00:41.633113 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hostproc" (OuterVolumeSpecName: "hostproc") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:41.633298 kubelet[2203]: I0317 21:00:41.633272 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 21:00:41.633453 kubelet[2203]: I0317 21:00:41.633427 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.633588 kubelet[2203]: I0317 21:00:41.633563 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.633725 kubelet[2203]: I0317 21:00:41.633698 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.634012 kubelet[2203]: I0317 21:00:41.633983 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 21:00:41.634240 kubelet[2203]: I0317 21:00:41.634205 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 21:00:41.634318 kubelet[2203]: I0317 21:00:41.634258 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.637437 kubelet[2203]: I0317 21:00:41.637399 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 21:00:41.637541 kubelet[2203]: I0317 21:00:41.637457 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cni-path" (OuterVolumeSpecName: "cni-path") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.637541 kubelet[2203]: I0317 21:00:41.637491 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 21:00:41.637802 kubelet[2203]: I0317 21:00:41.637771 2203 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-kube-api-access-57j9w" (OuterVolumeSpecName: "kube-api-access-57j9w") pod "84cdd7ce-7437-443c-8bcc-49f3578a2d5a" (UID: "84cdd7ce-7437-443c-8bcc-49f3578a2d5a"). InnerVolumeSpecName "kube-api-access-57j9w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 21:00:41.721496 kubelet[2203]: I0317 21:00:41.721412 2203 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hostproc\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.721935 kubelet[2203]: I0317 21:00:41.721821 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-cgroup\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.722329 kubelet[2203]: I0317 21:00:41.722304 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-run\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.722478 kubelet[2203]: I0317 21:00:41.722454 2203 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-xtables-lock\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.722720 kubelet[2203]: I0317 21:00:41.722696 2203 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-bpf-maps\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.722851 kubelet[2203]: I0317 21:00:41.722824 2203 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-kernel\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723008 kubelet[2203]: I0317 21:00:41.722953 2203 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-etc-cni-netd\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723163 kubelet[2203]: I0317 21:00:41.723139 2203 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-hubble-tls\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723289 kubelet[2203]: I0317 21:00:41.723266 2203 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-clustermesh-secrets\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723408 kubelet[2203]: I0317 21:00:41.723385 2203 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cni-path\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723554 kubelet[2203]: I0317 21:00:41.723529 2203 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-host-proc-sys-net\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723691 kubelet[2203]: I0317 21:00:41.723667 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-config-path\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723811 kubelet[2203]: I0317 21:00:41.723787 2203 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-cilium-ipsec-secrets\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.723955 kubelet[2203]: I0317 21:00:41.723918 2203 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-lib-modules\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:41.724122 kubelet[2203]: I0317 21:00:41.724045 2203 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-57j9w\" (UniqueName: \"kubernetes.io/projected/84cdd7ce-7437-443c-8bcc-49f3578a2d5a-kube-api-access-57j9w\") on node \"srv-be9pf.gb1.brightbox.com\" DevicePath \"\""
Mar 17 21:00:42.164926 sshd[4107]: Accepted publickey for core from 139.178.89.65 port 33190 ssh2: RSA SHA256:zyhiPLENj58svNToN4BOPPS+na2TgK0IE73Z79n4eiY
Mar 17 21:00:42.166917 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 21:00:42.174550 systemd[1]: Started session-24.scope.
Mar 17 21:00:42.176361 systemd-logind[1292]: New session 24 of user core.
Mar 17 21:00:42.407462 systemd[1]: var-lib-kubelet-pods-84cdd7ce\x2d7437\x2d443c\x2d8bcc\x2d49f3578a2d5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57j9w.mount: Deactivated successfully.
Mar 17 21:00:42.407672 systemd[1]: var-lib-kubelet-pods-84cdd7ce\x2d7437\x2d443c\x2d8bcc\x2d49f3578a2d5a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 21:00:42.433442 kubelet[2203]: I0317 21:00:42.433406 2203 scope.go:117] "RemoveContainer" containerID="11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df"
Mar 17 21:00:42.435280 env[1302]: time="2025-03-17T21:00:42.435212654Z" level=info msg="RemoveContainer for \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\""
Mar 17 21:00:42.441798 env[1302]: time="2025-03-17T21:00:42.441719302Z" level=info msg="RemoveContainer for \"11105145527cfed23c1d7b7f6922b64153878eb01bc32ce2f0f3a937124cf6df\" returns successfully"
Mar 17 21:00:42.443119 kubelet[2203]: I0317 21:00:42.443085 2203 scope.go:117] "RemoveContainer" containerID="f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6"
Mar 17 21:00:42.445475 env[1302]: time="2025-03-17T21:00:42.445429885Z" level=info msg="RemoveContainer for \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\""
Mar 17 21:00:42.450343 env[1302]: time="2025-03-17T21:00:42.450207353Z" level=info msg="RemoveContainer for \"f5d301d801befdf6bf13289dfc7666ce2704e07924190b8b1c0d87560a0e95a6\" returns successfully"
Mar 17 21:00:42.491089 kubelet[2203]: I0317 21:00:42.488795 2203 topology_manager.go:215] "Topology Admit Handler" podUID="9ce8a96b-b0d9-47a8-b70a-846d087f38c0" podNamespace="kube-system" podName="cilium-dt6b6"
Mar 17 21:00:42.491089 kubelet[2203]: E0317 21:00:42.488900 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84cdd7ce-7437-443c-8bcc-49f3578a2d5a" containerName="mount-cgroup"
Mar 17 21:00:42.491089 kubelet[2203]: E0317 21:00:42.488918 2203 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84cdd7ce-7437-443c-8bcc-49f3578a2d5a" containerName="apply-sysctl-overwrites"
Mar 17 21:00:42.491089 kubelet[2203]: I0317 21:00:42.488970 2203 memory_manager.go:354] "RemoveStaleState removing state" podUID="84cdd7ce-7437-443c-8bcc-49f3578a2d5a" containerName="apply-sysctl-overwrites"
Mar 17 21:00:42.530007 kubelet[2203]: I0317 21:00:42.529907 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-host-proc-sys-net\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530007 kubelet[2203]: I0317 21:00:42.529988 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-cilium-run\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530024 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-bpf-maps\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530086 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-etc-cni-netd\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530146 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-699lt\" (UniqueName: \"kubernetes.io/projected/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-kube-api-access-699lt\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530179 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-host-proc-sys-kernel\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530226 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-cni-path\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530334 kubelet[2203]: I0317 21:00:42.530254 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-xtables-lock\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530300 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-cilium-config-path\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530341 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-hostproc\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530390 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-lib-modules\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530425 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-cilium-ipsec-secrets\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530477 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-cilium-cgroup\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530670 kubelet[2203]: I0317 21:00:42.530513 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-hubble-tls\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.530982 kubelet[2203]: I0317 21:00:42.530576 2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ce8a96b-b0d9-47a8-b70a-846d087f38c0-clustermesh-secrets\") pod \"cilium-dt6b6\" (UID: \"9ce8a96b-b0d9-47a8-b70a-846d087f38c0\") " pod="kube-system/cilium-dt6b6"
Mar 17 21:00:42.744223 kubelet[2203]: I0317 21:00:42.744090 2203 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84cdd7ce-7437-443c-8bcc-49f3578a2d5a" path="/var/lib/kubelet/pods/84cdd7ce-7437-443c-8bcc-49f3578a2d5a/volumes"
Mar 17 21:00:42.796120 env[1302]: time="2025-03-17T21:00:42.795578762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dt6b6,Uid:9ce8a96b-b0d9-47a8-b70a-846d087f38c0,Namespace:kube-system,Attempt:0,}"
Mar 17 21:00:42.811114 env[1302]: time="2025-03-17T21:00:42.810928771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 21:00:42.811313 env[1302]: time="2025-03-17T21:00:42.811152947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 21:00:42.811313 env[1302]: time="2025-03-17T21:00:42.811224106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 21:00:42.811671 env[1302]: time="2025-03-17T21:00:42.811593125Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077 pid=4164 runtime=io.containerd.runc.v2
Mar 17 21:00:42.886162 env[1302]: time="2025-03-17T21:00:42.882242119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dt6b6,Uid:9ce8a96b-b0d9-47a8-b70a-846d087f38c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\""
Mar 17 21:00:42.892273 env[1302]: time="2025-03-17T21:00:42.892234067Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 21:00:42.925157 env[1302]: time="2025-03-17T21:00:42.925041116Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8b19980583f07f05c9cf66ee3163cbd61497cba7cbb40cfc818e34bd6fe5cb15\""
Mar 17 21:00:42.926483 env[1302]: time="2025-03-17T21:00:42.926444988Z" level=info msg="StartContainer for \"8b19980583f07f05c9cf66ee3163cbd61497cba7cbb40cfc818e34bd6fe5cb15\""
Mar 17 21:00:43.016432 env[1302]: time="2025-03-17T21:00:43.015992509Z" level=info msg="StartContainer for \"8b19980583f07f05c9cf66ee3163cbd61497cba7cbb40cfc818e34bd6fe5cb15\" returns successfully"
Mar 17 21:00:43.053562 env[1302]: time="2025-03-17T21:00:43.053504301Z" level=info msg="shim disconnected" id=8b19980583f07f05c9cf66ee3163cbd61497cba7cbb40cfc818e34bd6fe5cb15
Mar 17 21:00:43.053977 env[1302]: time="2025-03-17T21:00:43.053947754Z" level=warning msg="cleaning up after shim disconnected" id=8b19980583f07f05c9cf66ee3163cbd61497cba7cbb40cfc818e34bd6fe5cb15 namespace=k8s.io
Mar 17 21:00:43.054159 env[1302]: time="2025-03-17T21:00:43.054118791Z" level=info msg="cleaning up dead shim"
Mar 17 21:00:43.066081 env[1302]: time="2025-03-17T21:00:43.066013064Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4247 runtime=io.containerd.runc.v2\n"
Mar 17 21:00:43.447378 env[1302]: time="2025-03-17T21:00:43.447312800Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 21:00:43.475311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719952645.mount: Deactivated successfully.
Mar 17 21:00:43.486894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621595681.mount: Deactivated successfully.
Mar 17 21:00:43.490806 env[1302]: time="2025-03-17T21:00:43.490754401Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e50229cf5d10b7c20c4288c92be1ec38c591addf0b0ac00ca69c1ea02a9afe5\""
Mar 17 21:00:43.492452 env[1302]: time="2025-03-17T21:00:43.492003602Z" level=info msg="StartContainer for \"9e50229cf5d10b7c20c4288c92be1ec38c591addf0b0ac00ca69c1ea02a9afe5\""
Mar 17 21:00:43.590679 env[1302]: time="2025-03-17T21:00:43.590402343Z" level=info msg="StartContainer for \"9e50229cf5d10b7c20c4288c92be1ec38c591addf0b0ac00ca69c1ea02a9afe5\" returns successfully"
Mar 17 21:00:43.616715 env[1302]: time="2025-03-17T21:00:43.616654045Z" level=info msg="shim disconnected" id=9e50229cf5d10b7c20c4288c92be1ec38c591addf0b0ac00ca69c1ea02a9afe5
Mar 17 21:00:43.616715 env[1302]: time="2025-03-17T21:00:43.616714921Z" level=warning msg="cleaning up after shim disconnected" id=9e50229cf5d10b7c20c4288c92be1ec38c591addf0b0ac00ca69c1ea02a9afe5 namespace=k8s.io
Mar 17 21:00:43.616715 env[1302]: time="2025-03-17T21:00:43.616730919Z" level=info msg="cleaning up dead shim"
Mar 17 21:00:43.627411 env[1302]: time="2025-03-17T21:00:43.627360990Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4306 runtime=io.containerd.runc.v2\n"
Mar 17 21:00:43.978622 kubelet[2203]: I0317 21:00:43.978543 2203 setters.go:580] "Node became not ready" node="srv-be9pf.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T21:00:43Z","lastTransitionTime":"2025-03-17T21:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 21:00:44.447275 env[1302]: time="2025-03-17T21:00:44.447205414Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 21:00:44.465679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419901909.mount: Deactivated successfully.
Mar 17 21:00:44.474311 env[1302]: time="2025-03-17T21:00:44.474249432Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca\""
Mar 17 21:00:44.476396 env[1302]: time="2025-03-17T21:00:44.475308313Z" level=info msg="StartContainer for \"c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca\""
Mar 17 21:00:44.570838 env[1302]: time="2025-03-17T21:00:44.570706423Z" level=info msg="StartContainer for \"c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca\" returns successfully"
Mar 17 21:00:44.602235 env[1302]: time="2025-03-17T21:00:44.602165122Z" level=info msg="shim disconnected" id=c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca
Mar 17 21:00:44.602235 env[1302]: time="2025-03-17T21:00:44.602235993Z" level=warning msg="cleaning up after shim disconnected" id=c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca namespace=k8s.io
Mar 17 21:00:44.602235 env[1302]: time="2025-03-17T21:00:44.602254412Z" level=info msg="cleaning up dead shim"
Mar 17 21:00:44.614448 env[1302]: time="2025-03-17T21:00:44.614383683Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4364 runtime=io.containerd.runc.v2\n"
Mar 17 21:00:45.410843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3adb4ddf34fcfe5e0bdf4d346804914fc2858f631a9be519221a418e3a180ca-rootfs.mount: Deactivated successfully.
Mar 17 21:00:45.456563 env[1302]: time="2025-03-17T21:00:45.456371770Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 21:00:45.484366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589928068.mount: Deactivated successfully.
Mar 17 21:00:45.494215 env[1302]: time="2025-03-17T21:00:45.494137000Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3\""
Mar 17 21:00:45.496123 env[1302]: time="2025-03-17T21:00:45.496077981Z" level=info msg="StartContainer for \"1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3\""
Mar 17 21:00:45.588356 env[1302]: time="2025-03-17T21:00:45.588303335Z" level=info msg="StartContainer for \"1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3\" returns successfully"
Mar 17 21:00:45.614573 env[1302]: time="2025-03-17T21:00:45.614506842Z" level=info msg="shim disconnected" id=1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3
Mar 17 21:00:45.615006 env[1302]: time="2025-03-17T21:00:45.614974573Z" level=warning msg="cleaning up after shim disconnected" id=1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3 namespace=k8s.io
Mar 17 21:00:45.615200 env[1302]: time="2025-03-17T21:00:45.615157162Z" level=info msg="cleaning up dead shim"
Mar 17 21:00:45.627134 env[1302]: time="2025-03-17T21:00:45.627095967Z" level=warning msg="cleanup warnings time=\"2025-03-17T21:00:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4421 runtime=io.containerd.runc.v2\n"
Mar 17 21:00:45.961890 kubelet[2203]: E0317 21:00:45.961686 2203 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 21:00:46.411103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1de430843d034a661281206447c911d72f21ee03e46a6557c382d481cd68f8b3-rootfs.mount: Deactivated successfully.
Mar 17 21:00:46.476140 env[1302]: time="2025-03-17T21:00:46.473372864Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 21:00:46.504834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874316906.mount: Deactivated successfully.
Mar 17 21:00:46.514791 env[1302]: time="2025-03-17T21:00:46.514689327Z" level=info msg="CreateContainer within sandbox \"ce75a9a1abc60e8e62fa2661e8f94738060dcbfe37aea570d567aff0a715a077\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18\""
Mar 17 21:00:46.518140 env[1302]: time="2025-03-17T21:00:46.516479599Z" level=info msg="StartContainer for \"033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18\""
Mar 17 21:00:46.609894 env[1302]: time="2025-03-17T21:00:46.609838197Z" level=info msg="StartContainer for \"033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18\" returns successfully"
Mar 17 21:00:47.303112 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 21:00:47.500341 kubelet[2203]: I0317 21:00:47.500248 2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dt6b6" podStartSLOduration=5.500202315 podStartE2EDuration="5.500202315s" podCreationTimestamp="2025-03-17 21:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 21:00:47.498351293 +0000 UTC m=+157.104092859" watchObservedRunningTime="2025-03-17 21:00:47.500202315 +0000 UTC m=+157.105943869"
Mar 17 21:00:49.025438 systemd[1]: run-containerd-runc-k8s.io-033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18-runc.acVosB.mount: Deactivated successfully.
Mar 17 21:00:51.030325 systemd-networkd[1079]: lxc_health: Link UP
Mar 17 21:00:51.050110 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 21:00:51.045427 systemd-networkd[1079]: lxc_health: Gained carrier
Mar 17 21:00:51.340122 systemd[1]: run-containerd-runc-k8s.io-033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18-runc.JKzqKw.mount: Deactivated successfully.
Mar 17 21:00:52.521771 systemd[1]: Started sshd@25-10.243.78.42:22-103.218.122.171:49970.service.
Mar 17 21:00:52.651485 systemd-networkd[1079]: lxc_health: Gained IPv6LL
Mar 17 21:00:53.637899 systemd[1]: run-containerd-runc-k8s.io-033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18-runc.EKQ1Td.mount: Deactivated successfully.
Mar 17 21:00:54.539464 sshd[5038]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.218.122.171 user=root
Mar 17 21:00:55.884016 systemd[1]: Started sshd@26-10.243.78.42:22-103.218.122.171:50018.service.
Mar 17 21:00:55.919333 systemd[1]: run-containerd-runc-k8s.io-033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18-runc.O4mf1i.mount: Deactivated successfully.
Mar 17 21:00:56.466859 sshd[5038]: Failed password for root from 103.218.122.171 port 49970 ssh2
Mar 17 21:00:57.533612 sshd[5068]: Invalid user pi from 103.218.122.171 port 50018
Mar 17 21:00:57.897481 sshd[5068]: pam_faillock(sshd:auth): User unknown
Mar 17 21:00:57.899499 sshd[5068]: pam_unix(sshd:auth): check pass; user unknown
Mar 17 21:00:57.899615 sshd[5068]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.218.122.171
Mar 17 21:00:57.900525 sshd[5068]: pam_faillock(sshd:auth): User unknown
Mar 17 21:00:58.157143 systemd[1]: run-containerd-runc-k8s.io-033e3cfbd242d0c68d7af557da1389112d5c1f93f7e07a2210425666ead4cf18-runc.IePTfS.mount: Deactivated successfully.
Mar 17 21:00:58.241124 sshd[5038]: Connection closed by authenticating user root 103.218.122.171 port 49970 [preauth]
Mar 17 21:00:58.241641 systemd[1]: sshd@25-10.243.78.42:22-103.218.122.171:49970.service: Deactivated successfully.
Mar 17 21:00:58.404702 sshd[4107]: pam_unix(sshd:session): session closed for user core
Mar 17 21:00:58.409832 systemd[1]: sshd@24-10.243.78.42:22-139.178.89.65:33190.service: Deactivated successfully.
Mar 17 21:00:58.411277 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 21:00:58.413271 systemd-logind[1292]: Session 24 logged out. Waiting for processes to exit.
Mar 17 21:00:58.415609 systemd-logind[1292]: Removed session 24.
Mar 17 21:00:58.742335 systemd[1]: Started sshd@27-10.243.78.42:22-103.218.122.171:50062.service.
Mar 17 21:01:00.238395 sshd[5068]: Failed password for invalid user pi from 103.218.122.171 port 50018 ssh2
Mar 17 21:01:00.280417 sshd[5132]: Invalid user hive from 103.218.122.171 port 50062
Mar 17 21:01:00.627732 sshd[5132]: pam_faillock(sshd:auth): User unknown
Mar 17 21:01:00.628574 sshd[5132]: pam_unix(sshd:auth): check pass; user unknown
Mar 17 21:01:00.628629 sshd[5132]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=103.218.122.171
Mar 17 21:01:00.629467 sshd[5132]: pam_faillock(sshd:auth): User unknown
Mar 17 21:01:01.522656 systemd[1]: Started sshd@28-10.243.78.42:22-103.218.122.171:50106.service.
Mar 17 21:01:01.988097 sshd[5068]: Connection closed by invalid user pi 103.218.122.171 port 50018 [preauth]
Mar 17 21:01:01.989433 systemd[1]: sshd@26-10.243.78.42:22-103.218.122.171:50018.service: Deactivated successfully.