Dec 13 14:47:57.895586 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:47:57.897761 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:47:57.897788 kernel: BIOS-provided physical RAM map:
Dec 13 14:47:57.897798 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:47:57.897807 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:47:57.897817 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:47:57.897828 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 14:47:57.897837 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 14:47:57.897847 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:47:57.897856 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 14:47:57.897870 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 14:47:57.897879 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:47:57.897889 kernel: NX (Execute Disable) protection: active
Dec 13 14:47:57.897898 kernel: SMBIOS 2.8 present.
Dec 13 14:47:57.897910 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Dec 13 14:47:57.897920 kernel: Hypervisor detected: KVM
Dec 13 14:47:57.897935 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:47:57.897945 kernel: kvm-clock: cpu 0, msr 7f19a001, primary cpu clock
Dec 13 14:47:57.897955 kernel: kvm-clock: using sched offset of 4833819509 cycles
Dec 13 14:47:57.897966 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:47:57.897976 kernel: tsc: Detected 2799.998 MHz processor
Dec 13 14:47:57.897987 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:47:57.897997 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:47:57.898008 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 14:47:57.898018 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:47:57.898032 kernel: Using GB pages for direct mapping
Dec 13 14:47:57.898042 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:47:57.898052 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Dec 13 14:47:57.898062 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898073 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898083 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898093 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 14:47:57.898103 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898113 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898127 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898138 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:47:57.898148 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 14:47:57.898158 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 14:47:57.898168 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 14:47:57.898179 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 14:47:57.898195 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 14:47:57.898209 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 14:47:57.898220 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 14:47:57.898231 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:47:57.898242 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:47:57.898253 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 14:47:57.898263 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 14:47:57.898274 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 14:47:57.898289 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 14:47:57.898300 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 14:47:57.898310 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 14:47:57.898321 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 14:47:57.898332 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 14:47:57.898343 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 14:47:57.898353 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 14:47:57.898364 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 14:47:57.898375 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 14:47:57.898385 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 14:47:57.898400 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 14:47:57.898411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:47:57.898422 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 14:47:57.898433 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 14:47:57.898444 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 14:47:57.898454 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 14:47:57.898465 kernel: Zone ranges:
Dec 13 14:47:57.898476 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:47:57.898487 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 14:47:57.898502 kernel: Normal empty
Dec 13 14:47:57.898513 kernel: Movable zone start for each node
Dec 13 14:47:57.898523 kernel: Early memory node ranges
Dec 13 14:47:57.898534 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:47:57.898545 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 14:47:57.898556 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 14:47:57.898567 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:47:57.898578 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:47:57.898588 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 14:47:57.898603 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:47:57.898614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:47:57.898641 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:47:57.898654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:47:57.898665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:47:57.898675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:47:57.898686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:47:57.898697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:47:57.898708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:47:57.898724 kernel: TSC deadline timer available
Dec 13 14:47:57.898735 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 14:47:57.898756 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 14:47:57.898767 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:47:57.898778 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:47:57.898788 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 14:47:57.898799 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 14:47:57.898810 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 14:47:57.898820 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 14:47:57.898835 kernel: kvm-guest: stealtime: cpu 0, msr 7fa1c0c0
Dec 13 14:47:57.898847 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:47:57.898857 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:47:57.898868 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 14:47:57.898879 kernel: Policy zone: DMA32
Dec 13 14:47:57.898891 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:47:57.898902 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:47:57.898913 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:47:57.898928 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:47:57.898939 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:47:57.898950 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 14:47:57.898961 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 14:47:57.898972 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:47:57.898983 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:47:57.898994 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:47:57.899004 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:47:57.899016 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:47:57.899031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 14:47:57.899042 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:47:57.899053 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:47:57.899064 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:47:57.899075 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 14:47:57.899086 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 14:47:57.899097 kernel: random: crng init done
Dec 13 14:47:57.899120 kernel: Console: colour VGA+ 80x25
Dec 13 14:47:57.899132 kernel: printk: console [tty0] enabled
Dec 13 14:47:57.899143 kernel: printk: console [ttyS0] enabled
Dec 13 14:47:57.899155 kernel: ACPI: Core revision 20210730
Dec 13 14:47:57.899166 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:47:57.899181 kernel: x2apic enabled
Dec 13 14:47:57.899192 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:47:57.899204 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Dec 13 14:47:57.899216 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 13 14:47:57.899227 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:47:57.899242 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 14:47:57.899254 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 14:47:57.899265 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:47:57.899276 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:47:57.899288 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:47:57.899299 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:47:57.899310 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 14:47:57.899322 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:47:57.899333 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:47:57.899344 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:47:57.899359 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 14:47:57.899371 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 14:47:57.899387 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:47:57.899399 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:47:57.899410 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:47:57.899422 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:47:57.899433 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:47:57.899444 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:47:57.899455 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:47:57.899466 kernel: LSM: Security Framework initializing
Dec 13 14:47:57.899477 kernel: SELinux: Initializing.
Dec 13 14:47:57.899493 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:47:57.899505 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:47:57.899516 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 14:47:57.899527 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 14:47:57.899539 kernel: signal: max sigframe size: 1776
Dec 13 14:47:57.899550 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:47:57.899562 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:47:57.899573 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:47:57.899584 kernel: x86: Booting SMP configuration:
Dec 13 14:47:57.899595 kernel: .... node #0, CPUs: #1
Dec 13 14:47:57.899610 kernel: kvm-clock: cpu 1, msr 7f19a041, secondary cpu clock
Dec 13 14:47:57.899635 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 14:47:57.899649 kernel: kvm-guest: stealtime: cpu 1, msr 7fa5c0c0
Dec 13 14:47:57.899660 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:47:57.899671 kernel: smpboot: Max logical packages: 16
Dec 13 14:47:57.899683 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Dec 13 14:47:57.899694 kernel: devtmpfs: initialized
Dec 13 14:47:57.899705 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:47:57.899717 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:47:57.899733 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 14:47:57.899753 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:47:57.899765 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:47:57.899776 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:47:57.899788 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:47:57.899799 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:47:57.899810 kernel: audit: type=2000 audit(1734101276.683:1): state=initialized audit_enabled=0 res=1
Dec 13 14:47:57.899821 kernel: cpuidle: using governor menu
Dec 13 14:47:57.899833 kernel: ACPI: bus type PCI registered
Dec 13 14:47:57.899848 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:47:57.899860 kernel: dca service started, version 1.12.1
Dec 13 14:47:57.899872 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:47:57.899883 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:47:57.899894 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:47:57.899906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:47:57.899917 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:47:57.899929 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:47:57.899940 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:47:57.899955 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:47:57.899966 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:47:57.899978 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:47:57.899989 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:47:57.900000 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:47:57.900012 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:47:57.900023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:47:57.900035 kernel: ACPI: Interpreter enabled
Dec 13 14:47:57.900046 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:47:57.900061 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:47:57.900073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:47:57.900084 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:47:57.900095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:47:57.900357 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:47:57.900519 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:47:57.900687 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:47:57.900705 kernel: PCI host bridge to bus 0000:00
Dec 13 14:47:57.900865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:47:57.901000 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:47:57.901141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:47:57.901288 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 14:47:57.901440 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:47:57.901580 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:47:57.901735 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:47:57.901915 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:47:57.902072 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 14:47:57.902222 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 14:47:57.902370 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 14:47:57.902515 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 14:47:57.906424 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:47:57.906604 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.910831 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 14:47:57.911004 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.911158 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 14:47:57.911338 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.911490 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 14:47:57.911673 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.911837 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 14:47:57.912005 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.912153 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 14:47:57.912307 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.912454 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 14:47:57.912615 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.912790 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 14:47:57.912947 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 14:47:57.913093 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 14:47:57.913261 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:47:57.913425 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Dec 13 14:47:57.913577 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 14:47:57.913773 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 14:47:57.913921 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 14:47:57.914077 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:47:57.914243 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Dec 13 14:47:57.914399 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 14:47:57.914554 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 14:47:57.914721 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:47:57.914887 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:47:57.915070 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:47:57.915232 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Dec 13 14:47:57.915389 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 14:47:57.915562 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:47:57.915735 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 14:47:57.915915 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 14:47:57.916070 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 14:47:57.916230 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:47:57.916377 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 14:47:57.916531 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:47:57.916699 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.916897 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 14:47:57.917070 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 14:47:57.917232 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 14:47:57.917388 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:47:57.917543 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 14:47:57.921818 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:47:57.921986 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.922164 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 14:47:57.922323 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 14:47:57.922475 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:47:57.922640 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:47:57.922803 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:47:57.922968 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 14:47:57.923124 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 14:47:57.923280 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:47:57.923426 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:47:57.923572 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:47:57.923734 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:47:57.923894 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:47:57.924039 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:47:57.924185 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:47:57.924328 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:47:57.924491 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:47:57.924672 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:47:57.924832 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:47:57.924998 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:47:57.925145 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:47:57.925290 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:47:57.925436 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:47:57.925592 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:47:57.925767 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:47:57.925914 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:47:57.925932 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:47:57.925944 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:47:57.925956 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:47:57.925967 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:47:57.925979 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:47:57.925990 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:47:57.926002 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:47:57.926020 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:47:57.926032 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:47:57.926043 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:47:57.926055 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:47:57.926066 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:47:57.926078 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:47:57.926090 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:47:57.926101 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:47:57.926113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:47:57.926128 kernel: iommu: Default domain type: Translated
Dec 13 14:47:57.926140 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:47:57.926284 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:47:57.926430 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:47:57.926572 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:47:57.926590 kernel: vgaarb: loaded
Dec 13 14:47:57.926602 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:47:57.926613 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:47:57.926644 kernel: PTP clock support registered
Dec 13 14:47:57.926656 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:47:57.926668 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:47:57.926679 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:47:57.926691 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 14:47:57.926702 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:47:57.926714 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:47:57.926725 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:47:57.926737 kernel: pnp: PnP ACPI init
Dec 13 14:47:57.926934 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:47:57.926953 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:47:57.926965 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:47:57.926977 kernel: NET: Registered PF_INET protocol family
Dec 13 14:47:57.926989 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:47:57.927001 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:47:57.927013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:47:57.927024 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:47:57.927042 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:47:57.927053 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:47:57.927065 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:47:57.927077 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:47:57.927089 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:47:57.927100 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:47:57.927244 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:47:57.927391 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:47:57.927543 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 14:47:57.927716 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 14:47:57.927875 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 14:47:57.928020 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 14:47:57.928164 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 14:47:57.928307 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 14:47:57.928457 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 14:47:57.928600 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 14:47:57.937835 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 14:47:57.937998 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 14:47:57.938150 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 14:47:57.938321 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 14:47:57.938502 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 14:47:57.938659 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Dec 13 14:47:57.938871 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 14:47:57.939025 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.939179 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 14:47:57.939345 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Dec 13 14:47:57.939492 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 14:47:57.939648 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.939811 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 14:47:57.939958 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Dec 13 14:47:57.940105 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 14:47:57.940265 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:47:57.940421 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 14:47:57.940577 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Dec 13 14:47:57.940753 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 14:47:57.940909 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:47:57.941061 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 14:47:57.941215 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Dec 13 14:47:57.941361 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 14:47:57.941514 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:47:57.941684 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 14:47:57.941843 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Dec 13 14:47:57.941987 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 14:47:57.942141 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:47:57.942300 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 14:47:57.942458 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Dec 13 14:47:57.942606 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 14:47:57.942803 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:47:57.942949 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 14:47:57.943106 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Dec 13 14:47:57.943263 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 14:47:57.943428 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 14:47:57.943579 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 14:47:57.943757 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Dec 13 14:47:57.943919 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 14:47:57.944078 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 14:47:57.944211 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:47:57.944342 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:47:57.944483 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:47:57.944625 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 14:47:57.951446 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:47:57.951613 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 14:47:57.951802 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Dec 13 14:47:57.951943 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 14:47:57.952081 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.952231 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff]
Dec 13 14:47:57.952377 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 14:47:57.952529 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 14:47:57.952693 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff]
Dec 13 14:47:57.952847 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 14:47:57.952986 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 14:47:57.953133 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff]
Dec 13 14:47:57.953272 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 14:47:57.953410 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 14:47:57.953565 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff]
Dec 13 14:47:57.953720 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 14:47:57.953869 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 14:47:57.954040 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff]
Dec 13 14:47:57.954181 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 14:47:57.954342 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 14:47:57.954518 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff]
Dec 13 14:47:57.954662 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 14:47:57.954838 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 14:47:57.954986 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff]
Dec 13 14:47:57.955135 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13
14:47:57.955269 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 14:47:57.955421 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Dec 13 14:47:57.955559 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 14:47:57.955715 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 14:47:57.955756 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:47:57.955770 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:47:57.955782 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 14:47:57.955795 kernel: software IO TLB: mapped [mem 0x0000000074000000-0x0000000078000000] (64MB) Dec 13 14:47:57.955807 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:47:57.955826 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 14:47:57.955839 kernel: Initialise system trusted keyrings Dec 13 14:47:57.955851 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:47:57.955864 kernel: Key type asymmetric registered Dec 13 14:47:57.955875 kernel: Asymmetric key parser 'x509' registered Dec 13 14:47:57.955887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:47:57.955899 kernel: io scheduler mq-deadline registered Dec 13 14:47:57.955911 kernel: io scheduler kyber registered Dec 13 14:47:57.955924 kernel: io scheduler bfq registered Dec 13 14:47:57.956109 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 14:47:57.956272 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 14:47:57.956416 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.956577 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 14:47:57.956772 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 
14:47:57.956922 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.957069 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 14:47:57.957223 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 14:47:57.957369 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.957528 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 14:47:57.963935 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 14:47:57.964097 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.964248 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 14:47:57.964408 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 14:47:57.964552 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.964730 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 14:47:57.964893 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 14:47:57.965040 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.965218 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 14:47:57.965380 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 14:47:57.965539 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.965703 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 14:47:57.966880 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 14:47:57.967036 
kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:47:57.967069 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:47:57.967090 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:47:57.967102 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:47:57.967115 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:47:57.967139 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:47:57.967152 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:47:57.967164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:47:57.967176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:47:57.967189 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:47:57.967375 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 14:47:57.967557 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 14:47:57.967720 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T14:47:57 UTC (1734101277) Dec 13 14:47:57.967872 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 14:47:57.967890 kernel: intel_pstate: CPU model not supported Dec 13 14:47:57.967903 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:47:57.967915 kernel: Segment Routing with IPv6 Dec 13 14:47:57.967927 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:47:57.967939 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:47:57.967958 kernel: Key type dns_resolver registered Dec 13 14:47:57.967970 kernel: IPI shorthand broadcast: enabled Dec 13 14:47:57.967983 kernel: sched_clock: Marking stable (958491917, 214336296)->(1431362099, -258533886) Dec 13 14:47:57.967996 kernel: registered taskstats version 1 Dec 13 14:47:57.968012 kernel: Loading compiled-in X.509 certificates Dec 13 14:47:57.968025 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:47:57.968037 kernel: Key type .fscrypt registered Dec 13 14:47:57.968049 kernel: Key type fscrypt-provisioning registered Dec 13 14:47:57.968061 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:47:57.968077 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:47:57.968090 kernel: ima: No architecture policies found Dec 13 14:47:57.968102 kernel: clk: Disabling unused clocks Dec 13 14:47:57.968114 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:47:57.968127 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:47:57.968139 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:47:57.968152 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:47:57.968164 kernel: Run /init as init process Dec 13 14:47:57.968176 kernel: with arguments: Dec 13 14:47:57.968192 kernel: /init Dec 13 14:47:57.968204 kernel: with environment: Dec 13 14:47:57.968216 kernel: HOME=/ Dec 13 14:47:57.968228 kernel: TERM=linux Dec 13 14:47:57.968240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:47:57.968262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:47:57.968279 systemd[1]: Detected virtualization kvm. Dec 13 14:47:57.968297 systemd[1]: Detected architecture x86-64. Dec 13 14:47:57.968326 systemd[1]: Running in initrd. Dec 13 14:47:57.968338 systemd[1]: No hostname configured, using default hostname. Dec 13 14:47:57.968350 systemd[1]: Hostname set to . Dec 13 14:47:57.968376 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 14:47:57.968388 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:47:57.968399 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:47:57.968411 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:47:57.968423 systemd[1]: Reached target paths.target.
Dec 13 14:47:57.968451 systemd[1]: Reached target slices.target.
Dec 13 14:47:57.968462 systemd[1]: Reached target swap.target.
Dec 13 14:47:57.968474 systemd[1]: Reached target timers.target.
Dec 13 14:47:57.968486 systemd[1]: Listening on iscsid.socket.
Dec 13 14:47:57.968498 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:47:57.968510 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:47:57.968522 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:47:57.968537 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:47:57.968550 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:47:57.968561 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:47:57.968573 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:47:57.968585 systemd[1]: Reached target sockets.target.
Dec 13 14:47:57.968596 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:47:57.968608 systemd[1]: Finished network-cleanup.service.
Dec 13 14:47:57.968620 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:47:57.968631 systemd[1]: Starting systemd-journald.service...
Dec 13 14:47:57.968646 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:47:57.968668 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:47:57.968682 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:47:57.968694 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:47:57.968705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:47:57.968746 systemd-journald[202]: Journal started
Dec 13 14:47:57.968825 systemd-journald[202]: Runtime Journal (/run/log/journal/d57fb89da13747809847896b2123108c) is 4.7M, max 38.1M, 33.3M free.
Dec 13 14:47:57.895704 systemd-modules-load[203]: Inserted module 'overlay'
Dec 13 14:47:57.998024 kernel: Bridge firewalling registered
Dec 13 14:47:57.998050 systemd[1]: Started systemd-resolved.service.
Dec 13 14:47:57.998070 kernel: SCSI subsystem initialized
Dec 13 14:47:57.998094 kernel: audit: type=1130 audit(1734101277.988:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:57.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:57.943098 systemd-resolved[204]: Positive Trust Anchors:
Dec 13 14:47:57.943119 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:47:57.943174 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:47:58.012758 systemd[1]: Started systemd-journald.service.
Dec 13 14:47:58.012787 kernel: audit: type=1130 audit(1734101277.990:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.012806 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:47:58.012823 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:47:57.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:57.952200 systemd-resolved[204]: Defaulting to hostname 'linux'.
Dec 13 14:47:58.027701 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:47:58.027748 kernel: audit: type=1130 audit(1734101278.015:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.027768 kernel: audit: type=1130 audit(1734101278.021:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:57.971355 systemd-modules-load[203]: Inserted module 'br_netfilter'
Dec 13 14:47:58.033701 kernel: audit: type=1130 audit(1734101278.027:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.016964 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:47:58.039729 kernel: audit: type=1130 audit(1734101278.033:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.021596 systemd-modules-load[203]: Inserted module 'dm_multipath'
Dec 13 14:47:58.022428 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:47:58.028603 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:47:58.034554 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:47:58.041240 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:47:58.043932 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:47:58.049615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:47:58.057562 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:47:58.078002 kernel: audit: type=1130 audit(1734101278.057:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.063368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:47:58.083685 kernel: audit: type=1130 audit(1734101278.077:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.083189 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:47:58.091039 kernel: audit: type=1130 audit(1734101278.083:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.085342 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:47:58.098606 dracut-cmdline[223]: dracut-dracut-053
Dec 13 14:47:58.101583 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:47:58.182652 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:47:58.203654 kernel: iscsi: registered transport (tcp)
Dec 13 14:47:58.231233 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:47:58.231268 kernel: QLogic iSCSI HBA Driver
Dec 13 14:47:58.276936 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:47:58.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.278774 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:47:58.335720 kernel: raid6: sse2x4 gen() 14043 MB/s
Dec 13 14:47:58.353798 kernel: raid6: sse2x4 xor() 7984 MB/s
Dec 13 14:47:58.371751 kernel: raid6: sse2x2 gen() 9629 MB/s
Dec 13 14:47:58.389711 kernel: raid6: sse2x2 xor() 8073 MB/s
Dec 13 14:47:58.407682 kernel: raid6: sse2x1 gen() 9733 MB/s
Dec 13 14:47:58.426326 kernel: raid6: sse2x1 xor() 7565 MB/s
Dec 13 14:47:58.426392 kernel: raid6: using algorithm sse2x4 gen() 14043 MB/s
Dec 13 14:47:58.426410 kernel: raid6: .... xor() 7984 MB/s, rmw enabled
Dec 13 14:47:58.427589 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 14:47:58.444671 kernel: xor: automatically using best checksumming function avx
Dec 13 14:47:58.555686 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:47:58.568156 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:47:58.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.568000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:47:58.568000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:47:58.569907 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:47:58.586776 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Dec 13 14:47:58.595552 systemd[1]: Started systemd-udevd.service.
Dec 13 14:47:58.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.599087 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:47:58.615522 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Dec 13 14:47:58.652293 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:47:58.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.653965 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:47:58.739278 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:47:58.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:47:58.836319 kernel: ACPI: bus type USB registered
Dec 13 14:47:58.836381 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 14:47:58.902078 kernel: usbcore: registered new interface driver usbfs
Dec 13 14:47:58.902103 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:47:58.902120 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:47:58.902136 kernel: GPT:17805311 != 125829119
Dec 13 14:47:58.902151 kernel: usbcore: registered new interface driver hub
Dec 13 14:47:58.902175 kernel: usbcore: registered new device driver usb
Dec 13 14:47:58.902191 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:47:58.902206 kernel: GPT:17805311 != 125829119
Dec 13 14:47:58.902220 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:47:58.902236 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:47:58.902251 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 14:47:58.902267 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:47:58.902282 kernel: libata version 3.00 loaded.
Dec 13 14:47:58.915652 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 14:47:58.966356 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 14:47:58.966393 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 14:47:58.966579 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 14:47:58.966786 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 14:47:58.966956 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 14:47:58.967121 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 14:47:58.967295 kernel: scsi host0: ahci
Dec 13 14:47:58.967490 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 14:47:58.967692 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 14:47:58.967874 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 14:47:58.968041 kernel: hub 1-0:1.0: USB hub found
Dec 13 14:47:58.968240 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 14:47:58.968426 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
Dec 13 14:47:58.968445 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 14:47:58.968917 kernel: hub 2-0:1.0: USB hub found
Dec 13 14:47:58.969126 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 14:47:58.969311 kernel: scsi host1: ahci
Dec 13 14:47:58.969492 kernel: scsi host2: ahci
Dec 13 14:47:58.969690 kernel: scsi host3: ahci
Dec 13 14:47:58.969937 kernel: scsi host4: ahci
Dec 13 14:47:58.970118 kernel: scsi host5: ahci
Dec 13 14:47:58.970316 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Dec 13 14:47:58.970336 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Dec 13 14:47:58.970352 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Dec 13 14:47:58.970367 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Dec 13 14:47:58.970383 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Dec 13 14:47:58.970398 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Dec 13 14:47:58.948047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:47:59.045861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:47:59.050191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:47:59.050970 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:47:59.056907 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:47:59.058976 systemd[1]: Starting disk-uuid.service...
Dec 13 14:47:59.065197 disk-uuid[523]: Primary Header is updated.
Dec 13 14:47:59.065197 disk-uuid[523]: Secondary Entries is updated.
Dec 13 14:47:59.065197 disk-uuid[523]: Secondary Header is updated.
Dec 13 14:47:59.068560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:47:59.173654 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 14:47:59.281660 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.284900 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.284938 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.286488 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.289797 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.289841 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:47:59.314648 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:47:59.321028 kernel: usbcore: registered new interface driver usbhid
Dec 13 14:47:59.321073 kernel: usbhid: USB HID core driver
Dec 13 14:47:59.329670 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 14:47:59.329744 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 14:48:00.078671 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:48:00.079801 disk-uuid[524]: The operation has completed successfully.
Dec 13 14:48:00.129757 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:48:00.130905 systemd[1]: Finished disk-uuid.service.
Dec 13 14:48:00.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:00.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:00.137865 systemd[1]: Starting verity-setup.service...
Dec 13 14:48:00.160653 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 14:48:00.210472 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:48:00.213870 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:48:00.216540 systemd[1]: Finished verity-setup.service. Dec 13 14:48:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.307654 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:48:00.307993 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:48:00.308785 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:48:00.309755 systemd[1]: Starting ignition-setup.service... Dec 13 14:48:00.313291 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:48:00.327921 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:48:00.327970 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:48:00.327988 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:48:00.341481 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:48:00.349003 systemd[1]: Finished ignition-setup.service. Dec 13 14:48:00.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.350785 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:48:00.455401 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:48:00.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:48:00.456000 audit: BPF prog-id=9 op=LOAD Dec 13 14:48:00.458088 systemd[1]: Starting systemd-networkd.service... Dec 13 14:48:00.487898 systemd-networkd[705]: lo: Link UP Dec 13 14:48:00.487915 systemd-networkd[705]: lo: Gained carrier Dec 13 14:48:00.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.489158 systemd-networkd[705]: Enumeration completed Dec 13 14:48:00.489876 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:48:00.493375 systemd-networkd[705]: eth0: Link UP Dec 13 14:48:00.493382 systemd-networkd[705]: eth0: Gained carrier Dec 13 14:48:00.501824 systemd[1]: Started systemd-networkd.service. Dec 13 14:48:00.504536 systemd[1]: Reached target network.target. Dec 13 14:48:00.506757 systemd[1]: Starting iscsiuio.service... Dec 13 14:48:00.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.516474 systemd[1]: Started iscsiuio.service. Dec 13 14:48:00.520178 systemd[1]: Starting iscsid.service... Dec 13 14:48:00.521909 systemd-networkd[705]: eth0: DHCPv4 address 10.243.72.102/30, gateway 10.243.72.101 acquired from 10.243.72.101 Dec 13 14:48:00.526119 iscsid[711]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:48:00.526119 iscsid[711]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:48:00.526119 iscsid[711]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:48:00.526119 iscsid[711]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:48:00.526119 iscsid[711]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:48:00.526119 iscsid[711]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:48:00.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.527834 systemd[1]: Started iscsid.service. Dec 13 14:48:00.532057 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:48:00.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.552778 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:48:00.553596 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:48:00.554198 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:48:00.554838 systemd[1]: Reached target remote-fs.target. Dec 13 14:48:00.558937 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:48:00.562823 ignition[627]: Ignition 2.14.0 Dec 13 14:48:00.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.567322 systemd[1]: Finished ignition-fetch-offline.service. 
Dec 13 14:48:00.562848 ignition[627]: Stage: fetch-offline Dec 13 14:48:00.562963 ignition[627]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:00.570530 systemd[1]: Starting ignition-fetch.service... Dec 13 14:48:00.563013 ignition[627]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:00.564958 ignition[627]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:00.565129 ignition[627]: parsed url from cmdline: "" Dec 13 14:48:00.565136 ignition[627]: no config URL provided Dec 13 14:48:00.565145 ignition[627]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:48:00.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.576947 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:48:00.565160 ignition[627]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:48:00.565179 ignition[627]: failed to fetch config: resource requires networking Dec 13 14:48:00.565352 ignition[627]: Ignition finished successfully Dec 13 14:48:00.582968 ignition[724]: Ignition 2.14.0 Dec 13 14:48:00.582984 ignition[724]: Stage: fetch Dec 13 14:48:00.583218 ignition[724]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:00.583253 ignition[724]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:00.584725 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:00.584875 ignition[724]: parsed url from cmdline: "" Dec 13 14:48:00.584882 ignition[724]: no config URL provided Dec 13 14:48:00.584891 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:48:00.584906 ignition[724]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:48:00.587908 ignition[724]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 14:48:00.587952 ignition[724]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 13 14:48:00.588658 ignition[724]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 14:48:00.612773 ignition[724]: GET result: OK Dec 13 14:48:00.612940 ignition[724]: parsing config with SHA512: 89077c25f4f4ddf0867cce6155dd964de05b0c9f6d303aef4116ee082e0f345df36a97de1c33e27f195c77f4cbc3c07e902bc1a8428485c649e80090a620f4df Dec 13 14:48:00.625696 unknown[724]: fetched base config from "system" Dec 13 14:48:00.625719 unknown[724]: fetched base config from "system" Dec 13 14:48:00.626310 ignition[724]: fetch: fetch complete Dec 13 14:48:00.625729 unknown[724]: fetched user config from "openstack" Dec 13 14:48:00.626339 ignition[724]: fetch: fetch passed Dec 13 14:48:00.628232 systemd[1]: Finished ignition-fetch.service. Dec 13 14:48:00.626398 ignition[724]: Ignition finished successfully Dec 13 14:48:00.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.630754 systemd[1]: Starting ignition-kargs.service... Dec 13 14:48:00.643200 ignition[731]: Ignition 2.14.0 Dec 13 14:48:00.643219 ignition[731]: Stage: kargs Dec 13 14:48:00.643414 ignition[731]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:00.643456 ignition[731]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:00.644781 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:00.646504 ignition[731]: kargs: kargs passed Dec 13 14:48:00.647747 systemd[1]: Finished ignition-kargs.service. Dec 13 14:48:00.646574 ignition[731]: Ignition finished successfully Dec 13 14:48:00.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:48:00.649821 systemd[1]: Starting ignition-disks.service... Dec 13 14:48:00.661691 ignition[737]: Ignition 2.14.0 Dec 13 14:48:00.662664 ignition[737]: Stage: disks Dec 13 14:48:00.663491 ignition[737]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:00.664477 ignition[737]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:00.665876 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:00.668531 ignition[737]: disks: disks passed Dec 13 14:48:00.669387 ignition[737]: Ignition finished successfully Dec 13 14:48:00.670993 systemd[1]: Finished ignition-disks.service. Dec 13 14:48:00.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.671881 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:48:00.672941 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:48:00.674200 systemd[1]: Reached target local-fs.target. Dec 13 14:48:00.675465 systemd[1]: Reached target sysinit.target. Dec 13 14:48:00.676616 systemd[1]: Reached target basic.target. Dec 13 14:48:00.679376 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:48:00.699938 systemd-fsck[744]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:48:00.704560 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:48:00.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.706454 systemd[1]: Mounting sysroot.mount... Dec 13 14:48:00.721664 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Dec 13 14:48:00.720325 systemd[1]: Mounted sysroot.mount. Dec 13 14:48:00.721076 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:48:00.723421 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:48:00.724607 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:48:00.725800 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 14:48:00.726582 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:48:00.726667 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:48:00.733593 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:48:00.736877 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:48:00.745313 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:48:00.756810 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:48:00.767811 initrd-setup-root[771]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:48:00.777526 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:48:00.848148 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:48:00.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.850044 systemd[1]: Starting ignition-mount.service... Dec 13 14:48:00.851654 systemd[1]: Starting sysroot-boot.service... Dec 13 14:48:00.860032 bash[798]: umount: /sysroot/usr/share/oem: not mounted. 
Dec 13 14:48:00.871057 ignition[799]: INFO : Ignition 2.14.0 Dec 13 14:48:00.871057 ignition[799]: INFO : Stage: mount Dec 13 14:48:00.872569 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:00.872569 ignition[799]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:00.872569 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:00.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.876699 ignition[799]: INFO : mount: mount passed Dec 13 14:48:00.876699 ignition[799]: INFO : Ignition finished successfully Dec 13 14:48:00.875045 systemd[1]: Finished ignition-mount.service. Dec 13 14:48:00.890483 coreos-metadata[750]: Dec 13 14:48:00.890 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:48:00.894261 systemd[1]: Finished sysroot-boot.service. Dec 13 14:48:00.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:00.911744 coreos-metadata[750]: Dec 13 14:48:00.911 INFO Fetch successful Dec 13 14:48:00.913846 coreos-metadata[750]: Dec 13 14:48:00.911 INFO wrote hostname srv-997hs.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 14:48:00.915525 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 14:48:00.915694 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 14:48:00.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:48:00.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:01.235930 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:48:01.249647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (808) Dec 13 14:48:01.254679 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:48:01.254725 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:48:01.254744 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:48:01.260315 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:48:01.262884 systemd[1]: Starting ignition-files.service... Dec 13 14:48:01.283688 ignition[828]: INFO : Ignition 2.14.0 Dec 13 14:48:01.284813 ignition[828]: INFO : Stage: files Dec 13 14:48:01.285417 ignition[828]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:48:01.285417 ignition[828]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:48:01.287591 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:48:01.288527 ignition[828]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:48:01.289496 ignition[828]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:48:01.289496 ignition[828]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:48:01.292483 ignition[828]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:48:01.293549 ignition[828]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:48:01.295071 unknown[828]: wrote ssh authorized keys file for 
user: core Dec 13 14:48:01.296139 ignition[828]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:48:01.297808 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:48:01.297808 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:48:01.297808 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:48:01.297808 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:48:01.445701 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:48:01.667228 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:48:01.669006 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:48:01.670216 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:48:02.236004 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 14:48:02.290302 systemd-networkd[705]: eth0: Gained IPv6LL Dec 13 14:48:02.507415 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:48:02.508990 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:48:02.510301 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/install.sh" Dec 13 14:48:02.511369 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:48:02.512608 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:48:02.513712 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:48:02.514794 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:48:02.521328 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:48:02.521328 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:48:02.521328 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:48:02.987915 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 14:48:03.580921 systemd-networkd[705]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d219:24:19ff:fef3:4866/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d219:24:19ff:fef3:4866/64 assigned by NDisc. Dec 13 14:48:03.580946 systemd-networkd[705]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 14:48:03.957201 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:48:03.957201 ignition[828]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:48:03.957201 ignition[828]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:48:03.957201 ignition[828]: INFO : files: op(e): [started] processing unit "containerd.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(e): [finished] processing unit "containerd.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(10): [started] processing unit "prepare-helm.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(10): op(11): 
[finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(10): [finished] processing unit "prepare-helm.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:48:03.961738 ignition[828]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:48:03.987124 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 14:48:03.987171 kernel: audit: type=1130 audit(1734101283.971:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:03.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:03.968517 systemd[1]: Finished ignition-files.service. Dec 13 14:48:03.988843 ignition[828]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:48:03.988843 ignition[828]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:48:03.988843 ignition[828]: INFO : files: files passed Dec 13 14:48:03.988843 ignition[828]: INFO : Ignition finished successfully Dec 13 14:48:04.008298 kernel: audit: type=1130 audit(1734101283.989:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:48:04.008359 kernel: audit: type=1131 audit(1734101283.989:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.008383 kernel: audit: type=1130 audit(1734101284.000:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:03.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:03.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:03.974486 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:48:03.982319 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:48:04.011284 initrd-setup-root-after-ignition[853]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:48:03.983690 systemd[1]: Starting ignition-quench.service... Dec 13 14:48:03.988612 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:48:03.988934 systemd[1]: Finished ignition-quench.service. Dec 13 14:48:03.990888 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:48:04.001664 systemd[1]: Reached target ignition-complete.target. 
Dec 13 14:48:04.008679 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:48:04.040098 kernel: audit: type=1130 audit(1734101284.029:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.040137 kernel: audit: type=1131 audit(1734101284.029:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.028752 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:48:04.028877 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:48:04.030621 systemd[1]: Reached target initrd-fs.target. Dec 13 14:48:04.040677 systemd[1]: Reached target initrd.target. Dec 13 14:48:04.042057 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:48:04.043132 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:48:04.059055 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:48:04.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.061721 systemd[1]: Starting initrd-cleanup.service... 
Dec 13 14:48:04.067767 kernel: audit: type=1130 audit(1734101284.059:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.075288 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:48:04.076736 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:48:04.078165 systemd[1]: Stopped target timers.target. Dec 13 14:48:04.079471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:48:04.080395 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:48:04.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.083828 systemd[1]: Stopped target initrd.target. Dec 13 14:48:04.088606 kernel: audit: type=1131 audit(1734101284.081:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.087376 systemd[1]: Stopped target basic.target. Dec 13 14:48:04.088084 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:48:04.089281 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:48:04.090445 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:48:04.091709 systemd[1]: Stopped target remote-fs.target. Dec 13 14:48:04.092915 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:48:04.094154 systemd[1]: Stopped target sysinit.target. Dec 13 14:48:04.095362 systemd[1]: Stopped target local-fs.target. Dec 13 14:48:04.096564 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:48:04.097776 systemd[1]: Stopped target swap.target. 
Dec 13 14:48:04.105067 kernel: audit: type=1131 audit(1734101284.099:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.098909 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:48:04.099059 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:48:04.111993 kernel: audit: type=1131 audit(1734101284.106:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.100308 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:48:04.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.105823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:48:04.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:04.106026 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:48:04.107127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:48:04.107327 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Dec 13 14:48:04.112915 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:48:04.113118 systemd[1]: Stopped ignition-files.service.
Dec 13 14:48:04.115237 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:48:04.123916 iscsid[711]: iscsid shutting down.
Dec 13 14:48:04.127752 systemd[1]: Stopping iscsid.service...
Dec 13 14:48:04.129017 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:48:04.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.129259 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:48:04.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.133047 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:48:04.134542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:48:04.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.134764 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:48:04.135487 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:48:04.135679 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:48:04.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.148183 ignition[866]: INFO : Ignition 2.14.0
Dec 13 14:48:04.148183 ignition[866]: INFO : Stage: umount
Dec 13 14:48:04.138121 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:48:04.138269 systemd[1]: Stopped iscsid.service.
Dec 13 14:48:04.140266 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:48:04.141174 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:48:04.141308 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:48:04.144177 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:48:04.153560 ignition[866]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:48:04.153560 ignition[866]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Dec 13 14:48:04.144293 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:48:04.158444 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 14:48:04.158444 ignition[866]: INFO : umount: umount passed
Dec 13 14:48:04.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.156997 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:48:04.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.165834 ignition[866]: INFO : Ignition finished successfully
Dec 13 14:48:04.159857 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:48:04.159995 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:48:04.161060 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:48:04.161136 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:48:04.161760 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:48:04.161813 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:48:04.162408 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:48:04.162460 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:48:04.163066 systemd[1]: Stopped target network.target.
Dec 13 14:48:04.163677 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:48:04.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.163755 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:48:04.165076 systemd[1]: Stopped target paths.target.
Dec 13 14:48:04.166274 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:48:04.170404 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:48:04.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.171162 systemd[1]: Stopped target slices.target.
Dec 13 14:48:04.172410 systemd[1]: Stopped target sockets.target.
Dec 13 14:48:04.173608 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:48:04.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.173684 systemd[1]: Closed iscsid.socket.
Dec 13 14:48:04.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.174784 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:48:04.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.174835 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:48:04.175902 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:48:04.175965 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:48:04.177357 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:48:04.179197 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:48:04.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.180692 systemd-networkd[705]: eth0: DHCPv6 lease lost
Dec 13 14:48:04.203000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:48:04.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.183043 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:48:04.183194 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:48:04.206000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:48:04.184744 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:48:04.184790 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:48:04.187089 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:48:04.187788 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:48:04.187900 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:48:04.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.190851 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:48:04.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.190923 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:48:04.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.192443 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:48:04.192506 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:48:04.197723 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:48:04.200402 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:48:04.201103 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:48:04.201235 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:48:04.203879 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:48:04.204064 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:48:04.206427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:48:04.206510 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:48:04.209514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:48:04.209564 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:48:04.224381 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:48:04.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.224454 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:48:04.225865 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:48:04.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.225929 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:48:04.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.227046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:48:04.227103 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:48:04.229304 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:48:04.238005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:48:04.238091 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:48:04.240191 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:48:04.240331 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:48:04.241792 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:48:04.241908 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:48:04.305434 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:48:04.305620 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:48:04.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.307401 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:48:04.308298 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:48:04.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:04.308363 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:48:04.310463 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:48:04.320000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:48:04.320000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:48:04.321000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:48:04.321000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:48:04.321000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:48:04.318969 systemd[1]: Switching root.
Dec 13 14:48:04.339614 systemd-journald[202]: Journal stopped
Dec 13 14:48:08.242950 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:48:08.243046 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:48:08.243084 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:48:08.243104 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:48:08.243122 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:48:08.243140 kernel: SELinux: policy capability open_perms=1
Dec 13 14:48:08.243165 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:48:08.243190 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:48:08.243209 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:48:08.243238 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:48:08.243268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:48:08.243287 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:48:08.243307 systemd[1]: Successfully loaded SELinux policy in 56.783ms.
Dec 13 14:48:08.243340 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.848ms.
Dec 13 14:48:08.243365 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:48:08.243385 systemd[1]: Detected virtualization kvm.
Dec 13 14:48:08.243405 systemd[1]: Detected architecture x86-64.
Dec 13 14:48:08.243424 systemd[1]: Detected first boot.
Dec 13 14:48:08.243443 systemd[1]: Hostname set to .
Dec 13 14:48:08.243477 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:48:08.243510 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:48:08.243531 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:48:08.243552 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:48:08.243572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:48:08.243593 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:48:08.243640 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:48:08.243663 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:48:08.243682 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:48:08.243702 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:48:08.243721 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:48:08.243755 systemd[1]: Created slice system-getty.slice.
Dec 13 14:48:08.243775 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:48:08.243795 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:48:08.243826 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:48:08.243846 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:48:08.243867 systemd[1]: Created slice user.slice.
Dec 13 14:48:08.243886 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:48:08.243910 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:48:08.243931 systemd[1]: Set up automount boot.automount.
Dec 13 14:48:08.243950 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:48:08.243969 systemd[1]: Reached target integritysetup.target.
Dec 13 14:48:08.243999 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:48:08.244019 systemd[1]: Reached target remote-fs.target.
Dec 13 14:48:08.244039 systemd[1]: Reached target slices.target.
Dec 13 14:48:08.244058 systemd[1]: Reached target swap.target.
Dec 13 14:48:08.244077 systemd[1]: Reached target torcx.target.
Dec 13 14:48:08.244096 systemd[1]: Reached target veritysetup.target.
Dec 13 14:48:08.244115 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:48:08.244134 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:48:08.244162 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:48:08.244183 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:48:08.244214 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:48:08.244240 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:48:08.244265 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:48:08.244286 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:48:08.244306 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:48:08.244325 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:48:08.244345 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:48:08.244375 systemd[1]: Mounting media.mount...
Dec 13 14:48:08.244397 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:08.244416 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:48:08.244435 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:48:08.244454 systemd[1]: Mounting tmp.mount...
Dec 13 14:48:08.244474 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:48:08.244502 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:48:08.244525 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:48:08.244544 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:48:08.244574 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:48:08.244596 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:48:08.244615 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:48:08.244646 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:48:08.244666 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:48:08.244699 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:48:08.244725 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:48:08.244745 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:48:08.244763 systemd[1]: Starting systemd-journald.service...
Dec 13 14:48:08.244794 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:48:08.244816 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:48:08.244835 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:48:08.244854 kernel: fuse: init (API version 7.34)
Dec 13 14:48:08.244873 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:48:08.244893 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:08.244912 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:48:08.244931 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:48:08.244950 systemd[1]: Mounted media.mount.
Dec 13 14:48:08.244980 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:48:08.245002 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:48:08.245021 systemd[1]: Mounted tmp.mount.
Dec 13 14:48:08.245040 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:48:08.245059 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:48:08.245078 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:48:08.245098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:48:08.245121 systemd-journald[1014]: Journal started
Dec 13 14:48:08.245201 systemd-journald[1014]: Runtime Journal (/run/log/journal/d57fb89da13747809847896b2123108c) is 4.7M, max 38.1M, 33.3M free.
Dec 13 14:48:08.227000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:48:08.227000 audit[1014]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc67d73e10 a2=4000 a3=7ffc67d73eac items=0 ppid=1 pid=1014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:48:08.227000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:48:08.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.257746 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:48:08.257823 systemd[1]: Started systemd-journald.service.
Dec 13 14:48:08.257856 kernel: loop: module loaded
Dec 13 14:48:08.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.254425 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:48:08.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.254712 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:48:08.255818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:48:08.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.256179 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:48:08.257261 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:48:08.260605 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:48:08.261703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:48:08.262906 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:48:08.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.264144 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:48:08.271031 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:48:08.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.273461 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:48:08.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.274695 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:48:08.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.275927 systemd[1]: Reached target network-pre.target.
Dec 13 14:48:08.278374 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:48:08.281022 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:48:08.281743 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:48:08.286674 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:48:08.289093 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:48:08.291834 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:48:08.293477 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:48:08.297909 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:48:08.302106 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:48:08.304657 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:48:08.308080 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:48:08.309156 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:48:08.313755 systemd-journald[1014]: Time spent on flushing to /var/log/journal/d57fb89da13747809847896b2123108c is 68.160ms for 1236 entries.
Dec 13 14:48:08.313755 systemd-journald[1014]: System Journal (/var/log/journal/d57fb89da13747809847896b2123108c) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:48:08.409252 systemd-journald[1014]: Received client request to flush runtime journal.
Dec 13 14:48:08.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.317603 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:48:08.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.318381 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:48:08.361376 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:48:08.380252 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:48:08.384174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:48:08.410562 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:48:08.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.437226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:48:08.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.465921 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:48:08.468311 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:48:08.480433 udevadm[1065]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:48:08.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:08.952278 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:48:08.954864 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:48:08.982728 systemd-udevd[1067]: Using default interface naming scheme 'v252'.
Dec 13 14:48:09.017491 kernel: kauditd_printk_skb: 78 callbacks suppressed
Dec 13 14:48:09.017587 kernel: audit: type=1130 audit(1734101289.013:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.013125 systemd[1]: Started systemd-udevd.service.
Dec 13 14:48:09.016275 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:48:09.032080 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:48:09.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.094266 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:48:09.100654 kernel: audit: type=1130 audit(1734101289.094:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.127275 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:48:09.215276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:48:09.224487 systemd-networkd[1072]: lo: Link UP
Dec 13 14:48:09.224500 systemd-networkd[1072]: lo: Gained carrier
Dec 13 14:48:09.225354 systemd-networkd[1072]: Enumeration completed
Dec 13 14:48:09.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.225543 systemd[1]: Started systemd-networkd.service.
Dec 13 14:48:09.225884 systemd-networkd[1072]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:48:09.232442 kernel: audit: type=1130 audit(1734101289.225:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.235053 systemd-networkd[1072]: eth0: Link UP
Dec 13 14:48:09.235066 systemd-networkd[1072]: eth0: Gained carrier
Dec 13 14:48:09.247663 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:48:09.250661 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:48:09.250790 systemd-networkd[1072]: eth0: DHCPv4 address 10.243.72.102/30, gateway 10.243.72.101 acquired from 10.243.72.101
Dec 13 14:48:09.255663 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:48:09.326000 audit[1075]: AVC avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:48:09.361644 kernel: audit: type=1400 audit(1734101289.326:121): avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:48:09.326000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559c695d18f0 a1=337fc a2=7f6ed1051bc5 a3=5 items=110 ppid=1067 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:48:09.374655 kernel: audit: type=1300 audit(1734101289.326:121): arch=c000003e syscall=175 success=yes exit=0 a0=559c695d18f0 a1=337fc a2=7f6ed1051bc5 a3=5 items=110 ppid=1067 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:48:09.326000 audit: CWD cwd="/"
Dec 13 14:48:09.326000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.382011 kernel: audit: type=1307 audit(1734101289.326:121): cwd="/"
Dec 13 14:48:09.382073 kernel: audit: type=1302 audit(1734101289.326:121): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=1 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=2 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=3 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=4 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=5 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=6 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=7 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:48:09.326000 audit: PATH item=8 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=9 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=10 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=11 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=12 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=13 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.387694 kernel: audit: type=1302 audit(1734101289.326:121): item=1 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.387736 kernel: audit: type=1302 audit(1734101289.326:121): item=2 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.387779 kernel: audit: type=1302 audit(1734101289.326:121): item=3 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: 
PATH item=14 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=15 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=16 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=17 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=18 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=19 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=20 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=21 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=22 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=23 name=(null) inode=14754 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=24 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=25 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=26 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=27 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=28 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=29 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=30 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=31 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=32 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=33 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=34 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=35 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=36 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=37 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=38 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=39 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=40 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=41 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=42 name=(null) inode=14743 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=43 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=44 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=45 name=(null) inode=14765 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=46 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=47 name=(null) inode=14766 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=48 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=49 name=(null) inode=14767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=50 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=51 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=52 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=53 name=(null) inode=14769 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=55 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=56 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=57 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=58 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=59 name=(null) inode=14772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=60 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=61 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=62 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=63 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=64 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=65 name=(null) inode=14775 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=66 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=67 name=(null) inode=14776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=68 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:48:09.326000 audit: PATH item=69 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=70 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=71 name=(null) inode=14778 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=72 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=73 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=74 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=75 name=(null) inode=14780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=76 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=77 name=(null) inode=14781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=78 
name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=79 name=(null) inode=14782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=80 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=81 name=(null) inode=14783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=82 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=83 name=(null) inode=14784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=84 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=85 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=86 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=87 name=(null) inode=14786 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=88 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=89 name=(null) inode=14787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=90 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=91 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=92 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=93 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=94 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.398725 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:48:09.326000 audit: PATH item=95 name=(null) inode=14790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Dec 13 14:48:09.326000 audit: PATH item=96 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=97 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=98 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=99 name=(null) inode=14792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=100 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=101 name=(null) inode=14793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=102 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=103 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=104 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH 
item=105 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=106 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.399659 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:48:09.403194 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:48:09.403461 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:48:09.326000 audit: PATH item=107 name=(null) inode=14796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PATH item=109 name=(null) inode=14797 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:48:09.326000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:48:09.554393 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:48:09.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:09.557107 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:48:09.579299 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:48:09.609015 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 14:48:09.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:09.609951 systemd[1]: Reached target cryptsetup.target. Dec 13 14:48:09.612333 systemd[1]: Starting lvm2-activation.service... Dec 13 14:48:09.619668 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:48:09.645207 systemd[1]: Finished lvm2-activation.service. Dec 13 14:48:09.646049 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:48:09.646681 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:48:09.646722 systemd[1]: Reached target local-fs.target. Dec 13 14:48:09.647290 systemd[1]: Reached target machines.target. Dec 13 14:48:09.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:48:09.649829 systemd[1]: Starting ldconfig.service... Dec 13 14:48:09.651716 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:48:09.651812 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:48:09.653455 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:48:09.655414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:48:09.657969 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:48:09.660343 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:48:09.675679 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1102 (bootctl)
Dec 13 14:48:09.677975 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:48:09.686170 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:48:09.690900 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:48:09.691185 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:48:09.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.788391 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:48:09.802673 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 14:48:09.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:09.927107 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:48:09.928050 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:48:09.952884 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:48:09.975687 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 14:48:09.987897 (sd-sysext)[1119]: Using extensions 'kubernetes'.
Dec 13 14:48:09.988866 (sd-sysext)[1119]: Merged extensions into '/usr'.
Dec 13 14:48:09.998931 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:48:09.998931 systemd-fsck[1116]: /dev/vda1: 789 files, 119291/258078 clusters
Dec 13 14:48:10.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.006052 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:48:10.008700 systemd[1]: Mounting boot.mount...
Dec 13 14:48:10.044541 systemd[1]: Mounted boot.mount.
Dec 13 14:48:10.049694 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:10.052143 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:48:10.053315 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.055176 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:48:10.057839 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:48:10.060668 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:48:10.063782 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.063994 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.064211 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:10.076975 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:48:10.088465 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:48:10.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.090051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:48:10.090843 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:48:10.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.092853 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:48:10.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.096207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:48:10.096909 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:48:10.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.099178 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:48:10.100003 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:48:10.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.107034 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:48:10.108207 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:48:10.108440 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.110236 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:48:10.120341 systemd[1]: Reloading.
Dec 13 14:48:10.139601 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:48:10.147279 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:48:10.156365 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:48:10.274568 /usr/lib/systemd/system-generators/torcx-generator[1157]: time="2024-12-13T14:48:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:48:10.275201 /usr/lib/systemd/system-generators/torcx-generator[1157]: time="2024-12-13T14:48:10Z" level=info msg="torcx already run"
Dec 13 14:48:10.338230 ldconfig[1101]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:48:10.431524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:48:10.431843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:48:10.458051 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:48:10.539318 systemd[1]: Finished ldconfig.service.
Dec 13 14:48:10.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.542104 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:48:10.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.545948 systemd[1]: Starting audit-rules.service...
Dec 13 14:48:10.548706 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:48:10.551687 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:48:10.559672 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:48:10.563380 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:48:10.584497 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:48:10.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.586422 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:48:10.590000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.598524 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.601079 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:48:10.605384 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:48:10.608656 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:48:10.610818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.611100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.611387 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:48:10.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.619071 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:48:10.620380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:48:10.620639 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:48:10.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.623018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.623209 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.623341 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.623510 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:48:10.623824 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:48:10.626978 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:48:10.627186 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:48:10.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.636234 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.638082 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:48:10.640220 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:48:10.642618 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:48:10.643639 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.643834 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.647809 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:48:10.651054 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:48:10.658663 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:48:10.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.663071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:48:10.663288 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:48:10.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.665058 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:48:10.665270 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:48:10.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.667982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:48:10.668222 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:48:10.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.669451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:48:10.669702 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:48:10.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.671055 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:48:10.671173 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.673540 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:48:10.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.678308 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:48:10.691673 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:10.691704 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:48:10.697504 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:48:10.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:48:10.746000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:48:10.746000 audit[1254]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb03fa5c0 a2=420 a3=0 items=0 ppid=1213 pid=1254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:48:10.746000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:48:10.747500 augenrules[1254]: No rules
Dec 13 14:48:10.748147 systemd[1]: Finished audit-rules.service.
Dec 13 14:48:10.760230 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:48:10.761056 systemd[1]: Reached target time-set.target.
Dec 13 14:48:10.770333 systemd-resolved[1218]: Positive Trust Anchors:
Dec 13 14:48:10.770817 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:48:10.770970 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:48:10.778418 systemd-resolved[1218]: Using system hostname 'srv-997hs.gb1.brightbox.com'.
Dec 13 14:48:10.781177 systemd[1]: Started systemd-resolved.service.
Dec 13 14:48:10.782006 systemd[1]: Reached target network.target.
Dec 13 14:48:10.782617 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:48:10.783291 systemd[1]: Reached target sysinit.target.
Dec 13 14:48:10.784021 systemd[1]: Started motdgen.path.
Dec 13 14:48:10.784685 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:48:10.785645 systemd[1]: Started logrotate.timer.
Dec 13 14:48:10.786340 systemd[1]: Started mdadm.timer.
Dec 13 14:48:10.786944 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:48:10.787571 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:48:10.787653 systemd[1]: Reached target paths.target.
Dec 13 14:48:10.788210 systemd[1]: Reached target timers.target.
Dec 13 14:48:10.789226 systemd[1]: Listening on dbus.socket.
Dec 13 14:48:10.791898 systemd[1]: Starting docker.socket...
Dec 13 14:48:10.794383 systemd[1]: Listening on sshd.socket.
Dec 13 14:48:10.795226 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.795890 systemd[1]: Listening on docker.socket.
Dec 13 14:48:10.796661 systemd[1]: Reached target sockets.target.
Dec 13 14:48:10.797412 systemd[1]: Reached target basic.target.
Dec 13 14:48:10.798311 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:48:10.798391 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.798449 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:48:10.800093 systemd[1]: Starting containerd.service...
Dec 13 14:48:10.802323 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:48:10.804938 systemd[1]: Starting dbus.service...
Dec 13 14:48:10.807794 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:48:10.810968 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:48:10.812840 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:48:10.820413 systemd[1]: Starting motdgen.service...
Dec 13 14:48:10.826242 systemd[1]: Starting prepare-helm.service...
Dec 13 14:48:10.827904 jq[1268]: false
Dec 13 14:48:10.831855 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:48:10.836507 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:48:10.843685 systemd[1]: Starting systemd-logind.service...
Dec 13 14:48:10.844347 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:48:10.844489 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:48:10.850619 systemd[1]: Starting update-engine.service...
Dec 13 14:48:10.856748 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:48:10.862372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:48:10.862841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:48:10.864539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:48:10.897533 tar[1288]: linux-amd64/helm
Dec 13 14:48:10.900946 jq[1282]: true
Dec 13 14:48:10.866993 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:48:10.915080 jq[1291]: true
Dec 13 14:48:10.946865 dbus-daemon[1265]: [system] SELinux support is enabled
Dec 13 14:48:10.947515 systemd[1]: Started dbus.service.
Dec 13 14:48:10.950706 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:48:10.950763 systemd[1]: Reached target system-config.target.
Dec 13 14:48:10.951402 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:48:10.951449 systemd[1]: Reached target user-config.target.
Dec 13 14:48:10.953660 extend-filesystems[1269]: Found loop1
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda1
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda2
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda3
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found usr
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda4
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda6
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda7
Dec 13 14:48:10.954637 extend-filesystems[1269]: Found vda9
Dec 13 14:48:10.954637 extend-filesystems[1269]: Checking size of /dev/vda9
Dec 13 14:48:10.959606 dbus-daemon[1265]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1072 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:48:10.964693 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:48:10.960317 dbus-daemon[1265]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:48:10.972125 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:48:10.972446 systemd[1]: Finished motdgen.service.
Dec 13 14:48:10.993969 systemd-networkd[1072]: eth0: Gained IPv6LL
Dec 13 14:48:10.997909 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:48:10.998756 systemd[1]: Reached target network-online.target.
Dec 13 14:48:11.001324 systemd[1]: Starting kubelet.service...
Dec 13 14:48:11.523138 systemd-timesyncd[1219]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org).
Dec 13 14:48:11.523210 systemd-timesyncd[1219]: Initial clock synchronization to Fri 2024-12-13 14:48:11.522953 UTC.
Dec 13 14:48:11.523441 systemd-resolved[1218]: Clock change detected. Flushing caches.
Dec 13 14:48:11.548324 update_engine[1281]: I1213 14:48:11.547730 1281 main.cc:92] Flatcar Update Engine starting
Dec 13 14:48:11.564363 update_engine[1281]: I1213 14:48:11.557924 1281 update_check_scheduler.cc:74] Next update check in 9m2s
Dec 13 14:48:11.564474 extend-filesystems[1269]: Resized partition /dev/vda9
Dec 13 14:48:11.557806 systemd[1]: Started update-engine.service.
Dec 13 14:48:11.568768 bash[1324]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:48:11.561167 systemd[1]: Started locksmithd.service.
Dec 13 14:48:11.567146 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:48:11.569631 extend-filesystems[1331]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:48:11.640443 env[1295]: time="2024-12-13T14:48:11.639723359Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:48:11.650341 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 14:48:11.653931 systemd-logind[1280]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 14:48:11.657043 systemd-logind[1280]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:48:11.657459 systemd-logind[1280]: New seat seat0.
Dec 13 14:48:11.659686 systemd[1]: Started systemd-logind.service.
Dec 13 14:48:11.803782 env[1295]: time="2024-12-13T14:48:11.803620802Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:48:11.804010 env[1295]: time="2024-12-13T14:48:11.803980177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.809574 env[1295]: time="2024-12-13T14:48:11.809522352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:48:11.809651 env[1295]: time="2024-12-13T14:48:11.809572417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.810158 env[1295]: time="2024-12-13T14:48:11.810075314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:48:11.810258 env[1295]: time="2024-12-13T14:48:11.810155969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.810258 env[1295]: time="2024-12-13T14:48:11.810225845Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:48:11.810374 env[1295]: time="2024-12-13T14:48:11.810255894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.810574 env[1295]: time="2024-12-13T14:48:11.810534700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.814026 env[1295]: time="2024-12-13T14:48:11.813957165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:48:11.816589 env[1295]: time="2024-12-13T14:48:11.816490533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:48:11.816787 env[1295]: time="2024-12-13T14:48:11.816742967Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:48:11.817349 env[1295]: time="2024-12-13T14:48:11.817060039Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:48:11.817435 env[1295]: time="2024-12-13T14:48:11.817357561Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:48:11.840255 env[1295]: time="2024-12-13T14:48:11.840200504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:48:11.840395 env[1295]: time="2024-12-13T14:48:11.840266902Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:48:11.840395 env[1295]: time="2024-12-13T14:48:11.840310930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:48:11.840493 env[1295]: time="2024-12-13T14:48:11.840411770Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840564 env[1295]: time="2024-12-13T14:48:11.840503931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840564 env[1295]: time="2024-12-13T14:48:11.840539254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840660 env[1295]: time="2024-12-13T14:48:11.840568367Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840660 env[1295]: time="2024-12-13T14:48:11.840598380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840660 env[1295]: time="2024-12-13T14:48:11.840626574Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840778 env[1295]: time="2024-12-13T14:48:11.840656943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840778 env[1295]: time="2024-12-13T14:48:11.840687039Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.840778 env[1295]: time="2024-12-13T14:48:11.840716492Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:48:11.840953 env[1295]: time="2024-12-13T14:48:11.840895108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:48:11.841130 env[1295]: time="2024-12-13T14:48:11.841101705Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:48:11.841736 env[1295]: time="2024-12-13T14:48:11.841705343Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:48:11.841818 env[1295]: time="2024-12-13T14:48:11.841770343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.841818 env[1295]: time="2024-12-13T14:48:11.841800254Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:48:11.841951 env[1295]: time="2024-12-13T14:48:11.841927401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842003 env[1295]: time="2024-12-13T14:48:11.841954932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842003 env[1295]: time="2024-12-13T14:48:11.841984539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842146 env[1295]: time="2024-12-13T14:48:11.842008901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842146 env[1295]: time="2024-12-13T14:48:11.842061597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842146 env[1295]: time="2024-12-13T14:48:11.842126732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842313 env[1295]: time="2024-12-13T14:48:11.842165368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842313 env[1295]: time="2024-12-13T14:48:11.842187282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842313 env[1295]: time="2024-12-13T14:48:11.842214325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:48:11.842488 env[1295]: time="2024-12-13T14:48:11.842454025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842488 env[1295]: time="2024-12-13T14:48:11.842480952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842613 env[1295]: time="2024-12-13T14:48:11.842499268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.842613 env[1295]: time="2024-12-13T14:48:11.842516581Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:48:11.842613 env[1295]: time="2024-12-13T14:48:11.842552663Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:48:11.842613 env[1295]: time="2024-12-13T14:48:11.842572937Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:48:11.842813 env[1295]: time="2024-12-13T14:48:11.842639348Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:48:11.842813 env[1295]: time="2024-12-13T14:48:11.842736212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:48:11.843224 env[1295]: time="2024-12-13T14:48:11.843139624Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:48:11.846066 env[1295]: time="2024-12-13T14:48:11.843246246Z" level=info msg="Connect containerd service"
Dec 13 14:48:11.846066 env[1295]: time="2024-12-13T14:48:11.843690730Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:48:11.849759 env[1295]: time="2024-12-13T14:48:11.849722362Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850062872Z" level=info msg="Start subscribing containerd event"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850165107Z" level=info msg="Start recovering state"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850344429Z" level=info msg="Start event monitor"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850410818Z" level=info msg="Start snapshots syncer"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850450733Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:48:11.851145 env[1295]: time="2024-12-13T14:48:11.850487873Z" level=info msg="Start streaming server"
Dec 13 14:48:11.852527 env[1295]: time="2024-12-13T14:48:11.852497525Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:48:11.853818 env[1295]: time="2024-12-13T14:48:11.852586005Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:48:11.857470 systemd[1]: Started containerd.service.
Dec 13 14:48:11.858970 env[1295]: time="2024-12-13T14:48:11.858789406Z" level=info msg="containerd successfully booted in 0.239279s"
Dec 13 14:48:11.861315 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 14:48:11.877353 dbus-daemon[1265]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:48:11.880262 dbus-daemon[1265]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1321 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:48:11.887563 extend-filesystems[1331]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:48:11.887563 extend-filesystems[1331]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 14:48:11.887563 extend-filesystems[1331]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 14:48:11.877544 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:48:11.899443 extend-filesystems[1269]: Resized filesystem in /dev/vda9
Dec 13 14:48:11.882967 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:48:11.883314 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:48:11.886634 systemd[1]: Starting polkit.service...
Dec 13 14:48:11.908879 polkitd[1343]: Started polkitd version 121
Dec 13 14:48:11.935244 polkitd[1343]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:48:11.935348 polkitd[1343]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:48:11.938715 polkitd[1343]: Finished loading, compiling and executing 2 rules
Dec 13 14:48:11.940009 dbus-daemon[1265]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:48:11.940257 systemd[1]: Started polkit.service.
Dec 13 14:48:11.941177 polkitd[1343]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:48:11.975360 systemd-hostnamed[1321]: Hostname set to (static)
Dec 13 14:48:12.088422 systemd-networkd[1072]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d219:24:19ff:fef3:4866/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d219:24:19ff:fef3:4866/64 assigned by NDisc.
Dec 13 14:48:12.088434 systemd-networkd[1072]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 14:48:12.337920 tar[1288]: linux-amd64/LICENSE
Dec 13 14:48:12.338127 tar[1288]: linux-amd64/README.md
Dec 13 14:48:12.344376 systemd[1]: Finished prepare-helm.service.
Dec 13 14:48:12.448010 locksmithd[1332]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:48:12.623192 sshd_keygen[1300]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:48:12.654694 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:48:12.657824 systemd[1]: Starting issuegen.service...
Dec 13 14:48:12.666161 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:48:12.666473 systemd[1]: Finished issuegen.service.
Dec 13 14:48:12.669694 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:48:12.680899 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:48:12.683927 systemd[1]: Started getty@tty1.service.
Dec 13 14:48:12.689787 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:48:12.692148 systemd[1]: Reached target getty.target.
Dec 13 14:48:12.715818 systemd[1]: Started kubelet.service.
Dec 13 14:48:13.494505 kubelet[1380]: E1213 14:48:13.494406 1380 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:48:13.497306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:48:13.497601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:48:18.487337 coreos-metadata[1264]: Dec 13 14:48:18.487 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 14:48:18.535988 coreos-metadata[1264]: Dec 13 14:48:18.535 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 14:48:18.561397 coreos-metadata[1264]: Dec 13 14:48:18.561 INFO Fetch successful
Dec 13 14:48:18.561784 coreos-metadata[1264]: Dec 13 14:48:18.561 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:48:18.600397 coreos-metadata[1264]: Dec 13 14:48:18.600 INFO Fetch successful
Dec 13 14:48:18.602499 unknown[1264]: wrote ssh authorized keys file for user: core
Dec 13 14:48:18.616878 update-ssh-keys[1391]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:48:18.617525 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:48:18.618078 systemd[1]: Reached target multi-user.target.
Dec 13 14:48:18.620433 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:48:18.631889 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:48:18.632222 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:48:18.632459 systemd[1]: Startup finished in 7.941s (kernel) + 13.626s (userspace) = 21.568s.
Dec 13 14:48:21.304994 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:48:21.307354 systemd[1]: Started sshd@0-10.243.72.102:22-139.178.68.195:38410.service.
Dec 13 14:48:22.218176 sshd[1397]: Accepted publickey for core from 139.178.68.195 port 38410 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:48:22.221672 sshd[1397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:22.237318 systemd[1]: Created slice user-500.slice.
Dec 13 14:48:22.239206 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:48:22.243573 systemd-logind[1280]: New session 1 of user core.
Dec 13 14:48:22.254872 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:48:22.257097 systemd[1]: Starting user@500.service...
Dec 13 14:48:22.266149 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:22.371746 systemd[1402]: Queued start job for default target default.target.
Dec 13 14:48:22.372082 systemd[1402]: Reached target paths.target.
Dec 13 14:48:22.372107 systemd[1402]: Reached target sockets.target.
Dec 13 14:48:22.372128 systemd[1402]: Reached target timers.target.
Dec 13 14:48:22.372146 systemd[1402]: Reached target basic.target.
Dec 13 14:48:22.372211 systemd[1402]: Reached target default.target.
Dec 13 14:48:22.372273 systemd[1402]: Startup finished in 97ms.
Dec 13 14:48:22.372989 systemd[1]: Started user@500.service.
Dec 13 14:48:22.374608 systemd[1]: Started session-1.scope.
Dec 13 14:48:22.997670 systemd[1]: Started sshd@1-10.243.72.102:22-139.178.68.195:38424.service.
Dec 13 14:48:23.581088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:48:23.581397 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:23.583706 systemd[1]: Starting kubelet.service...
Dec 13 14:48:23.735342 systemd[1]: Started kubelet.service.
Dec 13 14:48:23.843244 kubelet[1421]: E1213 14:48:23.842857 1421 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:48:23.847128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:48:23.847418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:48:23.877343 sshd[1411]: Accepted publickey for core from 139.178.68.195 port 38424 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:48:23.878866 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:23.885393 systemd-logind[1280]: New session 2 of user core.
Dec 13 14:48:23.886093 systemd[1]: Started session-2.scope.
Dec 13 14:48:24.493136 sshd[1411]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:24.497195 systemd[1]: sshd@1-10.243.72.102:22-139.178.68.195:38424.service: Deactivated successfully.
Dec 13 14:48:24.498445 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:48:24.500467 systemd-logind[1280]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:48:24.502024 systemd-logind[1280]: Removed session 2.
Dec 13 14:48:24.638954 systemd[1]: Started sshd@2-10.243.72.102:22-139.178.68.195:38434.service.
Dec 13 14:48:25.521445 sshd[1433]: Accepted publickey for core from 139.178.68.195 port 38434 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:48:25.524336 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:25.532070 systemd[1]: Started session-3.scope.
Dec 13 14:48:25.532807 systemd-logind[1280]: New session 3 of user core.
Dec 13 14:48:26.133631 sshd[1433]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:26.138176 systemd-logind[1280]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:48:26.138557 systemd[1]: sshd@2-10.243.72.102:22-139.178.68.195:38434.service: Deactivated successfully.
Dec 13 14:48:26.139657 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:48:26.140333 systemd-logind[1280]: Removed session 3.
Dec 13 14:48:26.279055 systemd[1]: Started sshd@3-10.243.72.102:22-139.178.68.195:60782.service.
Dec 13 14:48:27.166996 sshd[1440]: Accepted publickey for core from 139.178.68.195 port 60782 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:48:27.170172 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:27.179162 systemd-logind[1280]: New session 4 of user core.
Dec 13 14:48:27.179545 systemd[1]: Started session-4.scope.
Dec 13 14:48:27.788463 sshd[1440]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:27.792471 systemd[1]: sshd@3-10.243.72.102:22-139.178.68.195:60782.service: Deactivated successfully.
Dec 13 14:48:27.793917 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:48:27.793953 systemd-logind[1280]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:48:27.795328 systemd-logind[1280]: Removed session 4.
Dec 13 14:48:27.933432 systemd[1]: Started sshd@4-10.243.72.102:22-139.178.68.195:60798.service.
Dec 13 14:48:28.819624 sshd[1447]: Accepted publickey for core from 139.178.68.195 port 60798 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:48:28.824902 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:48:28.831825 systemd-logind[1280]: New session 5 of user core.
Dec 13 14:48:28.832653 systemd[1]: Started session-5.scope.
Dec 13 14:48:29.309113 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:48:29.309498 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:48:29.361113 systemd[1]: Starting docker.service...
Dec 13 14:48:29.445627 env[1461]: time="2024-12-13T14:48:29.445493334Z" level=info msg="Starting up"
Dec 13 14:48:29.449010 env[1461]: time="2024-12-13T14:48:29.448974966Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:48:29.449010 env[1461]: time="2024-12-13T14:48:29.449008136Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:48:29.449162 env[1461]: time="2024-12-13T14:48:29.449039617Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:48:29.449162 env[1461]: time="2024-12-13T14:48:29.449063250Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:48:29.460157 env[1461]: time="2024-12-13T14:48:29.460107149Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:48:29.460417 env[1461]: time="2024-12-13T14:48:29.460389430Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:48:29.460552 env[1461]: time="2024-12-13T14:48:29.460519321Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:48:29.460714 env[1461]: time="2024-12-13T14:48:29.460686450Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:48:29.631330 env[1461]: time="2024-12-13T14:48:29.631252223Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Dec 13 14:48:29.631697 env[1461]: time="2024-12-13T14:48:29.631668502Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Dec 13 14:48:29.632166 env[1461]: time="2024-12-13T14:48:29.632131711Z" level=info msg="Loading containers: start."
Dec 13 14:48:29.811339 kernel: Initializing XFRM netlink socket
Dec 13 14:48:29.859196 env[1461]: time="2024-12-13T14:48:29.859053163Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:48:29.950508 systemd-networkd[1072]: docker0: Link UP
Dec 13 14:48:29.992209 env[1461]: time="2024-12-13T14:48:29.992143788Z" level=info msg="Loading containers: done."
Dec 13 14:48:30.015273 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3323671947-merged.mount: Deactivated successfully.
Dec 13 14:48:30.020733 env[1461]: time="2024-12-13T14:48:30.020679260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:48:30.021250 env[1461]: time="2024-12-13T14:48:30.021219330Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:48:30.021590 env[1461]: time="2024-12-13T14:48:30.021551964Z" level=info msg="Daemon has completed initialization"
Dec 13 14:48:30.048504 systemd[1]: Started docker.service.
Dec 13 14:48:30.057548 env[1461]: time="2024-12-13T14:48:30.057225260Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:48:31.985473 env[1295]: time="2024-12-13T14:48:31.985341368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 14:48:32.865523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899535997.mount: Deactivated successfully.
Dec 13 14:48:34.081066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:48:34.081408 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:34.083843 systemd[1]: Starting kubelet.service...
Dec 13 14:48:34.230807 systemd[1]: Started kubelet.service.
Dec 13 14:48:34.363482 kubelet[1601]: E1213 14:48:34.356280 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:48:34.358658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:48:34.358921 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:48:35.362239 env[1295]: time="2024-12-13T14:48:35.362178739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:35.365365 env[1295]: time="2024-12-13T14:48:35.365330787Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:35.367478 env[1295]: time="2024-12-13T14:48:35.367447219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:35.369603 env[1295]: time="2024-12-13T14:48:35.369570682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:35.370748 env[1295]: time="2024-12-13T14:48:35.370693112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 14:48:35.383834 env[1295]: time="2024-12-13T14:48:35.383773966Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 14:48:38.233575 env[1295]: time="2024-12-13T14:48:38.233510374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:38.235465 env[1295]: time="2024-12-13T14:48:38.235410986Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:38.237765 env[1295]: time="2024-12-13T14:48:38.237725954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:38.240049 env[1295]: time="2024-12-13T14:48:38.240016662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:38.241142 env[1295]: time="2024-12-13T14:48:38.241106093Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 14:48:38.254005 env[1295]: time="2024-12-13T14:48:38.253949062Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 14:48:39.968463 env[1295]: time="2024-12-13T14:48:39.968388570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:39.970918 env[1295]: time="2024-12-13T14:48:39.970886210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:39.973137 env[1295]: time="2024-12-13T14:48:39.973098855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:39.975444 env[1295]: time="2024-12-13T14:48:39.975400662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:39.976579 env[1295]: time="2024-12-13T14:48:39.976543124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 14:48:39.992986 env[1295]: time="2024-12-13T14:48:39.992921005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:48:41.555309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219531454.mount: Deactivated successfully.
Dec 13 14:48:42.107693 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:48:42.460844 env[1295]: time="2024-12-13T14:48:42.460316611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:42.463609 env[1295]: time="2024-12-13T14:48:42.463572751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:42.466551 env[1295]: time="2024-12-13T14:48:42.466490032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:42.468025 env[1295]: time="2024-12-13T14:48:42.467984283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:42.468949 env[1295]: time="2024-12-13T14:48:42.468908904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 14:48:42.483020 env[1295]: time="2024-12-13T14:48:42.482947273Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:48:43.089074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847378040.mount: Deactivated successfully.
Dec 13 14:48:44.513122 env[1295]: time="2024-12-13T14:48:44.512967587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:44.515603 env[1295]: time="2024-12-13T14:48:44.515563382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:44.518032 env[1295]: time="2024-12-13T14:48:44.517997936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:44.520194 env[1295]: time="2024-12-13T14:48:44.520160468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:44.521684 env[1295]: time="2024-12-13T14:48:44.521630504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:48:44.536886 env[1295]: time="2024-12-13T14:48:44.536820892Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:48:44.581189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:48:44.581606 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:44.584432 systemd[1]: Starting kubelet.service...
Dec 13 14:48:44.709780 systemd[1]: Started kubelet.service.
Dec 13 14:48:44.772265 kubelet[1647]: E1213 14:48:44.771472 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:48:44.774813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:48:44.775100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:48:45.219506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429629595.mount: Deactivated successfully.
Dec 13 14:48:45.224073 env[1295]: time="2024-12-13T14:48:45.224019672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:45.226804 env[1295]: time="2024-12-13T14:48:45.226772091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:45.230060 env[1295]: time="2024-12-13T14:48:45.229999685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:45.233022 env[1295]: time="2024-12-13T14:48:45.232980839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:45.235188 env[1295]: time="2024-12-13T14:48:45.235137888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 14:48:45.248985 env[1295]: time="2024-12-13T14:48:45.248930627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 14:48:45.930588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838061992.mount: Deactivated successfully.
Dec 13 14:48:49.547283 env[1295]: time="2024-12-13T14:48:49.546593544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:49.550969 env[1295]: time="2024-12-13T14:48:49.550927759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:49.555754 env[1295]: time="2024-12-13T14:48:49.555693291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:49.558265 env[1295]: time="2024-12-13T14:48:49.558230784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:49.559364 env[1295]: time="2024-12-13T14:48:49.559330566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 14:48:54.831324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 14:48:54.831718 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:54.835485 systemd[1]: Starting kubelet.service...
Dec 13 14:48:55.322485 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:48:55.322638 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:48:55.323046 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:55.326917 systemd[1]: Starting kubelet.service...
Dec 13 14:48:55.358016 systemd[1]: Reloading.
Dec 13 14:48:55.496164 /usr/lib/systemd/system-generators/torcx-generator[1752]: time="2024-12-13T14:48:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:48:55.498951 /usr/lib/systemd/system-generators/torcx-generator[1752]: time="2024-12-13T14:48:55Z" level=info msg="torcx already run"
Dec 13 14:48:55.620412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:48:55.620459 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:48:55.647418 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:48:55.790713 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:48:55.790855 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:48:55.791379 systemd[1]: Stopped kubelet.service.
Dec 13 14:48:55.795033 systemd[1]: Starting kubelet.service...
Dec 13 14:48:56.001706 systemd[1]: Started kubelet.service.
Dec 13 14:48:56.102705 kubelet[1818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:48:56.102705 kubelet[1818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:48:56.102705 kubelet[1818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:48:56.103601 kubelet[1818]: I1213 14:48:56.102828 1818 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:48:56.661800 kubelet[1818]: I1213 14:48:56.658486 1818 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:48:56.661800 kubelet[1818]: I1213 14:48:56.658531 1818 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:48:56.661800 kubelet[1818]: I1213 14:48:56.658895 1818 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:48:56.697491 kubelet[1818]: E1213 14:48:56.697440 1818 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.72.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.706045 kubelet[1818]: I1213 14:48:56.705998 1818 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:48:56.721051 kubelet[1818]: I1213 14:48:56.721012 1818 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:48:56.722354 kubelet[1818]: I1213 14:48:56.722321 1818 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:48:56.722629 kubelet[1818]: I1213 14:48:56.722593 1818 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:48:56.722856 kubelet[1818]: I1213 14:48:56.722653 1818 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:48:56.722856 kubelet[1818]: I1213 14:48:56.722673 1818 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:48:56.723000 kubelet[1818]: I1213 14:48:56.722879 1818 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:48:56.723107 kubelet[1818]: I1213 14:48:56.723086 1818 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:48:56.723225 kubelet[1818]: I1213 14:48:56.723120 1818 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:48:56.723225 kubelet[1818]: I1213 14:48:56.723199 1818 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:48:56.723377 kubelet[1818]: I1213 14:48:56.723245 1818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:48:56.725139 kubelet[1818]: W1213 14:48:56.724900 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.243.72.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.725139 kubelet[1818]: E1213 14:48:56.724989 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.243.72.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.725139 kubelet[1818]: W1213 14:48:56.725078 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.243.72.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-997hs.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.725139 kubelet[1818]: E1213 14:48:56.725135 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.72.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-997hs.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.725445 kubelet[1818]: I1213 14:48:56.725238 1818 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:48:56.728457 kubelet[1818]: I1213 14:48:56.728413 1818 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:48:56.728542 kubelet[1818]: W1213 14:48:56.728531 1818 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:48:56.729954 kubelet[1818]: I1213 14:48:56.729768 1818 server.go:1256] "Started kubelet"
Dec 13 14:48:56.735507 kubelet[1818]: I1213 14:48:56.735482 1818 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:48:56.736969 kubelet[1818]: I1213 14:48:56.736946 1818 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:48:56.738461 kubelet[1818]: I1213 14:48:56.738429 1818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:48:56.738829 kubelet[1818]: I1213 14:48:56.738801 1818 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:48:56.740510 kubelet[1818]: E1213 14:48:56.740451 1818 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.72.102:6443/api/v1/namespaces/default/events\": dial tcp 10.243.72.102:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-997hs.gb1.brightbox.com.1810c3fa644e2550 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-997hs.gb1.brightbox.com,UID:srv-997hs.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-997hs.gb1.brightbox.com,},FirstTimestamp:2024-12-13 14:48:56.72972424 +0000 UTC m=+0.716076685,LastTimestamp:2024-12-13 14:48:56.72972424 +0000 UTC m=+0.716076685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-997hs.gb1.brightbox.com,}"
Dec 13 14:48:56.748108 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:48:56.751626 kubelet[1818]: I1213 14:48:56.747124 1818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:48:56.758559 kubelet[1818]: E1213 14:48:56.758508 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"srv-997hs.gb1.brightbox.com\" not found"
Dec 13 14:48:56.759122 kubelet[1818]: I1213 14:48:56.759099 1818 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:48:56.759554 kubelet[1818]: I1213 14:48:56.759531 1818 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:48:56.759822 kubelet[1818]: I1213 14:48:56.759787 1818 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:48:56.760691 kubelet[1818]: W1213 14:48:56.760643 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.243.72.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.760852 kubelet[1818]: E1213 14:48:56.760829 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.72.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.761396 kubelet[1818]: E1213 14:48:56.761362 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-997hs.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.102:6443: connect: connection refused" interval="200ms"
Dec 13 14:48:56.766099 kubelet[1818]: I1213 14:48:56.766066 1818 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:48:56.766397 kubelet[1818]: I1213 14:48:56.766372 1818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:48:56.770282 kubelet[1818]: I1213 14:48:56.770253 1818 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:48:56.823611 kubelet[1818]: I1213 14:48:56.823566 1818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:48:56.826015 kubelet[1818]: I1213 14:48:56.825985 1818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:48:56.826202 kubelet[1818]: I1213 14:48:56.826172 1818 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:48:56.826395 kubelet[1818]: I1213 14:48:56.826364 1818 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:48:56.826631 kubelet[1818]: E1213 14:48:56.826610 1818 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:48:56.830635 kubelet[1818]: W1213 14:48:56.830592 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.243.72.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.830731 kubelet[1818]: E1213 14:48:56.830657 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.72.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:56.850313 kubelet[1818]: I1213 14:48:56.850265 1818 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:48:56.850492 kubelet[1818]: I1213 14:48:56.850470 1818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:48:56.850654 kubelet[1818]: I1213 14:48:56.850633 1818 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:48:56.852758 kubelet[1818]: I1213 14:48:56.852735 1818 policy_none.go:49] "None policy: Start"
Dec 13 14:48:56.853878 kubelet[1818]: I1213 14:48:56.853851 1818 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:48:56.853976 kubelet[1818]: I1213 14:48:56.853897 1818 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:48:56.863905 kubelet[1818]: I1213 14:48:56.863876 1818 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.864551 kubelet[1818]: E1213 14:48:56.864527 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.72.102:6443/api/v1/nodes\": dial tcp 10.243.72.102:6443: connect: connection refused" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.865878 kubelet[1818]: I1213 14:48:56.865852 1818 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:48:56.866210 kubelet[1818]: I1213 14:48:56.866186 1818 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:48:56.874260 kubelet[1818]: E1213 14:48:56.874225 1818 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-997hs.gb1.brightbox.com\" not found"
Dec 13 14:48:56.928037 kubelet[1818]: I1213 14:48:56.927775 1818 topology_manager.go:215] "Topology Admit Handler" podUID="1049da98cbdbbea43a750169e2e5a59d" podNamespace="kube-system" podName="kube-apiserver-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.932803 kubelet[1818]: I1213 14:48:56.932760 1818 topology_manager.go:215] "Topology Admit Handler" podUID="8c5b9af39bb1f3eaff1860bd630fffa7" podNamespace="kube-system" podName="kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.935054 kubelet[1818]: I1213 14:48:56.935009 1818 topology_manager.go:215] "Topology Admit Handler" podUID="9b59f3048dc6de3f3ff5b652437ce3cd" podNamespace="kube-system" podName="kube-scheduler-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.962723 kubelet[1818]: I1213 14:48:56.962404 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.963175 kubelet[1818]: I1213 14:48:56.962984 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-flexvolume-dir\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.963373 kubelet[1818]: I1213 14:48:56.963349 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-k8s-certs\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.963574 kubelet[1818]: I1213 14:48:56.963543 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.963768 kubelet[1818]: I1213 14:48:56.963727 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b59f3048dc6de3f3ff5b652437ce3cd-kubeconfig\") pod \"kube-scheduler-srv-997hs.gb1.brightbox.com\" (UID: \"9b59f3048dc6de3f3ff5b652437ce3cd\") " pod="kube-system/kube-scheduler-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.963938 kubelet[1818]: I1213 14:48:56.963918 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-ca-certs\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.964137 kubelet[1818]: I1213 14:48:56.964106 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-k8s-certs\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.964316 kubelet[1818]: I1213 14:48:56.964274 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-ca-certs\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.964505 kubelet[1818]: I1213 14:48:56.964469 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-kubeconfig\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com"
Dec 13 14:48:56.964670 kubelet[1818]: E1213 14:48:56.963466 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-997hs.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.102:6443: connect: connection refused" interval="400ms"
Dec 13 14:48:57.068910 kubelet[1818]: I1213 14:48:57.068839 1818 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:57.069505 kubelet[1818]: E1213 14:48:57.069454 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.72.102:6443/api/v1/nodes\": dial tcp 10.243.72.102:6443: connect: connection refused" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:57.080371 update_engine[1281]: I1213 14:48:57.079610 1281 update_attempter.cc:509] Updating boot flags...
Dec 13 14:48:57.248751 env[1295]: time="2024-12-13T14:48:57.248535782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-997hs.gb1.brightbox.com,Uid:1049da98cbdbbea43a750169e2e5a59d,Namespace:kube-system,Attempt:0,}"
Dec 13 14:48:57.253700 env[1295]: time="2024-12-13T14:48:57.253198134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-997hs.gb1.brightbox.com,Uid:9b59f3048dc6de3f3ff5b652437ce3cd,Namespace:kube-system,Attempt:0,}"
Dec 13 14:48:57.253700 env[1295]: time="2024-12-13T14:48:57.253479475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-997hs.gb1.brightbox.com,Uid:8c5b9af39bb1f3eaff1860bd630fffa7,Namespace:kube-system,Attempt:0,}"
Dec 13 14:48:57.267998 kubelet[1818]: E1213 14:48:57.267953 1818 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.72.102:6443/api/v1/namespaces/default/events\": dial tcp 10.243.72.102:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-997hs.gb1.brightbox.com.1810c3fa644e2550 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-997hs.gb1.brightbox.com,UID:srv-997hs.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-997hs.gb1.brightbox.com,},FirstTimestamp:2024-12-13 14:48:56.72972424 +0000 UTC m=+0.716076685,LastTimestamp:2024-12-13 14:48:56.72972424 +0000 UTC m=+0.716076685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-997hs.gb1.brightbox.com,}"
Dec 13 14:48:57.365955 kubelet[1818]: E1213 14:48:57.365887 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-997hs.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.102:6443: connect: connection refused" interval="800ms"
Dec 13 14:48:57.473834 kubelet[1818]: I1213 14:48:57.473348 1818 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:57.473834 kubelet[1818]: E1213 14:48:57.473799 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.72.102:6443/api/v1/nodes\": dial tcp 10.243.72.102:6443: connect: connection refused" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:57.698095 kubelet[1818]: W1213 14:48:57.697927 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.243.72.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-997hs.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:57.698095 kubelet[1818]: E1213 14:48:57.698045 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.243.72.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-997hs.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:57.873043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586592171.mount: Deactivated successfully.
Dec 13 14:48:57.881125 env[1295]: time="2024-12-13T14:48:57.881076386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.882461 env[1295]: time="2024-12-13T14:48:57.882428669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.884023 env[1295]: time="2024-12-13T14:48:57.883990443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.886027 env[1295]: time="2024-12-13T14:48:57.885993782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.888311 env[1295]: time="2024-12-13T14:48:57.888247904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.890222 env[1295]: time="2024-12-13T14:48:57.890188759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.894263 env[1295]: time="2024-12-13T14:48:57.894216658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.895797 env[1295]: time="2024-12-13T14:48:57.895761377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.898378 env[1295]: time="2024-12-13T14:48:57.898338144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.899270 env[1295]: time="2024-12-13T14:48:57.899236509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.900758 env[1295]: time="2024-12-13T14:48:57.900722786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.902460 env[1295]: time="2024-12-13T14:48:57.902399105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:48:57.946405 env[1295]: time="2024-12-13T14:48:57.946265489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:48:57.946405 env[1295]: time="2024-12-13T14:48:57.946361490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:48:57.946779 env[1295]: time="2024-12-13T14:48:57.946383213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:48:57.946920 env[1295]: time="2024-12-13T14:48:57.946263766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:48:57.947780 env[1295]: time="2024-12-13T14:48:57.947724816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:48:57.947989 env[1295]: time="2024-12-13T14:48:57.947938063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:48:57.948483 env[1295]: time="2024-12-13T14:48:57.948348534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2d4c26832d0c0aeb53802a9859ac8f35710838c9fbfb88952a346e3da9b3d02 pid=1879 runtime=io.containerd.runc.v2
Dec 13 14:48:57.950392 env[1295]: time="2024-12-13T14:48:57.950329577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dce13b3179ea63466a34ca7730a8ffaf4fa553c3c023f243751a2b61ab8502dd pid=1885 runtime=io.containerd.runc.v2
Dec 13 14:48:57.963526 env[1295]: time="2024-12-13T14:48:57.963394280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:48:57.963889 env[1295]: time="2024-12-13T14:48:57.963820244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:48:57.964078 env[1295]: time="2024-12-13T14:48:57.964027477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:48:57.964554 env[1295]: time="2024-12-13T14:48:57.964497246Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed6b5450cef3fa747cb53fe8a6e72ba3f7df0fa555c6f1abc097cfaf1eec71af pid=1913 runtime=io.containerd.runc.v2
Dec 13 14:48:57.992506 kubelet[1818]: W1213 14:48:57.992401 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.243.72.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:57.992827 kubelet[1818]: E1213 14:48:57.992800 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.243.72.102:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:58.099326 env[1295]: time="2024-12-13T14:48:58.095262299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-997hs.gb1.brightbox.com,Uid:8c5b9af39bb1f3eaff1860bd630fffa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dce13b3179ea63466a34ca7730a8ffaf4fa553c3c023f243751a2b61ab8502dd\""
Dec 13 14:48:58.116325 env[1295]: time="2024-12-13T14:48:58.115258831Z" level=info msg="CreateContainer within sandbox \"dce13b3179ea63466a34ca7730a8ffaf4fa553c3c023f243751a2b61ab8502dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:48:58.120091 env[1295]: time="2024-12-13T14:48:58.120053864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-997hs.gb1.brightbox.com,Uid:1049da98cbdbbea43a750169e2e5a59d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2d4c26832d0c0aeb53802a9859ac8f35710838c9fbfb88952a346e3da9b3d02\""
Dec 13 14:48:58.127489 env[1295]: time="2024-12-13T14:48:58.127449544Z" level=info msg="CreateContainer within sandbox \"e2d4c26832d0c0aeb53802a9859ac8f35710838c9fbfb88952a346e3da9b3d02\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:48:58.145084 env[1295]: time="2024-12-13T14:48:58.145010612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-997hs.gb1.brightbox.com,Uid:9b59f3048dc6de3f3ff5b652437ce3cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed6b5450cef3fa747cb53fe8a6e72ba3f7df0fa555c6f1abc097cfaf1eec71af\""
Dec 13 14:48:58.148500 env[1295]: time="2024-12-13T14:48:58.148461085Z" level=info msg="CreateContainer within sandbox \"ed6b5450cef3fa747cb53fe8a6e72ba3f7df0fa555c6f1abc097cfaf1eec71af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:48:58.166804 env[1295]: time="2024-12-13T14:48:58.166702554Z" level=info msg="CreateContainer within sandbox \"e2d4c26832d0c0aeb53802a9859ac8f35710838c9fbfb88952a346e3da9b3d02\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d803eb40757db67f638253406b28d45fe87af1ac9aa5fdefc237fc2384c300f8\""
Dec 13 14:48:58.171723 kubelet[1818]: W1213 14:48:58.168990 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.243.72.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:58.171937 kubelet[1818]: E1213 14:48:58.171798 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.243.72.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:58.173566 kubelet[1818]: E1213 14:48:58.173536 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-997hs.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.102:6443: connect: connection refused" interval="1.6s"
Dec 13 14:48:58.174771 env[1295]: time="2024-12-13T14:48:58.174729364Z" level=info msg="StartContainer for \"d803eb40757db67f638253406b28d45fe87af1ac9aa5fdefc237fc2384c300f8\""
Dec 13 14:48:58.176011 env[1295]: time="2024-12-13T14:48:58.175966461Z" level=info msg="CreateContainer within sandbox \"dce13b3179ea63466a34ca7730a8ffaf4fa553c3c023f243751a2b61ab8502dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd652a17c9240e48a6fcbcf649ae8bd5ad1334e62381bec41698682c279a5970\""
Dec 13 14:48:58.177451 env[1295]: time="2024-12-13T14:48:58.177403847Z" level=info msg="StartContainer for \"cd652a17c9240e48a6fcbcf649ae8bd5ad1334e62381bec41698682c279a5970\""
Dec 13 14:48:58.181346 env[1295]: time="2024-12-13T14:48:58.181282233Z" level=info msg="CreateContainer within sandbox \"ed6b5450cef3fa747cb53fe8a6e72ba3f7df0fa555c6f1abc097cfaf1eec71af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a83cc0ebe4a24dc2ee134b86be5b39f1a11278354329864439d2473b62ef571\""
Dec 13 14:48:58.181837 env[1295]: time="2024-12-13T14:48:58.181800607Z" level=info msg="StartContainer for \"7a83cc0ebe4a24dc2ee134b86be5b39f1a11278354329864439d2473b62ef571\""
Dec 13 14:48:58.282032 kubelet[1818]: I1213 14:48:58.281001 1818 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:58.282032 kubelet[1818]: E1213 14:48:58.281604 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.243.72.102:6443/api/v1/nodes\": dial tcp 10.243.72.102:6443: connect: connection refused" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:48:58.291485 kubelet[1818]: W1213 14:48:58.291375 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.243.72.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:58.291636 kubelet[1818]: E1213 14:48:58.291495 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.243.72.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:58.348079 env[1295]: time="2024-12-13T14:48:58.348008560Z" level=info msg="StartContainer for \"cd652a17c9240e48a6fcbcf649ae8bd5ad1334e62381bec41698682c279a5970\" returns successfully"
Dec 13 14:48:58.364863 env[1295]: time="2024-12-13T14:48:58.364791777Z" level=info msg="StartContainer for \"d803eb40757db67f638253406b28d45fe87af1ac9aa5fdefc237fc2384c300f8\" returns successfully"
Dec 13 14:48:58.396757 env[1295]: time="2024-12-13T14:48:58.396682089Z" level=info msg="StartContainer for \"7a83cc0ebe4a24dc2ee134b86be5b39f1a11278354329864439d2473b62ef571\" returns successfully"
Dec 13 14:48:58.713630 kubelet[1818]: E1213 14:48:58.713581 1818 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.243.72.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.243.72.102:6443: connect: connection refused
Dec 13 14:48:59.884522 kubelet[1818]: I1213 14:48:59.884478 1818 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:49:01.678168 kubelet[1818]: E1213 14:49:01.678060 1818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-997hs.gb1.brightbox.com\" not found" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:49:01.726881 kubelet[1818]: I1213 14:49:01.726808 1818 kubelet_node_status.go:76] "Successfully registered node" node="srv-997hs.gb1.brightbox.com"
Dec 13 14:49:01.727851 kubelet[1818]: I1213 14:49:01.727697 1818 apiserver.go:52] "Watching apiserver"
Dec 13 14:49:01.760602 kubelet[1818]: I1213 14:49:01.760536 1818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:49:04.697742 systemd[1]: Reloading.
Dec 13 14:49:04.809733 /usr/lib/systemd/system-generators/torcx-generator[2132]: time="2024-12-13T14:49:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:49:04.809786 /usr/lib/systemd/system-generators/torcx-generator[2132]: time="2024-12-13T14:49:04Z" level=info msg="torcx already run"
Dec 13 14:49:04.928378 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:49:04.928409 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:49:04.961062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:49:05.102489 kubelet[1818]: I1213 14:49:05.102173 1818 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:49:05.102859 systemd[1]: Stopping kubelet.service...
Dec 13 14:49:05.119573 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:49:05.120011 systemd[1]: Stopped kubelet.service.
Dec 13 14:49:05.128170 systemd[1]: Starting kubelet.service...
Dec 13 14:49:06.210234 systemd[1]: Started kubelet.service. Dec 13 14:49:06.331359 sudo[2203]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:49:06.331768 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:49:06.368457 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:49:06.368457 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:49:06.368457 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:49:06.369314 kubelet[2191]: I1213 14:49:06.368589 2191 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:49:06.380025 kubelet[2191]: I1213 14:49:06.379946 2191 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:49:06.380025 kubelet[2191]: I1213 14:49:06.380000 2191 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:49:06.380415 kubelet[2191]: I1213 14:49:06.380323 2191 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:49:06.385566 kubelet[2191]: I1213 14:49:06.385529 2191 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 14:49:06.393760 kubelet[2191]: I1213 14:49:06.393733 2191 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:49:06.408244 kubelet[2191]: I1213 14:49:06.408210 2191 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:49:06.409025 kubelet[2191]: I1213 14:49:06.408978 2191 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:49:06.409297 kubelet[2191]: I1213 14:49:06.409242 2191 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Dec 13 14:49:06.409509 kubelet[2191]: I1213 14:49:06.409342 2191 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:49:06.409509 kubelet[2191]: I1213 14:49:06.409363 2191 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:49:06.409509 kubelet[2191]: I1213 14:49:06.409433 2191 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:49:06.409692 kubelet[2191]: I1213 14:49:06.409598 2191 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:49:06.415443 kubelet[2191]: I1213 14:49:06.415411 2191 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:49:06.415663 kubelet[2191]: I1213 14:49:06.415639 2191 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:49:06.415844 kubelet[2191]: I1213 14:49:06.415821 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:49:06.422741 kubelet[2191]: I1213 14:49:06.419657 2191 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:49:06.422741 kubelet[2191]: I1213 14:49:06.420047 2191 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:49:06.422741 kubelet[2191]: I1213 14:49:06.420977 2191 server.go:1256] "Started kubelet" Dec 13 14:49:06.424425 kubelet[2191]: I1213 14:49:06.424164 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:49:06.435043 kubelet[2191]: I1213 14:49:06.431920 2191 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:49:06.435043 kubelet[2191]: I1213 14:49:06.433160 2191 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:49:06.435208 kubelet[2191]: I1213 14:49:06.435139 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:49:06.435639 kubelet[2191]: I1213 14:49:06.435443 2191 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:49:06.468701 kubelet[2191]: I1213 14:49:06.468502 2191 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:49:06.468976 kubelet[2191]: I1213 14:49:06.468760 2191 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:49:06.469106 kubelet[2191]: I1213 14:49:06.469068 2191 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:49:06.474941 kubelet[2191]: I1213 14:49:06.474889 2191 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:49:06.475081 kubelet[2191]: I1213 14:49:06.475049 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:49:06.475977 kubelet[2191]: E1213 14:49:06.475950 2191 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:49:06.486521 kubelet[2191]: I1213 14:49:06.486455 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:49:06.495499 kubelet[2191]: I1213 14:49:06.495470 2191 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:49:06.498411 kubelet[2191]: I1213 14:49:06.498355 2191 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:49:06.498502 kubelet[2191]: I1213 14:49:06.498433 2191 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:49:06.498502 kubelet[2191]: I1213 14:49:06.498469 2191 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:49:06.498626 kubelet[2191]: E1213 14:49:06.498546 2191 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:49:06.595015 kubelet[2191]: I1213 14:49:06.594973 2191 kubelet_node_status.go:73] "Attempting to register node" node="srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.599427 kubelet[2191]: E1213 14:49:06.599014 2191 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:49:06.610231 kubelet[2191]: I1213 14:49:06.610202 2191 kubelet_node_status.go:112] "Node was previously registered" node="srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.610410 kubelet[2191]: I1213 14:49:06.610368 2191 kubelet_node_status.go:76] "Successfully registered node" node="srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.664527 kubelet[2191]: I1213 14:49:06.664480 2191 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:49:06.664527 kubelet[2191]: I1213 14:49:06.664518 2191 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:49:06.664795 kubelet[2191]: I1213 14:49:06.664554 2191 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:49:06.664872 kubelet[2191]: I1213 14:49:06.664811 2191 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:49:06.664872 kubelet[2191]: I1213 14:49:06.664869 2191 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:49:06.664989 kubelet[2191]: I1213 14:49:06.664889 2191 policy_none.go:49] "None policy: Start" Dec 13 14:49:06.666396 kubelet[2191]: I1213 14:49:06.666370 2191 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 
14:49:06.666481 kubelet[2191]: I1213 14:49:06.666412 2191 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:49:06.667879 kubelet[2191]: I1213 14:49:06.667852 2191 state_mem.go:75] "Updated machine memory state" Dec 13 14:49:06.673201 kubelet[2191]: I1213 14:49:06.673163 2191 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:49:06.674744 kubelet[2191]: I1213 14:49:06.674707 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:49:06.805114 kubelet[2191]: I1213 14:49:06.804979 2191 topology_manager.go:215] "Topology Admit Handler" podUID="9b59f3048dc6de3f3ff5b652437ce3cd" podNamespace="kube-system" podName="kube-scheduler-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.805418 kubelet[2191]: I1213 14:49:06.805148 2191 topology_manager.go:215] "Topology Admit Handler" podUID="1049da98cbdbbea43a750169e2e5a59d" podNamespace="kube-system" podName="kube-apiserver-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.805418 kubelet[2191]: I1213 14:49:06.805219 2191 topology_manager.go:215] "Topology Admit Handler" podUID="8c5b9af39bb1f3eaff1860bd630fffa7" podNamespace="kube-system" podName="kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.875315 kubelet[2191]: W1213 14:49:06.875246 2191 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:49:06.875571 kubelet[2191]: W1213 14:49:06.875359 2191 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:49:06.875970 kubelet[2191]: I1213 14:49:06.875536 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876080 kubelet[2191]: I1213 14:49:06.876004 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-ca-certs\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876147 kubelet[2191]: W1213 14:49:06.876128 2191 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:49:06.876607 kubelet[2191]: I1213 14:49:06.876578 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876701 kubelet[2191]: I1213 14:49:06.876686 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b59f3048dc6de3f3ff5b652437ce3cd-kubeconfig\") pod \"kube-scheduler-srv-997hs.gb1.brightbox.com\" (UID: \"9b59f3048dc6de3f3ff5b652437ce3cd\") " pod="kube-system/kube-scheduler-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876801 kubelet[2191]: I1213 14:49:06.876779 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-k8s-certs\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876898 kubelet[2191]: I1213 14:49:06.876842 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-k8s-certs\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.876967 kubelet[2191]: I1213 14:49:06.876930 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-kubeconfig\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.877063 kubelet[2191]: I1213 14:49:06.876965 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1049da98cbdbbea43a750169e2e5a59d-ca-certs\") pod \"kube-apiserver-srv-997hs.gb1.brightbox.com\" (UID: \"1049da98cbdbbea43a750169e2e5a59d\") " pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com" Dec 13 14:49:06.877272 kubelet[2191]: I1213 14:49:06.877230 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c5b9af39bb1f3eaff1860bd630fffa7-flexvolume-dir\") pod \"kube-controller-manager-srv-997hs.gb1.brightbox.com\" (UID: \"8c5b9af39bb1f3eaff1860bd630fffa7\") " pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" Dec 13 14:49:07.252918 sudo[2203]: 
pam_unix(sudo:session): session closed for user root Dec 13 14:49:07.436208 kubelet[2191]: I1213 14:49:07.436157 2191 apiserver.go:52] "Watching apiserver" Dec 13 14:49:07.469135 kubelet[2191]: I1213 14:49:07.469065 2191 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:49:07.570762 kubelet[2191]: W1213 14:49:07.570615 2191 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:49:07.570762 kubelet[2191]: E1213 14:49:07.570711 2191 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-997hs.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com" Dec 13 14:49:07.677468 kubelet[2191]: I1213 14:49:07.677419 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-997hs.gb1.brightbox.com" podStartSLOduration=1.677349972 podStartE2EDuration="1.677349972s" podCreationTimestamp="2024-12-13 14:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:07.661156826 +0000 UTC m=+1.419738171" watchObservedRunningTime="2024-12-13 14:49:07.677349972 +0000 UTC m=+1.435931293" Dec 13 14:49:07.686989 kubelet[2191]: I1213 14:49:07.686947 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-997hs.gb1.brightbox.com" podStartSLOduration=1.686900814 podStartE2EDuration="1.686900814s" podCreationTimestamp="2024-12-13 14:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:07.686193301 +0000 UTC m=+1.444774628" watchObservedRunningTime="2024-12-13 14:49:07.686900814 +0000 UTC m=+1.445482139" Dec 13 14:49:07.687272 kubelet[2191]: I1213 
14:49:07.687246 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-997hs.gb1.brightbox.com" podStartSLOduration=1.687219943 podStartE2EDuration="1.687219943s" podCreationTimestamp="2024-12-13 14:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:07.678023635 +0000 UTC m=+1.436604976" watchObservedRunningTime="2024-12-13 14:49:07.687219943 +0000 UTC m=+1.445801270" Dec 13 14:49:09.051665 sudo[1451]: pam_unix(sudo:session): session closed for user root Dec 13 14:49:09.196496 sshd[1447]: pam_unix(sshd:session): session closed for user core Dec 13 14:49:09.202282 systemd[1]: sshd@4-10.243.72.102:22-139.178.68.195:60798.service: Deactivated successfully. Dec 13 14:49:09.203965 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:49:09.204527 systemd-logind[1280]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:49:09.206927 systemd-logind[1280]: Removed session 5. Dec 13 14:49:17.135940 kubelet[2191]: I1213 14:49:17.135875 2191 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:49:17.137659 env[1295]: time="2024-12-13T14:49:17.137588510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:49:17.138398 kubelet[2191]: I1213 14:49:17.138374 2191 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:49:17.884235 kubelet[2191]: I1213 14:49:17.884181 2191 topology_manager.go:215] "Topology Admit Handler" podUID="2cce8acd-4431-4adf-a7e1-ed9f779e64b7" podNamespace="kube-system" podName="kube-proxy-b52ph" Dec 13 14:49:17.892582 kubelet[2191]: I1213 14:49:17.892525 2191 topology_manager.go:215] "Topology Admit Handler" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" podNamespace="kube-system" podName="cilium-jvh2t" Dec 13 14:49:17.950932 kubelet[2191]: I1213 14:49:17.950867 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-config-path\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.951406 kubelet[2191]: I1213 14:49:17.951382 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-net\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.951598 kubelet[2191]: I1213 14:49:17.951556 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htn6b\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-kube-api-access-htn6b\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.951787 kubelet[2191]: I1213 14:49:17.951757 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cce8acd-4431-4adf-a7e1-ed9f779e64b7-kube-proxy\") 
pod \"kube-proxy-b52ph\" (UID: \"2cce8acd-4431-4adf-a7e1-ed9f779e64b7\") " pod="kube-system/kube-proxy-b52ph" Dec 13 14:49:17.951973 kubelet[2191]: I1213 14:49:17.951943 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cce8acd-4431-4adf-a7e1-ed9f779e64b7-lib-modules\") pod \"kube-proxy-b52ph\" (UID: \"2cce8acd-4431-4adf-a7e1-ed9f779e64b7\") " pod="kube-system/kube-proxy-b52ph" Dec 13 14:49:17.952187 kubelet[2191]: I1213 14:49:17.952141 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f9f849c-c56c-4722-a128-babd68cd3e87-clustermesh-secrets\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.952382 kubelet[2191]: I1213 14:49:17.952346 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-hubble-tls\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.952569 kubelet[2191]: I1213 14:49:17.952539 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swrb9\" (UniqueName: \"kubernetes.io/projected/2cce8acd-4431-4adf-a7e1-ed9f779e64b7-kube-api-access-swrb9\") pod \"kube-proxy-b52ph\" (UID: \"2cce8acd-4431-4adf-a7e1-ed9f779e64b7\") " pod="kube-system/kube-proxy-b52ph" Dec 13 14:49:17.952780 kubelet[2191]: I1213 14:49:17.952758 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-lib-modules\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " 
pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.952955 kubelet[2191]: I1213 14:49:17.952925 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-kernel\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.953144 kubelet[2191]: I1213 14:49:17.953108 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cce8acd-4431-4adf-a7e1-ed9f779e64b7-xtables-lock\") pod \"kube-proxy-b52ph\" (UID: \"2cce8acd-4431-4adf-a7e1-ed9f779e64b7\") " pod="kube-system/kube-proxy-b52ph" Dec 13 14:49:17.953350 kubelet[2191]: I1213 14:49:17.953329 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cni-path\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.953535 kubelet[2191]: I1213 14:49:17.953505 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-xtables-lock\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.953731 kubelet[2191]: I1213 14:49:17.953687 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-hostproc\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t" Dec 13 14:49:17.953896 kubelet[2191]: I1213 14:49:17.953865 2191 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-cgroup\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t"
Dec 13 14:49:17.954072 kubelet[2191]: I1213 14:49:17.954052 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-run\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t"
Dec 13 14:49:17.954264 kubelet[2191]: I1213 14:49:17.954243 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-bpf-maps\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t"
Dec 13 14:49:17.954444 kubelet[2191]: I1213 14:49:17.954423 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-etc-cni-netd\") pod \"cilium-jvh2t\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") " pod="kube-system/cilium-jvh2t"
Dec 13 14:49:18.192736 env[1295]: time="2024-12-13T14:49:18.192542088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b52ph,Uid:2cce8acd-4431-4adf-a7e1-ed9f779e64b7,Namespace:kube-system,Attempt:0,}"
Dec 13 14:49:18.207564 env[1295]: time="2024-12-13T14:49:18.207501659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvh2t,Uid:5f9f849c-c56c-4722-a128-babd68cd3e87,Namespace:kube-system,Attempt:0,}"
Dec 13 14:49:18.228123 kubelet[2191]: I1213 14:49:18.228062 2191 topology_manager.go:215] "Topology Admit Handler" podUID="92f88151-6c32-4d22-b365-80a6aef05be4" podNamespace="kube-system" podName="cilium-operator-5cc964979-pqk97"
Dec 13 14:49:18.258030 kubelet[2191]: I1213 14:49:18.257957 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92f88151-6c32-4d22-b365-80a6aef05be4-cilium-config-path\") pod \"cilium-operator-5cc964979-pqk97\" (UID: \"92f88151-6c32-4d22-b365-80a6aef05be4\") " pod="kube-system/cilium-operator-5cc964979-pqk97"
Dec 13 14:49:18.258030 kubelet[2191]: I1213 14:49:18.258050 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22pcm\" (UniqueName: \"kubernetes.io/projected/92f88151-6c32-4d22-b365-80a6aef05be4-kube-api-access-22pcm\") pod \"cilium-operator-5cc964979-pqk97\" (UID: \"92f88151-6c32-4d22-b365-80a6aef05be4\") " pod="kube-system/cilium-operator-5cc964979-pqk97"
Dec 13 14:49:18.327386 env[1295]: time="2024-12-13T14:49:18.327274693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:49:18.327386 env[1295]: time="2024-12-13T14:49:18.327349062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:49:18.327866 env[1295]: time="2024-12-13T14:49:18.327805554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:49:18.329242 env[1295]: time="2024-12-13T14:49:18.328579475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:49:18.329242 env[1295]: time="2024-12-13T14:49:18.328617178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:49:18.329242 env[1295]: time="2024-12-13T14:49:18.328657784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:49:18.329242 env[1295]: time="2024-12-13T14:49:18.328945214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b35b419639e90972ee9bb3a4786c5c5c7a0086cc70417cb910f882d69b1b4fd pid=2286 runtime=io.containerd.runc.v2
Dec 13 14:49:18.329242 env[1295]: time="2024-12-13T14:49:18.328244456Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693 pid=2276 runtime=io.containerd.runc.v2
Dec 13 14:49:18.429758 env[1295]: time="2024-12-13T14:49:18.429670838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvh2t,Uid:5f9f849c-c56c-4722-a128-babd68cd3e87,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\""
Dec 13 14:49:18.439605 env[1295]: time="2024-12-13T14:49:18.438231022Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:49:18.446624 env[1295]: time="2024-12-13T14:49:18.445985219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b52ph,Uid:2cce8acd-4431-4adf-a7e1-ed9f779e64b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b35b419639e90972ee9bb3a4786c5c5c7a0086cc70417cb910f882d69b1b4fd\""
Dec 13 14:49:18.451640 env[1295]: time="2024-12-13T14:49:18.451101261Z" level=info msg="CreateContainer within sandbox \"4b35b419639e90972ee9bb3a4786c5c5c7a0086cc70417cb910f882d69b1b4fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:49:18.476256 env[1295]: time="2024-12-13T14:49:18.476199504Z" level=info msg="CreateContainer within sandbox \"4b35b419639e90972ee9bb3a4786c5c5c7a0086cc70417cb910f882d69b1b4fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe545d0515e024b25cae43d3bc1e43ca12a03218b369cb677808704dcd4aa358\""
Dec 13 14:49:18.479473 env[1295]: time="2024-12-13T14:49:18.479437063Z" level=info msg="StartContainer for \"fe545d0515e024b25cae43d3bc1e43ca12a03218b369cb677808704dcd4aa358\""
Dec 13 14:49:18.539010 env[1295]: time="2024-12-13T14:49:18.538950522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pqk97,Uid:92f88151-6c32-4d22-b365-80a6aef05be4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:49:18.563758 env[1295]: time="2024-12-13T14:49:18.563649464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:49:18.563758 env[1295]: time="2024-12-13T14:49:18.563714539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:49:18.564119 env[1295]: time="2024-12-13T14:49:18.563731923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:49:18.564465 env[1295]: time="2024-12-13T14:49:18.564375020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364 pid=2394 runtime=io.containerd.runc.v2
Dec 13 14:49:18.572024 env[1295]: time="2024-12-13T14:49:18.571952130Z" level=info msg="StartContainer for \"fe545d0515e024b25cae43d3bc1e43ca12a03218b369cb677808704dcd4aa358\" returns successfully"
Dec 13 14:49:18.690327 env[1295]: time="2024-12-13T14:49:18.689325418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pqk97,Uid:92f88151-6c32-4d22-b365-80a6aef05be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\""
Dec 13 14:49:26.423101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878523638.mount: Deactivated successfully.
Dec 13 14:49:26.537163 kubelet[2191]: I1213 14:49:26.537105 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-b52ph" podStartSLOduration=9.53703756 podStartE2EDuration="9.53703756s" podCreationTimestamp="2024-12-13 14:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:18.598538555 +0000 UTC m=+12.357119891" watchObservedRunningTime="2024-12-13 14:49:26.53703756 +0000 UTC m=+20.295618888"
Dec 13 14:49:30.987701 env[1295]: time="2024-12-13T14:49:30.987616746Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:30.990774 env[1295]: time="2024-12-13T14:49:30.990721151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:30.992680 env[1295]: time="2024-12-13T14:49:30.992617122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:30.993557 env[1295]: time="2024-12-13T14:49:30.993506833Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:49:30.998724 env[1295]: time="2024-12-13T14:49:30.998122641Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:49:30.999244 env[1295]: time="2024-12-13T14:49:30.999206255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:49:31.012590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174774035.mount: Deactivated successfully.
Dec 13 14:49:31.024914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676478974.mount: Deactivated successfully.
Dec 13 14:49:31.032101 env[1295]: time="2024-12-13T14:49:31.032026798Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\""
Dec 13 14:49:31.033113 env[1295]: time="2024-12-13T14:49:31.033079316Z" level=info msg="StartContainer for \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\""
Dec 13 14:49:31.238946 env[1295]: time="2024-12-13T14:49:31.238183608Z" level=info msg="StartContainer for \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\" returns successfully"
Dec 13 14:49:31.283228 env[1295]: time="2024-12-13T14:49:31.283142154Z" level=info msg="shim disconnected" id=bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51
Dec 13 14:49:31.283582 env[1295]: time="2024-12-13T14:49:31.283549169Z" level=warning msg="cleaning up after shim disconnected" id=bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51 namespace=k8s.io
Dec 13 14:49:31.284798 env[1295]: time="2024-12-13T14:49:31.283678822Z" level=info msg="cleaning up dead shim"
Dec 13 14:49:31.297198 env[1295]: time="2024-12-13T14:49:31.297130802Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2605 runtime=io.containerd.runc.v2\n"
Dec 13 14:49:31.751755 env[1295]: time="2024-12-13T14:49:31.751681762Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:49:31.781732 env[1295]: time="2024-12-13T14:49:31.781583672Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\""
Dec 13 14:49:31.783259 env[1295]: time="2024-12-13T14:49:31.782823477Z" level=info msg="StartContainer for \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\""
Dec 13 14:49:31.862615 env[1295]: time="2024-12-13T14:49:31.861486546Z" level=info msg="StartContainer for \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\" returns successfully"
Dec 13 14:49:31.873506 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:49:31.874012 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:49:31.877613 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:49:31.881044 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:49:31.910261 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:49:31.950753 env[1295]: time="2024-12-13T14:49:31.950674434Z" level=info msg="shim disconnected" id=66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4
Dec 13 14:49:31.950753 env[1295]: time="2024-12-13T14:49:31.950747955Z" level=warning msg="cleaning up after shim disconnected" id=66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4 namespace=k8s.io
Dec 13 14:49:31.950753 env[1295]: time="2024-12-13T14:49:31.950764838Z" level=info msg="cleaning up dead shim"
Dec 13 14:49:31.964039 env[1295]: time="2024-12-13T14:49:31.963977417Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:49:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2671 runtime=io.containerd.runc.v2\n"
Dec 13 14:49:32.008258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51-rootfs.mount: Deactivated successfully.
Dec 13 14:49:32.754270 env[1295]: time="2024-12-13T14:49:32.754197774Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:49:32.777006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340670992.mount: Deactivated successfully.
Dec 13 14:49:32.790969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141162417.mount: Deactivated successfully.
Dec 13 14:49:32.798802 env[1295]: time="2024-12-13T14:49:32.798739833Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\""
Dec 13 14:49:32.801914 env[1295]: time="2024-12-13T14:49:32.801877649Z" level=info msg="StartContainer for \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\""
Dec 13 14:49:32.884144 env[1295]: time="2024-12-13T14:49:32.884075451Z" level=info msg="StartContainer for \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\" returns successfully"
Dec 13 14:49:32.920565 env[1295]: time="2024-12-13T14:49:32.920506303Z" level=info msg="shim disconnected" id=b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105
Dec 13 14:49:32.921146 env[1295]: time="2024-12-13T14:49:32.920876766Z" level=warning msg="cleaning up after shim disconnected" id=b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105 namespace=k8s.io
Dec 13 14:49:32.921276 env[1295]: time="2024-12-13T14:49:32.921247986Z" level=info msg="cleaning up dead shim"
Dec 13 14:49:32.933361 env[1295]: time="2024-12-13T14:49:32.933325734Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:49:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2730 runtime=io.containerd.runc.v2\n"
Dec 13 14:49:33.752880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525735090.mount: Deactivated successfully.
Dec 13 14:49:33.856340 env[1295]: time="2024-12-13T14:49:33.856223195Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:49:33.921723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490796786.mount: Deactivated successfully.
Dec 13 14:49:33.925484 env[1295]: time="2024-12-13T14:49:33.925428906Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\""
Dec 13 14:49:33.928930 env[1295]: time="2024-12-13T14:49:33.928881129Z" level=info msg="StartContainer for \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\""
Dec 13 14:49:34.024107 env[1295]: time="2024-12-13T14:49:34.023710475Z" level=info msg="StartContainer for \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\" returns successfully"
Dec 13 14:49:34.058873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916-rootfs.mount: Deactivated successfully.
Dec 13 14:49:34.084010 env[1295]: time="2024-12-13T14:49:34.083907686Z" level=info msg="shim disconnected" id=89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916
Dec 13 14:49:34.084010 env[1295]: time="2024-12-13T14:49:34.084004292Z" level=warning msg="cleaning up after shim disconnected" id=89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916 namespace=k8s.io
Dec 13 14:49:34.084010 env[1295]: time="2024-12-13T14:49:34.084022681Z" level=info msg="cleaning up dead shim"
Dec 13 14:49:34.098742 env[1295]: time="2024-12-13T14:49:34.098687985Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:49:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2786 runtime=io.containerd.runc.v2\n"
Dec 13 14:49:34.809535 env[1295]: time="2024-12-13T14:49:34.809466207Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:49:34.831024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98654151.mount: Deactivated successfully.
Dec 13 14:49:34.843535 env[1295]: time="2024-12-13T14:49:34.843472293Z" level=info msg="CreateContainer within sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\""
Dec 13 14:49:34.846593 env[1295]: time="2024-12-13T14:49:34.844679819Z" level=info msg="StartContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\""
Dec 13 14:49:34.953543 env[1295]: time="2024-12-13T14:49:34.953456362Z" level=info msg="StartContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" returns successfully"
Dec 13 14:49:35.008609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062703645.mount: Deactivated successfully.
Dec 13 14:49:35.037241 systemd[1]: run-containerd-runc-k8s.io-1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad-runc.57TZHE.mount: Deactivated successfully.
Dec 13 14:49:35.119039 env[1295]: time="2024-12-13T14:49:35.118968031Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:35.121109 env[1295]: time="2024-12-13T14:49:35.121073851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:35.125623 env[1295]: time="2024-12-13T14:49:35.125568100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:49:35.126404 env[1295]: time="2024-12-13T14:49:35.126338883Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:49:35.133269 env[1295]: time="2024-12-13T14:49:35.133214473Z" level=info msg="CreateContainer within sandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:49:35.307819 env[1295]: time="2024-12-13T14:49:35.307588255Z" level=info msg="CreateContainer within sandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\""
Dec 13 14:49:35.310922 env[1295]: time="2024-12-13T14:49:35.310869957Z" level=info msg="StartContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\""
Dec 13 14:49:35.413135 env[1295]: time="2024-12-13T14:49:35.412686329Z" level=info msg="StartContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" returns successfully"
Dec 13 14:49:35.427645 kubelet[2191]: I1213 14:49:35.427596 2191 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:49:35.476539 kubelet[2191]: I1213 14:49:35.476495 2191 topology_manager.go:215] "Topology Admit Handler" podUID="accf6e4b-d154-4919-a76f-1d720c90fb90" podNamespace="kube-system" podName="coredns-76f75df574-t684j"
Dec 13 14:49:35.482826 kubelet[2191]: I1213 14:49:35.482796 2191 topology_manager.go:215] "Topology Admit Handler" podUID="7eb444bc-4847-4b06-bab1-ffb7330d187d" podNamespace="kube-system" podName="coredns-76f75df574-l27qk"
Dec 13 14:49:35.498335 kubelet[2191]: I1213 14:49:35.498231 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/accf6e4b-d154-4919-a76f-1d720c90fb90-config-volume\") pod \"coredns-76f75df574-t684j\" (UID: \"accf6e4b-d154-4919-a76f-1d720c90fb90\") " pod="kube-system/coredns-76f75df574-t684j"
Dec 13 14:49:35.499955 kubelet[2191]: I1213 14:49:35.499929 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscvw\" (UniqueName: \"kubernetes.io/projected/7eb444bc-4847-4b06-bab1-ffb7330d187d-kube-api-access-rscvw\") pod \"coredns-76f75df574-l27qk\" (UID: \"7eb444bc-4847-4b06-bab1-ffb7330d187d\") " pod="kube-system/coredns-76f75df574-l27qk"
Dec 13 14:49:35.500110 kubelet[2191]: I1213 14:49:35.500086 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlht7\" (UniqueName: \"kubernetes.io/projected/accf6e4b-d154-4919-a76f-1d720c90fb90-kube-api-access-zlht7\") pod \"coredns-76f75df574-t684j\" (UID: \"accf6e4b-d154-4919-a76f-1d720c90fb90\") " pod="kube-system/coredns-76f75df574-t684j"
Dec 13 14:49:35.501210 kubelet[2191]: I1213 14:49:35.501184 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7eb444bc-4847-4b06-bab1-ffb7330d187d-config-volume\") pod \"coredns-76f75df574-l27qk\" (UID: \"7eb444bc-4847-4b06-bab1-ffb7330d187d\") " pod="kube-system/coredns-76f75df574-l27qk"
Dec 13 14:49:35.784534 env[1295]: time="2024-12-13T14:49:35.784371384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t684j,Uid:accf6e4b-d154-4919-a76f-1d720c90fb90,Namespace:kube-system,Attempt:0,}"
Dec 13 14:49:35.800644 env[1295]: time="2024-12-13T14:49:35.800074758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l27qk,Uid:7eb444bc-4847-4b06-bab1-ffb7330d187d,Namespace:kube-system,Attempt:0,}"
Dec 13 14:49:35.878420 kubelet[2191]: I1213 14:49:35.878236 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pqk97" podStartSLOduration=1.443331887 podStartE2EDuration="17.878175058s" podCreationTimestamp="2024-12-13 14:49:18 +0000 UTC" firstStartedPulling="2024-12-13 14:49:18.691937213 +0000 UTC m=+12.450518534" lastFinishedPulling="2024-12-13 14:49:35.126780378 +0000 UTC m=+28.885361705" observedRunningTime="2024-12-13 14:49:35.817692579 +0000 UTC m=+29.576273923" watchObservedRunningTime="2024-12-13 14:49:35.878175058 +0000 UTC m=+29.636756381"
Dec 13 14:49:36.014125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997245786.mount: Deactivated successfully.
Dec 13 14:49:36.480190 systemd[1]: Started sshd@5-10.243.72.102:22-218.92.0.235:14018.service.
Dec 13 14:49:39.819367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:49:39.823332 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:49:39.824811 systemd-networkd[1072]: cilium_host: Link UP
Dec 13 14:49:39.825052 systemd-networkd[1072]: cilium_net: Link UP
Dec 13 14:49:39.826170 systemd-networkd[1072]: cilium_net: Gained carrier
Dec 13 14:49:39.828183 systemd-networkd[1072]: cilium_host: Gained carrier
Dec 13 14:49:39.829008 systemd-networkd[1072]: cilium_net: Gained IPv6LL
Dec 13 14:49:39.829921 systemd-networkd[1072]: cilium_host: Gained IPv6LL
Dec 13 14:49:39.991135 systemd-networkd[1072]: cilium_vxlan: Link UP
Dec 13 14:49:39.991146 systemd-networkd[1072]: cilium_vxlan: Gained carrier
Dec 13 14:49:40.610811 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:49:40.806612 sshd[2984]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:49:41.487629 systemd-networkd[1072]: cilium_vxlan: Gained IPv6LL
Dec 13 14:49:41.721432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:49:41.755682 systemd-networkd[1072]: lxc_health: Link UP
Dec 13 14:49:41.761345 systemd-networkd[1072]: lxc_health: Gained carrier
Dec 13 14:49:42.270101 kubelet[2191]: I1213 14:49:42.270027 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jvh2t" podStartSLOduration=12.712517756 podStartE2EDuration="25.269836763s" podCreationTimestamp="2024-12-13 14:49:17 +0000 UTC" firstStartedPulling="2024-12-13 14:49:18.4368892 +0000 UTC m=+12.195470526" lastFinishedPulling="2024-12-13 14:49:30.994208207 +0000 UTC m=+24.752789533" observedRunningTime="2024-12-13 14:49:35.87814781 +0000 UTC m=+29.636729139" watchObservedRunningTime="2024-12-13 14:49:42.269836763 +0000 UTC m=+36.028418084"
Dec 13 14:49:42.412820 systemd-networkd[1072]: lxcc6a6fb9d2c44: Link UP
Dec 13 14:49:42.430678 kernel: eth0: renamed from tmpde07c
Dec 13 14:49:42.443998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc6a6fb9d2c44: link becomes ready
Dec 13 14:49:42.443243 systemd-networkd[1072]: lxcc6a6fb9d2c44: Gained carrier
Dec 13 14:49:42.515969 systemd-networkd[1072]: lxcfee452946bf0: Link UP
Dec 13 14:49:42.529324 kernel: eth0: renamed from tmp7d981
Dec 13 14:49:42.535375 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfee452946bf0: link becomes ready
Dec 13 14:49:42.534956 systemd-networkd[1072]: lxcfee452946bf0: Gained carrier
Dec 13 14:49:43.339773 sshd[2984]: Failed password for root from 218.92.0.235 port 14018 ssh2
Dec 13 14:49:43.535642 systemd-networkd[1072]: lxc_health: Gained IPv6LL
Dec 13 14:49:44.175842 systemd-networkd[1072]: lxcfee452946bf0: Gained IPv6LL
Dec 13 14:49:44.431510 systemd-networkd[1072]: lxcc6a6fb9d2c44: Gained IPv6LL
Dec 13 14:49:46.224656 sshd[2984]: Failed password for root from 218.92.0.235 port 14018 ssh2
Dec 13 14:49:48.047323 env[1295]: time="2024-12-13T14:49:48.046615934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:49:48.047323 env[1295]: time="2024-12-13T14:49:48.046757056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:49:48.047323 env[1295]: time="2024-12-13T14:49:48.046804652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:49:48.047323 env[1295]: time="2024-12-13T14:49:48.047149642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d98161f350f12d7a1f923ffdaf2d95b9553107725f0fac4495cf77584d50205 pid=3377 runtime=io.containerd.runc.v2
Dec 13 14:49:48.082320 env[1295]: time="2024-12-13T14:49:48.080341236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:49:48.082320 env[1295]: time="2024-12-13T14:49:48.080555569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:49:48.082320 env[1295]: time="2024-12-13T14:49:48.080604160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:49:48.082320 env[1295]: time="2024-12-13T14:49:48.081340855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de07ca3a1e168ee5776f536b8b2fbf6602892754edd404b36e474dc0d02c1aca pid=3395 runtime=io.containerd.runc.v2
Dec 13 14:49:48.138634 systemd[1]: run-containerd-runc-k8s.io-7d98161f350f12d7a1f923ffdaf2d95b9553107725f0fac4495cf77584d50205-runc.uQoTy9.mount: Deactivated successfully.
Dec 13 14:49:48.286028 env[1295]: time="2024-12-13T14:49:48.285938592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l27qk,Uid:7eb444bc-4847-4b06-bab1-ffb7330d187d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d98161f350f12d7a1f923ffdaf2d95b9553107725f0fac4495cf77584d50205\""
Dec 13 14:49:48.303806 env[1295]: time="2024-12-13T14:49:48.302980088Z" level=info msg="CreateContainer within sandbox \"7d98161f350f12d7a1f923ffdaf2d95b9553107725f0fac4495cf77584d50205\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:49:48.309647 env[1295]: time="2024-12-13T14:49:48.309600313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t684j,Uid:accf6e4b-d154-4919-a76f-1d720c90fb90,Namespace:kube-system,Attempt:0,} returns sandbox id \"de07ca3a1e168ee5776f536b8b2fbf6602892754edd404b36e474dc0d02c1aca\""
Dec 13 14:49:48.316707 env[1295]: time="2024-12-13T14:49:48.316500220Z" level=info msg="CreateContainer within sandbox \"de07ca3a1e168ee5776f536b8b2fbf6602892754edd404b36e474dc0d02c1aca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:49:48.346738 env[1295]: time="2024-12-13T14:49:48.346669699Z" level=info msg="CreateContainer within sandbox \"7d98161f350f12d7a1f923ffdaf2d95b9553107725f0fac4495cf77584d50205\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62c3de05482b776c34f973c63648e0cab674eb9a995eb4751aef4bfd2aa9dd55\""
Dec 13 14:49:48.350374 env[1295]: time="2024-12-13T14:49:48.350303685Z" level=info msg="StartContainer for \"62c3de05482b776c34f973c63648e0cab674eb9a995eb4751aef4bfd2aa9dd55\""
Dec 13 14:49:48.360666 env[1295]: time="2024-12-13T14:49:48.360596727Z" level=info msg="CreateContainer within sandbox \"de07ca3a1e168ee5776f536b8b2fbf6602892754edd404b36e474dc0d02c1aca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95dc23bde5d5dbf2b26f78f95c78f3f80f1bc5bfb83ad4c2b34aed50192c9e54\""
Dec 13 14:49:48.363865 env[1295]: time="2024-12-13T14:49:48.363823736Z" level=info msg="StartContainer for \"95dc23bde5d5dbf2b26f78f95c78f3f80f1bc5bfb83ad4c2b34aed50192c9e54\""
Dec 13 14:49:48.465361 env[1295]: time="2024-12-13T14:49:48.465274474Z" level=info msg="StartContainer for \"95dc23bde5d5dbf2b26f78f95c78f3f80f1bc5bfb83ad4c2b34aed50192c9e54\" returns successfully"
Dec 13 14:49:48.473930 env[1295]: time="2024-12-13T14:49:48.473880022Z" level=info msg="StartContainer for \"62c3de05482b776c34f973c63648e0cab674eb9a995eb4751aef4bfd2aa9dd55\" returns successfully"
Dec 13 14:49:48.914872 kubelet[2191]: I1213 14:49:48.914796 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l27qk" podStartSLOduration=30.914718224 podStartE2EDuration="30.914718224s" podCreationTimestamp="2024-12-13 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:48.912497763 +0000 UTC m=+42.671079087" watchObservedRunningTime="2024-12-13 14:49:48.914718224 +0000 UTC m=+42.673299556"
Dec 13 14:49:48.944390 kubelet[2191]: I1213 14:49:48.941547 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t684j" podStartSLOduration=30.941478954 podStartE2EDuration="30.941478954s" podCreationTimestamp="2024-12-13 14:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:49:48.94091441 +0000 UTC m=+42.699495744" watchObservedRunningTime="2024-12-13 14:49:48.941478954 +0000 UTC m=+42.700060275"
Dec 13 14:49:49.062215 systemd[1]: run-containerd-runc-k8s.io-de07ca3a1e168ee5776f536b8b2fbf6602892754edd404b36e474dc0d02c1aca-runc.SgFuSk.mount: Deactivated successfully.
Dec 13 14:49:49.644924 sshd[2984]: Failed password for root from 218.92.0.235 port 14018 ssh2
Dec 13 14:49:50.819761 sshd[2984]: Received disconnect from 218.92.0.235 port 14018:11: [preauth]
Dec 13 14:49:50.819761 sshd[2984]: Disconnected from authenticating user root 218.92.0.235 port 14018 [preauth]
Dec 13 14:49:50.820435 sshd[2984]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:49:50.822166 systemd[1]: sshd@5-10.243.72.102:22-218.92.0.235:14018.service: Deactivated successfully.
Dec 13 14:49:51.084760 systemd[1]: Started sshd@6-10.243.72.102:22-218.92.0.235:54364.service.
Dec 13 14:49:54.083882 sshd[3536]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:49:56.206664 sshd[3536]: Failed password for root from 218.92.0.235 port 54364 ssh2
Dec 13 14:49:58.030149 sshd[3536]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Dec 13 14:50:00.035624 sshd[3536]: Failed password for root from 218.92.0.235 port 54364 ssh2
Dec 13 14:50:03.120925 sshd[3536]: Failed password for root from 218.92.0.235 port 54364 ssh2
Dec 13 14:50:04.700450 sshd[3536]: Received disconnect from 218.92.0.235 port 54364:11: [preauth]
Dec 13 14:50:04.700450 sshd[3536]: Disconnected from authenticating user root 218.92.0.235 port 54364 [preauth]
Dec 13 14:50:04.701263 sshd[3536]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:50:04.702882 systemd[1]: sshd@6-10.243.72.102:22-218.92.0.235:54364.service: Deactivated successfully.
Dec 13 14:50:04.981459 systemd[1]: Started sshd@7-10.243.72.102:22-218.92.0.235:15504.service.
Dec 13 14:50:09.558574 sshd[3541]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:50:11.739061 sshd[3541]: Failed password for root from 218.92.0.235 port 15504 ssh2
Dec 13 14:50:15.221251 sshd[3541]: Failed password for root from 218.92.0.235 port 15504 ssh2
Dec 13 14:50:18.964106 sshd[3541]: Failed password for root from 218.92.0.235 port 15504 ssh2
Dec 13 14:50:20.154416 sshd[3541]: Received disconnect from 218.92.0.235 port 15504:11: [preauth]
Dec 13 14:50:20.154416 sshd[3541]: Disconnected from authenticating user root 218.92.0.235 port 15504 [preauth]
Dec 13 14:50:20.154553 sshd[3541]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root
Dec 13 14:50:20.155790 systemd[1]: sshd@7-10.243.72.102:22-218.92.0.235:15504.service: Deactivated successfully.
Dec 13 14:50:33.721090 systemd[1]: Started sshd@8-10.243.72.102:22-139.178.68.195:36172.service.
Dec 13 14:50:34.617496 sshd[3549]: Accepted publickey for core from 139.178.68.195 port 36172 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:34.619993 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:34.634711 systemd[1]: Started session-6.scope.
Dec 13 14:50:34.636456 systemd-logind[1280]: New session 6 of user core.
Dec 13 14:50:35.424717 sshd[3549]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:35.428994 systemd[1]: sshd@8-10.243.72.102:22-139.178.68.195:36172.service: Deactivated successfully.
Dec 13 14:50:35.430728 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:50:35.431999 systemd-logind[1280]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:50:35.433430 systemd-logind[1280]: Removed session 6.
Dec 13 14:50:40.572423 systemd[1]: Started sshd@9-10.243.72.102:22-139.178.68.195:60486.service.
Dec 13 14:50:41.461973 sshd[3563]: Accepted publickey for core from 139.178.68.195 port 60486 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:41.464818 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:41.472535 systemd[1]: Started session-7.scope.
Dec 13 14:50:41.473026 systemd-logind[1280]: New session 7 of user core.
Dec 13 14:50:42.183045 sshd[3563]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:42.187197 systemd-logind[1280]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:50:42.187643 systemd[1]: sshd@9-10.243.72.102:22-139.178.68.195:60486.service: Deactivated successfully.
Dec 13 14:50:42.188839 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:50:42.189952 systemd-logind[1280]: Removed session 7.
Dec 13 14:50:47.327909 systemd[1]: Started sshd@10-10.243.72.102:22-139.178.68.195:46074.service.
Dec 13 14:50:48.212878 sshd[3576]: Accepted publickey for core from 139.178.68.195 port 46074 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:48.215971 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:48.222358 systemd-logind[1280]: New session 8 of user core.
Dec 13 14:50:48.224036 systemd[1]: Started session-8.scope.
Dec 13 14:50:48.926610 sshd[3576]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:48.931453 systemd-logind[1280]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:50:48.931779 systemd[1]: sshd@10-10.243.72.102:22-139.178.68.195:46074.service: Deactivated successfully.
Dec 13 14:50:48.933059 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:50:48.934753 systemd-logind[1280]: Removed session 8.
Dec 13 14:50:54.073884 systemd[1]: Started sshd@11-10.243.72.102:22-139.178.68.195:46080.service.
Dec 13 14:50:54.962410 sshd[3592]: Accepted publickey for core from 139.178.68.195 port 46080 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:54.964647 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:54.972138 systemd-logind[1280]: New session 9 of user core.
Dec 13 14:50:54.973017 systemd[1]: Started session-9.scope.
Dec 13 14:50:55.670972 sshd[3592]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:55.677691 systemd[1]: sshd@11-10.243.72.102:22-139.178.68.195:46080.service: Deactivated successfully.
Dec 13 14:50:55.678880 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:50:55.680637 systemd-logind[1280]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:50:55.684712 systemd-logind[1280]: Removed session 9.
Dec 13 14:50:55.816505 systemd[1]: Started sshd@12-10.243.72.102:22-139.178.68.195:46094.service.
Dec 13 14:50:56.046997 systemd[1]: Started sshd@13-10.243.72.102:22-139.19.117.129:58544.service.
Dec 13 14:50:56.187201 sshd[3607]: Invalid user udatabase from 139.19.117.129 port 58544
Dec 13 14:50:56.705954 sshd[3605]: Accepted publickey for core from 139.178.68.195 port 46094 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:56.708225 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:56.715393 systemd-logind[1280]: New session 10 of user core.
Dec 13 14:50:56.717774 systemd[1]: Started session-10.scope.
Dec 13 14:50:57.502003 sshd[3605]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:57.506927 systemd[1]: sshd@12-10.243.72.102:22-139.178.68.195:46094.service: Deactivated successfully.
Dec 13 14:50:57.508210 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:50:57.509863 systemd-logind[1280]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:50:57.511101 systemd-logind[1280]: Removed session 10.
Dec 13 14:50:57.647131 systemd[1]: Started sshd@14-10.243.72.102:22-139.178.68.195:42466.service.
Dec 13 14:50:58.534327 sshd[3618]: Accepted publickey for core from 139.178.68.195 port 42466 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:50:58.536406 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:50:58.546806 systemd[1]: Started session-11.scope.
Dec 13 14:50:58.547101 systemd-logind[1280]: New session 11 of user core.
Dec 13 14:50:59.248114 sshd[3618]: pam_unix(sshd:session): session closed for user core
Dec 13 14:50:59.252233 systemd-logind[1280]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:50:59.252688 systemd[1]: sshd@14-10.243.72.102:22-139.178.68.195:42466.service: Deactivated successfully.
Dec 13 14:50:59.253904 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:50:59.254574 systemd-logind[1280]: Removed session 11.
Dec 13 14:51:04.388213 systemd[1]: Started sshd@15-10.243.72.102:22-139.178.68.195:42482.service.
Dec 13 14:51:05.270731 sshd[3631]: Accepted publickey for core from 139.178.68.195 port 42482 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:05.273869 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:05.281368 systemd-logind[1280]: New session 12 of user core.
Dec 13 14:51:05.282705 systemd[1]: Started session-12.scope.
Dec 13 14:51:05.978549 sshd[3631]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:05.982478 systemd[1]: sshd@15-10.243.72.102:22-139.178.68.195:42482.service: Deactivated successfully.
Dec 13 14:51:05.983675 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:51:05.985124 systemd-logind[1280]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:51:05.986444 systemd-logind[1280]: Removed session 12.
Dec 13 14:51:06.042657 sshd[3607]: Connection closed by invalid user udatabase 139.19.117.129 port 58544 [preauth]
Dec 13 14:51:06.045007 systemd[1]: sshd@13-10.243.72.102:22-139.19.117.129:58544.service: Deactivated successfully.
Dec 13 14:51:11.123727 systemd[1]: Started sshd@16-10.243.72.102:22-139.178.68.195:46130.service.
Dec 13 14:51:12.009181 sshd[3647]: Accepted publickey for core from 139.178.68.195 port 46130 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:12.011996 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:12.019081 systemd-logind[1280]: New session 13 of user core.
Dec 13 14:51:12.020421 systemd[1]: Started session-13.scope.
Dec 13 14:51:12.706690 sshd[3647]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:12.711177 systemd[1]: sshd@16-10.243.72.102:22-139.178.68.195:46130.service: Deactivated successfully.
Dec 13 14:51:12.713055 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:51:12.713488 systemd-logind[1280]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:51:12.715138 systemd-logind[1280]: Removed session 13.
Dec 13 14:51:12.850633 systemd[1]: Started sshd@17-10.243.72.102:22-139.178.68.195:46144.service.
Dec 13 14:51:13.733050 sshd[3659]: Accepted publickey for core from 139.178.68.195 port 46144 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:13.735910 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:13.743759 systemd-logind[1280]: New session 14 of user core.
Dec 13 14:51:13.744462 systemd[1]: Started session-14.scope.
Dec 13 14:51:14.814851 sshd[3659]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:14.819361 systemd-logind[1280]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:51:14.820429 systemd[1]: sshd@17-10.243.72.102:22-139.178.68.195:46144.service: Deactivated successfully.
Dec 13 14:51:14.822267 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:51:14.823218 systemd-logind[1280]: Removed session 14.
Dec 13 14:51:14.961023 systemd[1]: Started sshd@18-10.243.72.102:22-139.178.68.195:46152.service.
Dec 13 14:51:15.845098 sshd[3670]: Accepted publickey for core from 139.178.68.195 port 46152 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:15.847186 sshd[3670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:15.854455 systemd-logind[1280]: New session 15 of user core.
Dec 13 14:51:15.854889 systemd[1]: Started session-15.scope.
Dec 13 14:51:18.785826 sshd[3670]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:18.794089 systemd[1]: sshd@18-10.243.72.102:22-139.178.68.195:46152.service: Deactivated successfully.
Dec 13 14:51:18.795443 systemd-logind[1280]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:51:18.796070 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:51:18.797154 systemd-logind[1280]: Removed session 15.
Dec 13 14:51:18.930710 systemd[1]: Started sshd@19-10.243.72.102:22-139.178.68.195:40440.service.
Dec 13 14:51:19.820613 sshd[3690]: Accepted publickey for core from 139.178.68.195 port 40440 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:19.821368 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:19.828941 systemd[1]: Started session-16.scope.
Dec 13 14:51:19.829334 systemd-logind[1280]: New session 16 of user core.
Dec 13 14:51:20.778629 sshd[3690]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:20.782616 systemd[1]: sshd@19-10.243.72.102:22-139.178.68.195:40440.service: Deactivated successfully.
Dec 13 14:51:20.784526 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:51:20.784933 systemd-logind[1280]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:51:20.787079 systemd-logind[1280]: Removed session 16.
Dec 13 14:51:20.925341 systemd[1]: Started sshd@20-10.243.72.102:22-139.178.68.195:40448.service.
Dec 13 14:51:21.814771 sshd[3701]: Accepted publickey for core from 139.178.68.195 port 40448 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:21.816771 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:21.823882 systemd-logind[1280]: New session 17 of user core.
Dec 13 14:51:21.824767 systemd[1]: Started session-17.scope.
Dec 13 14:51:22.524933 sshd[3701]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:22.528843 systemd-logind[1280]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:51:22.529228 systemd[1]: sshd@20-10.243.72.102:22-139.178.68.195:40448.service: Deactivated successfully.
Dec 13 14:51:22.530650 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:51:22.531374 systemd-logind[1280]: Removed session 17.
Dec 13 14:51:27.670590 systemd[1]: Started sshd@21-10.243.72.102:22-139.178.68.195:46424.service.
Dec 13 14:51:28.559406 sshd[3715]: Accepted publickey for core from 139.178.68.195 port 46424 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:28.560886 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:28.568316 systemd[1]: Started session-18.scope.
Dec 13 14:51:28.569794 systemd-logind[1280]: New session 18 of user core.
Dec 13 14:51:29.257180 sshd[3715]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:29.261596 systemd[1]: sshd@21-10.243.72.102:22-139.178.68.195:46424.service: Deactivated successfully.
Dec 13 14:51:29.262924 systemd-logind[1280]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:51:29.263022 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:51:29.264602 systemd-logind[1280]: Removed session 18.
Dec 13 14:51:34.403989 systemd[1]: Started sshd@22-10.243.72.102:22-139.178.68.195:46440.service.
Dec 13 14:51:35.286999 sshd[3731]: Accepted publickey for core from 139.178.68.195 port 46440 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:35.289935 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:35.297541 systemd[1]: Started session-19.scope.
Dec 13 14:51:35.297863 systemd-logind[1280]: New session 19 of user core.
Dec 13 14:51:35.986840 sshd[3731]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:35.990939 systemd[1]: sshd@22-10.243.72.102:22-139.178.68.195:46440.service: Deactivated successfully.
Dec 13 14:51:35.992465 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:51:35.992893 systemd-logind[1280]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:51:35.995137 systemd-logind[1280]: Removed session 19.
Dec 13 14:51:41.133059 systemd[1]: Started sshd@23-10.243.72.102:22-139.178.68.195:51640.service.
Dec 13 14:51:42.018374 sshd[3744]: Accepted publickey for core from 139.178.68.195 port 51640 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:42.021071 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:42.029066 systemd[1]: Started session-20.scope.
Dec 13 14:51:42.030378 systemd-logind[1280]: New session 20 of user core.
Dec 13 14:51:42.728375 sshd[3744]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:42.732237 systemd[1]: sshd@23-10.243.72.102:22-139.178.68.195:51640.service: Deactivated successfully.
Dec 13 14:51:42.733625 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:51:42.733657 systemd-logind[1280]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:51:42.737361 systemd-logind[1280]: Removed session 20.
Dec 13 14:51:42.872370 systemd[1]: Started sshd@24-10.243.72.102:22-139.178.68.195:51644.service.
Dec 13 14:51:43.754681 sshd[3757]: Accepted publickey for core from 139.178.68.195 port 51644 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:43.756609 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:43.764788 systemd[1]: Started session-21.scope.
Dec 13 14:51:43.765864 systemd-logind[1280]: New session 21 of user core.
Dec 13 14:51:45.878209 env[1295]: time="2024-12-13T14:51:45.878105612Z" level=info msg="StopContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" with timeout 30 (s)"
Dec 13 14:51:45.880255 env[1295]: time="2024-12-13T14:51:45.880210934Z" level=info msg="Stop container \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" with signal terminated"
Dec 13 14:51:45.905186 systemd[1]: run-containerd-runc-k8s.io-1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad-runc.UfJoiu.mount: Deactivated successfully.
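The containerd `StopContainer` entries in this part of the log carry the affected container ID inside an escaped `msg="..."` field. A quick way to pull those IDs out is a `sed` capture; the snippet below works on a small inline sample copied verbatim from this log (including the literal `\"` escaping as it appears in the journal text), rather than claiming any particular journalctl invocation.

```shell
# Two StopContainer entries copied verbatim from the log above; note the
# msg field quotes container IDs as \"...\" in the journal text.
stop_log='time="2024-12-13T14:51:45.878105612Z" level=info msg="StopContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" with timeout 30 (s)"
time="2024-12-13T14:51:45.985972586Z" level=info msg="StopContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" with timeout 2 (s)"'

# Capture the hex container ID between the escaped quotes after
# "StopContainer for" and print one ID per line.
printf '%s\n' "$stop_log" \
  | sed -n 's/.*StopContainer for \\"\([0-9a-f]*\)\\".*/\1/p'
```

This prints the two 64-character container IDs, which can then be matched against the later `shim disconnected` and `RemoveContainer` entries to follow each container through its teardown.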
Dec 13 14:51:45.967410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9-rootfs.mount: Deactivated successfully.
Dec 13 14:51:45.974759 env[1295]: time="2024-12-13T14:51:45.974642182Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:51:45.979272 env[1295]: time="2024-12-13T14:51:45.979218556Z" level=info msg="shim disconnected" id=7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9
Dec 13 14:51:45.979272 env[1295]: time="2024-12-13T14:51:45.979276101Z" level=warning msg="cleaning up after shim disconnected" id=7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9 namespace=k8s.io
Dec 13 14:51:45.981768 env[1295]: time="2024-12-13T14:51:45.980631206Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:45.986017 env[1295]: time="2024-12-13T14:51:45.985972586Z" level=info msg="StopContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" with timeout 2 (s)"
Dec 13 14:51:45.987560 env[1295]: time="2024-12-13T14:51:45.987526021Z" level=info msg="Stop container \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" with signal terminated"
Dec 13 14:51:45.999949 systemd-networkd[1072]: lxc_health: Link DOWN
Dec 13 14:51:45.999961 systemd-networkd[1072]: lxc_health: Lost carrier
Dec 13 14:51:46.004161 env[1295]: time="2024-12-13T14:51:46.004101021Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3804 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:46.006305 env[1295]: time="2024-12-13T14:51:46.006204905Z" level=info msg="StopContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" returns successfully"
Dec 13 14:51:46.007532 env[1295]: time="2024-12-13T14:51:46.007494320Z" level=info msg="StopPodSandbox for \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\""
Dec 13 14:51:46.007627 env[1295]: time="2024-12-13T14:51:46.007593476Z" level=info msg="Container to stop \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.010702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364-shm.mount: Deactivated successfully.
Dec 13 14:51:46.074883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad-rootfs.mount: Deactivated successfully.
Dec 13 14:51:46.083562 env[1295]: time="2024-12-13T14:51:46.083460600Z" level=info msg="shim disconnected" id=1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad
Dec 13 14:51:46.083967 env[1295]: time="2024-12-13T14:51:46.083926266Z" level=warning msg="cleaning up after shim disconnected" id=1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad namespace=k8s.io
Dec 13 14:51:46.085350 env[1295]: time="2024-12-13T14:51:46.084154628Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:46.085634 env[1295]: time="2024-12-13T14:51:46.085586440Z" level=info msg="shim disconnected" id=0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364
Dec 13 14:51:46.086909 env[1295]: time="2024-12-13T14:51:46.086867089Z" level=warning msg="cleaning up after shim disconnected" id=0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364 namespace=k8s.io
Dec 13 14:51:46.086909 env[1295]: time="2024-12-13T14:51:46.086903986Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:46.104501 env[1295]: time="2024-12-13T14:51:46.104417990Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3860 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:46.106606 env[1295]: time="2024-12-13T14:51:46.106514623Z" level=info msg="StopContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" returns successfully"
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.107730524Z" level=info msg="StopPodSandbox for \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\""
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.107885965Z" level=info msg="Container to stop \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.107913802Z" level=info msg="Container to stop \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.107960072Z" level=info msg="Container to stop \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.107998148Z" level=info msg="Container to stop \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.108521 env[1295]: time="2024-12-13T14:51:46.108041790Z" level=info msg="Container to stop \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:46.130491 env[1295]: time="2024-12-13T14:51:46.127659364Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:46.130491 env[1295]: time="2024-12-13T14:51:46.128554947Z" level=info msg="TearDown network for sandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" successfully"
Dec 13 14:51:46.130491 env[1295]: time="2024-12-13T14:51:46.128590646Z" level=info msg="StopPodSandbox for \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" returns successfully"
Dec 13 14:51:46.176621 env[1295]: time="2024-12-13T14:51:46.176540515Z" level=info msg="shim disconnected" id=3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693
Dec 13 14:51:46.176621 env[1295]: time="2024-12-13T14:51:46.176616369Z" level=warning msg="cleaning up after shim disconnected" id=3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693 namespace=k8s.io
Dec 13 14:51:46.176621 env[1295]: time="2024-12-13T14:51:46.176633889Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:46.198354 env[1295]: time="2024-12-13T14:51:46.197143002Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3906 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:46.198354 env[1295]: time="2024-12-13T14:51:46.197732938Z" level=info msg="TearDown network for sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" successfully"
Dec 13 14:51:46.198354 env[1295]: time="2024-12-13T14:51:46.197783424Z" level=info msg="StopPodSandbox for \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" returns successfully"
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219021 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f9f849c-c56c-4722-a128-babd68cd3e87-clustermesh-secrets\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219124 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-bpf-maps\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219165 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-etc-cni-netd\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219214 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-kernel\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219245 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-cgroup\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.221440 kubelet[2191]: I1213 14:51:46.219313 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htn6b\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-kube-api-access-htn6b\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219343 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-xtables-lock\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219391 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92f88151-6c32-4d22-b365-80a6aef05be4-cilium-config-path\") pod \"92f88151-6c32-4d22-b365-80a6aef05be4\" (UID: \"92f88151-6c32-4d22-b365-80a6aef05be4\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219427 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-run\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219471 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-lib-modules\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219503 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-hostproc\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.222555 kubelet[2191]: I1213 14:51:46.219562 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cni-path\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.223395 kubelet[2191]: I1213 14:51:46.219598 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-hubble-tls\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.223395 kubelet[2191]: I1213 14:51:46.222780 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22pcm\" (UniqueName: \"kubernetes.io/projected/92f88151-6c32-4d22-b365-80a6aef05be4-kube-api-access-22pcm\") pod \"92f88151-6c32-4d22-b365-80a6aef05be4\" (UID: \"92f88151-6c32-4d22-b365-80a6aef05be4\") "
Dec 13 14:51:46.223395 kubelet[2191]: I1213 14:51:46.222874 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-config-path\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.223395 kubelet[2191]: I1213 14:51:46.222927 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-net\") pod \"5f9f849c-c56c-4722-a128-babd68cd3e87\" (UID: \"5f9f849c-c56c-4722-a128-babd68cd3e87\") "
Dec 13 14:51:46.228564 kubelet[2191]: I1213 14:51:46.227343 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.241490 kubelet[2191]: I1213 14:51:46.226702 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.248219 kubelet[2191]: I1213 14:51:46.229668 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.250480 kubelet[2191]: I1213 14:51:46.229696 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.255970 kubelet[2191]: I1213 14:51:46.229734 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.255970 kubelet[2191]: I1213 14:51:46.230035 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.255970 kubelet[2191]: I1213 14:51:46.236182 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.255970 kubelet[2191]: I1213 14:51:46.236240 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.255970 kubelet[2191]: I1213 14:51:46.236259 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-hostproc" (OuterVolumeSpecName: "hostproc") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.256666 kubelet[2191]: I1213 14:51:46.236316 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cni-path" (OuterVolumeSpecName: "cni-path") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:46.256666 kubelet[2191]: I1213 14:51:46.244733 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f88151-6c32-4d22-b365-80a6aef05be4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92f88151-6c32-4d22-b365-80a6aef05be4" (UID: "92f88151-6c32-4d22-b365-80a6aef05be4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:51:46.256666 kubelet[2191]: I1213 14:51:46.255584 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f88151-6c32-4d22-b365-80a6aef05be4-kube-api-access-22pcm" (OuterVolumeSpecName: "kube-api-access-22pcm") pod "92f88151-6c32-4d22-b365-80a6aef05be4" (UID: "92f88151-6c32-4d22-b365-80a6aef05be4"). InnerVolumeSpecName "kube-api-access-22pcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:46.256666 kubelet[2191]: I1213 14:51:46.255744 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f9f849c-c56c-4722-a128-babd68cd3e87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:51:46.256905 kubelet[2191]: I1213 14:51:46.256438 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:51:46.259549 kubelet[2191]: I1213 14:51:46.259516 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-kube-api-access-htn6b" (OuterVolumeSpecName: "kube-api-access-htn6b") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "kube-api-access-htn6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:46.260612 kubelet[2191]: I1213 14:51:46.259722 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5f9f849c-c56c-4722-a128-babd68cd3e87" (UID: "5f9f849c-c56c-4722-a128-babd68cd3e87"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:46.279845 kubelet[2191]: I1213 14:51:46.279797 2191 scope.go:117] "RemoveContainer" containerID="1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad"
Dec 13 14:51:46.291328 env[1295]: time="2024-12-13T14:51:46.290345243Z" level=info msg="RemoveContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\""
Dec 13 14:51:46.306083 env[1295]: time="2024-12-13T14:51:46.306002225Z" level=info msg="RemoveContainer for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" returns successfully"
Dec 13 14:51:46.311323 kubelet[2191]: I1213 14:51:46.310446 2191 scope.go:117] "RemoveContainer" containerID="89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916"
Dec 13 14:51:46.314167 env[1295]: time="2024-12-13T14:51:46.313656061Z" level=info msg="RemoveContainer for \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\""
Dec 13 14:51:46.319332 env[1295]: time="2024-12-13T14:51:46.318886877Z" level=info msg="RemoveContainer for 
\"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\" returns successfully" Dec 13 14:51:46.319564 kubelet[2191]: I1213 14:51:46.319523 2191 scope.go:117] "RemoveContainer" containerID="b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105" Dec 13 14:51:46.320952 env[1295]: time="2024-12-13T14:51:46.320872500Z" level=info msg="RemoveContainer for \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\"" Dec 13 14:51:46.324838 kubelet[2191]: I1213 14:51:46.323441 2191 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-xtables-lock\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.324838 kubelet[2191]: I1213 14:51:46.323598 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92f88151-6c32-4d22-b365-80a6aef05be4-cilium-config-path\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.324838 kubelet[2191]: I1213 14:51:46.323691 2191 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-lib-modules\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.324838 kubelet[2191]: I1213 14:51:46.323719 2191 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-hostproc\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.326439 env[1295]: time="2024-12-13T14:51:46.326367840Z" level=info msg="RemoveContainer for \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\" returns successfully" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326611 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-run\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326691 2191 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cni-path\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326782 2191 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-22pcm\" (UniqueName: \"kubernetes.io/projected/92f88151-6c32-4d22-b365-80a6aef05be4-kube-api-access-22pcm\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326805 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-config-path\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326853 2191 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-net\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326879 2191 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-hubble-tls\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326942 2191 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f9f849c-c56c-4722-a128-babd68cd3e87-clustermesh-secrets\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.328970 kubelet[2191]: I1213 14:51:46.326967 2191 reconciler_common.go:300] "Volume detached for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-bpf-maps\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.329516 kubelet[2191]: I1213 14:51:46.326985 2191 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-etc-cni-netd\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.329516 kubelet[2191]: I1213 14:51:46.327034 2191 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-htn6b\" (UniqueName: \"kubernetes.io/projected/5f9f849c-c56c-4722-a128-babd68cd3e87-kube-api-access-htn6b\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.329516 kubelet[2191]: I1213 14:51:46.327069 2191 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-host-proc-sys-kernel\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.329516 kubelet[2191]: I1213 14:51:46.327119 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f9f849c-c56c-4722-a128-babd68cd3e87-cilium-cgroup\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\"" Dec 13 14:51:46.329516 kubelet[2191]: I1213 14:51:46.327546 2191 scope.go:117] "RemoveContainer" containerID="66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4" Dec 13 14:51:46.330835 env[1295]: time="2024-12-13T14:51:46.330452385Z" level=info msg="RemoveContainer for \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\"" Dec 13 14:51:46.333573 env[1295]: time="2024-12-13T14:51:46.333539878Z" level=info msg="RemoveContainer for \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\" returns successfully" Dec 13 14:51:46.333910 kubelet[2191]: I1213 14:51:46.333883 2191 scope.go:117] "RemoveContainer" 
containerID="bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51" Dec 13 14:51:46.335747 env[1295]: time="2024-12-13T14:51:46.335703071Z" level=info msg="RemoveContainer for \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\"" Dec 13 14:51:46.340855 env[1295]: time="2024-12-13T14:51:46.340799070Z" level=info msg="RemoveContainer for \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\" returns successfully" Dec 13 14:51:46.341276 kubelet[2191]: I1213 14:51:46.341246 2191 scope.go:117] "RemoveContainer" containerID="1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad" Dec 13 14:51:46.341642 env[1295]: time="2024-12-13T14:51:46.341517045Z" level=error msg="ContainerStatus for \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\": not found" Dec 13 14:51:46.342584 kubelet[2191]: E1213 14:51:46.342557 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\": not found" containerID="1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad" Dec 13 14:51:46.344402 kubelet[2191]: I1213 14:51:46.344354 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad"} err="failed to get container status \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d6c39bb5b3775fe4e75f024d56ced486fb780cc8e0bef7dcc0f412a1cb249ad\": not found" Dec 13 14:51:46.344746 kubelet[2191]: I1213 14:51:46.344408 2191 scope.go:117] "RemoveContainer" 
containerID="89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916" Dec 13 14:51:46.344816 env[1295]: time="2024-12-13T14:51:46.344621246Z" level=error msg="ContainerStatus for \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\": not found" Dec 13 14:51:46.344895 kubelet[2191]: E1213 14:51:46.344801 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\": not found" containerID="89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916" Dec 13 14:51:46.344895 kubelet[2191]: I1213 14:51:46.344842 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916"} err="failed to get container status \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\": rpc error: code = NotFound desc = an error occurred when try to find container \"89081c094c3b6ca10fed65bbf7b4fbf9cc520d68a48146582fb77b5dac159916\": not found" Dec 13 14:51:46.344895 kubelet[2191]: I1213 14:51:46.344860 2191 scope.go:117] "RemoveContainer" containerID="b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105" Dec 13 14:51:46.345122 env[1295]: time="2024-12-13T14:51:46.345062104Z" level=error msg="ContainerStatus for \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\": not found" Dec 13 14:51:46.345274 kubelet[2191]: E1213 14:51:46.345248 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\": not found" containerID="b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105" Dec 13 14:51:46.345378 kubelet[2191]: I1213 14:51:46.345310 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105"} err="failed to get container status \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\": rpc error: code = NotFound desc = an error occurred when try to find container \"b884eeb4f205eb894fc9ed0f4f49f6aa560607324017273221ce9641eab8d105\": not found" Dec 13 14:51:46.345378 kubelet[2191]: I1213 14:51:46.345329 2191 scope.go:117] "RemoveContainer" containerID="66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4" Dec 13 14:51:46.345578 env[1295]: time="2024-12-13T14:51:46.345520069Z" level=error msg="ContainerStatus for \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\": not found" Dec 13 14:51:46.345818 kubelet[2191]: E1213 14:51:46.345794 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\": not found" containerID="66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4" Dec 13 14:51:46.345982 kubelet[2191]: I1213 14:51:46.345960 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4"} err="failed to get container status \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"66876e93d4e76872a89096b842a0092c8c9c70e9b194ee7d644145ff2228e6c4\": not found" Dec 13 14:51:46.346151 kubelet[2191]: I1213 14:51:46.346129 2191 scope.go:117] "RemoveContainer" containerID="bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51" Dec 13 14:51:46.346621 env[1295]: time="2024-12-13T14:51:46.346541364Z" level=error msg="ContainerStatus for \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\": not found" Dec 13 14:51:46.346856 kubelet[2191]: E1213 14:51:46.346831 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\": not found" containerID="bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51" Dec 13 14:51:46.346949 kubelet[2191]: I1213 14:51:46.346894 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51"} err="failed to get container status \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\": rpc error: code = NotFound desc = an error occurred when try to find container \"bde5549ee764475d059cb9f7d8e3ed3c5cd9bc0db8e7ed909c4d543464b57f51\": not found" Dec 13 14:51:46.346949 kubelet[2191]: I1213 14:51:46.346921 2191 scope.go:117] "RemoveContainer" containerID="7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9" Dec 13 14:51:46.348189 env[1295]: time="2024-12-13T14:51:46.348154529Z" level=info msg="RemoveContainer for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\"" Dec 13 14:51:46.351161 env[1295]: time="2024-12-13T14:51:46.351108519Z" level=info msg="RemoveContainer for 
\"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" returns successfully" Dec 13 14:51:46.351351 kubelet[2191]: I1213 14:51:46.351316 2191 scope.go:117] "RemoveContainer" containerID="7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9" Dec 13 14:51:46.351594 env[1295]: time="2024-12-13T14:51:46.351527963Z" level=error msg="ContainerStatus for \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\": not found" Dec 13 14:51:46.351736 kubelet[2191]: E1213 14:51:46.351710 2191 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\": not found" containerID="7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9" Dec 13 14:51:46.351825 kubelet[2191]: I1213 14:51:46.351752 2191 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9"} err="failed to get container status \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c1c7af76cb20ec31bfc00c8fa27c4469587eed200453f93afa9af5868da01e9\": not found" Dec 13 14:51:46.507015 kubelet[2191]: I1213 14:51:46.503628 2191 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" path="/var/lib/kubelet/pods/5f9f849c-c56c-4722-a128-babd68cd3e87/volumes" Dec 13 14:51:46.507015 kubelet[2191]: I1213 14:51:46.505988 2191 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92f88151-6c32-4d22-b365-80a6aef05be4" path="/var/lib/kubelet/pods/92f88151-6c32-4d22-b365-80a6aef05be4/volumes" Dec 13 
14:51:46.789404 kubelet[2191]: E1213 14:51:46.789159 2191 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:51:46.898022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364-rootfs.mount: Deactivated successfully. Dec 13 14:51:46.898234 systemd[1]: var-lib-kubelet-pods-92f88151\x2d6c32\x2d4d22\x2db365\x2d80a6aef05be4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22pcm.mount: Deactivated successfully. Dec 13 14:51:46.898426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693-rootfs.mount: Deactivated successfully. Dec 13 14:51:46.898593 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693-shm.mount: Deactivated successfully. Dec 13 14:51:46.898738 systemd[1]: var-lib-kubelet-pods-5f9f849c\x2dc56c\x2d4722\x2da128\x2dbabd68cd3e87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtn6b.mount: Deactivated successfully. Dec 13 14:51:46.898900 systemd[1]: var-lib-kubelet-pods-5f9f849c\x2dc56c\x2d4722\x2da128\x2dbabd68cd3e87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:51:46.899096 systemd[1]: var-lib-kubelet-pods-5f9f849c\x2dc56c\x2d4722\x2da128\x2dbabd68cd3e87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:51:47.928580 sshd[3757]: pam_unix(sshd:session): session closed for user core Dec 13 14:51:47.934037 systemd[1]: sshd@24-10.243.72.102:22-139.178.68.195:51644.service: Deactivated successfully. Dec 13 14:51:47.936688 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:51:47.937362 systemd-logind[1280]: Session 21 logged out. Waiting for processes to exit. 
Dec 13 14:51:47.939329 systemd-logind[1280]: Removed session 21. Dec 13 14:51:48.074623 systemd[1]: Started sshd@25-10.243.72.102:22-139.178.68.195:53804.service. Dec 13 14:51:48.964361 sshd[3928]: Accepted publickey for core from 139.178.68.195 port 53804 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc Dec 13 14:51:48.965974 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:51:48.974082 systemd-logind[1280]: New session 22 of user core. Dec 13 14:51:48.975150 systemd[1]: Started session-22.scope. Dec 13 14:51:49.994341 kubelet[2191]: I1213 14:51:49.994266 2191 topology_manager.go:215] "Topology Admit Handler" podUID="cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" podNamespace="kube-system" podName="cilium-bx4dj" Dec 13 14:51:49.995314 kubelet[2191]: E1213 14:51:49.995267 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="mount-cgroup" Dec 13 14:51:49.995464 kubelet[2191]: E1213 14:51:49.995442 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="apply-sysctl-overwrites" Dec 13 14:51:49.995604 kubelet[2191]: E1213 14:51:49.995583 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="cilium-agent" Dec 13 14:51:49.995736 kubelet[2191]: E1213 14:51:49.995714 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92f88151-6c32-4d22-b365-80a6aef05be4" containerName="cilium-operator" Dec 13 14:51:49.995873 kubelet[2191]: E1213 14:51:49.995851 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="mount-bpf-fs" Dec 13 14:51:49.996089 kubelet[2191]: E1213 14:51:49.996068 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="clean-cilium-state" Dec 13 
14:51:49.996323 kubelet[2191]: I1213 14:51:49.996301 2191 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9f849c-c56c-4722-a128-babd68cd3e87" containerName="cilium-agent" Dec 13 14:51:49.996501 kubelet[2191]: I1213 14:51:49.996480 2191 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f88151-6c32-4d22-b365-80a6aef05be4" containerName="cilium-operator" Dec 13 14:51:50.056005 kubelet[2191]: I1213 14:51:50.055934 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hostproc\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.056368 kubelet[2191]: I1213 14:51:50.056342 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-ipsec-secrets\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.056598 kubelet[2191]: I1213 14:51:50.056576 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-run\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.056843 kubelet[2191]: I1213 14:51:50.056820 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-cgroup\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.057066 kubelet[2191]: I1213 14:51:50.057044 2191 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cni-path\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.057285 kubelet[2191]: I1213 14:51:50.057264 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-xtables-lock\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.057536 kubelet[2191]: I1213 14:51:50.057514 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-bpf-maps\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.057745 kubelet[2191]: I1213 14:51:50.057723 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hubble-tls\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.057974 kubelet[2191]: I1213 14:51:50.057952 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vms\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-kube-api-access-48vms\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.058197 kubelet[2191]: I1213 14:51:50.058176 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-config-path\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.058416 kubelet[2191]: I1213 14:51:50.058395 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-net\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.058650 kubelet[2191]: I1213 14:51:50.058628 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-etc-cni-netd\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.058881 kubelet[2191]: I1213 14:51:50.058860 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-clustermesh-secrets\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.059126 kubelet[2191]: I1213 14:51:50.059104 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-lib-modules\") pod \"cilium-bx4dj\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj" Dec 13 14:51:50.059351 kubelet[2191]: I1213 14:51:50.059330 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-kernel\") pod \"cilium-bx4dj\" 
(UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") " pod="kube-system/cilium-bx4dj"
Dec 13 14:51:50.154477 sshd[3928]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:50.158967 systemd[1]: sshd@25-10.243.72.102:22-139.178.68.195:53804.service: Deactivated successfully.
Dec 13 14:51:50.162851 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:51:50.162891 systemd-logind[1280]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:51:50.165803 systemd-logind[1280]: Removed session 22.
Dec 13 14:51:50.299912 systemd[1]: Started sshd@26-10.243.72.102:22-139.178.68.195:53820.service.
Dec 13 14:51:50.314927 env[1295]: time="2024-12-13T14:51:50.314005653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bx4dj,Uid:cb1e7e30-8176-4f9c-b7fb-15bc135aa28f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:51:50.344969 env[1295]: time="2024-12-13T14:51:50.344607210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:51:50.344969 env[1295]: time="2024-12-13T14:51:50.344687953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:51:50.344969 env[1295]: time="2024-12-13T14:51:50.344706341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:51:50.345361 env[1295]: time="2024-12-13T14:51:50.345028884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526 pid=3955 runtime=io.containerd.runc.v2
Dec 13 14:51:50.406757 env[1295]: time="2024-12-13T14:51:50.406688987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bx4dj,Uid:cb1e7e30-8176-4f9c-b7fb-15bc135aa28f,Namespace:kube-system,Attempt:0,} returns sandbox id \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\""
Dec 13 14:51:50.412564 env[1295]: time="2024-12-13T14:51:50.412237441Z" level=info msg="CreateContainer within sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:51:50.425162 env[1295]: time="2024-12-13T14:51:50.425065601Z" level=info msg="CreateContainer within sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\""
Dec 13 14:51:50.427506 env[1295]: time="2024-12-13T14:51:50.427447466Z" level=info msg="StartContainer for \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\""
Dec 13 14:51:50.471992 kubelet[2191]: I1213 14:51:50.471921 2191 setters.go:568] "Node became not ready" node="srv-997hs.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:51:50Z","lastTransitionTime":"2024-12-13T14:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:51:50.529107 env[1295]: time="2024-12-13T14:51:50.528889580Z" level=info msg="StartContainer for \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\" returns successfully"
Dec 13 14:51:50.577611 env[1295]: time="2024-12-13T14:51:50.576869510Z" level=info msg="shim disconnected" id=08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd
Dec 13 14:51:50.577926 env[1295]: time="2024-12-13T14:51:50.577894819Z" level=warning msg="cleaning up after shim disconnected" id=08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd namespace=k8s.io
Dec 13 14:51:50.578114 env[1295]: time="2024-12-13T14:51:50.578086487Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:50.589408 env[1295]: time="2024-12-13T14:51:50.589344539Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4037 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:51.184475 sshd[3945]: Accepted publickey for core from 139.178.68.195 port 53820 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:51.186635 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:51.194213 systemd[1]: Started session-23.scope.
Dec 13 14:51:51.194916 systemd-logind[1280]: New session 23 of user core.
Dec 13 14:51:51.320345 env[1295]: time="2024-12-13T14:51:51.317830010Z" level=info msg="CreateContainer within sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:51:51.340182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215614987.mount: Deactivated successfully.
Dec 13 14:51:51.350899 env[1295]: time="2024-12-13T14:51:51.350787644Z" level=info msg="CreateContainer within sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\""
Dec 13 14:51:51.351934 env[1295]: time="2024-12-13T14:51:51.351821759Z" level=info msg="StartContainer for \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\""
Dec 13 14:51:51.439552 env[1295]: time="2024-12-13T14:51:51.436421327Z" level=info msg="StartContainer for \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\" returns successfully"
Dec 13 14:51:51.480384 env[1295]: time="2024-12-13T14:51:51.480287048Z" level=info msg="shim disconnected" id=4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe
Dec 13 14:51:51.480384 env[1295]: time="2024-12-13T14:51:51.480383492Z" level=warning msg="cleaning up after shim disconnected" id=4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe namespace=k8s.io
Dec 13 14:51:51.480384 env[1295]: time="2024-12-13T14:51:51.480400842Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:51.500716 env[1295]: time="2024-12-13T14:51:51.500521748Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4100 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:51.791466 kubelet[2191]: E1213 14:51:51.791264 2191 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:51:51.949838 sshd[3945]: pam_unix(sshd:session): session closed for user core
Dec 13 14:51:51.954466 systemd[1]: sshd@26-10.243.72.102:22-139.178.68.195:53820.service: Deactivated successfully.
Dec 13 14:51:51.955568 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:51:51.956117 systemd-logind[1280]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:51:51.957403 systemd-logind[1280]: Removed session 23.
Dec 13 14:51:52.095254 systemd[1]: Started sshd@27-10.243.72.102:22-139.178.68.195:53836.service.
Dec 13 14:51:52.178477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe-rootfs.mount: Deactivated successfully.
Dec 13 14:51:52.312525 env[1295]: time="2024-12-13T14:51:52.312451815Z" level=info msg="StopPodSandbox for \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\""
Dec 13 14:51:52.312890 env[1295]: time="2024-12-13T14:51:52.312856490Z" level=info msg="Container to stop \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:52.313066 env[1295]: time="2024-12-13T14:51:52.313029753Z" level=info msg="Container to stop \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:51:52.316156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526-shm.mount: Deactivated successfully.
Dec 13 14:51:52.367270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526-rootfs.mount: Deactivated successfully.
Dec 13 14:51:52.374119 env[1295]: time="2024-12-13T14:51:52.374050360Z" level=info msg="shim disconnected" id=092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526
Dec 13 14:51:52.375007 env[1295]: time="2024-12-13T14:51:52.374973850Z" level=warning msg="cleaning up after shim disconnected" id=092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526 namespace=k8s.io
Dec 13 14:51:52.375125 env[1295]: time="2024-12-13T14:51:52.375097694Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:52.388397 env[1295]: time="2024-12-13T14:51:52.388321318Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:52.389157 env[1295]: time="2024-12-13T14:51:52.389114648Z" level=info msg="TearDown network for sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" successfully"
Dec 13 14:51:52.389325 env[1295]: time="2024-12-13T14:51:52.389277099Z" level=info msg="StopPodSandbox for \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" returns successfully"
Dec 13 14:51:52.478776 kubelet[2191]: I1213 14:51:52.478718 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-run\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.479164 kubelet[2191]: I1213 14:51:52.479129 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-cgroup\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.479363 kubelet[2191]: I1213 14:51:52.479340 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48vms\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-kube-api-access-48vms\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.480584 kubelet[2191]: I1213 14:51:52.480562 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-config-path\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.480825 kubelet[2191]: I1213 14:51:52.480793 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-clustermesh-secrets\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.480992 kubelet[2191]: I1213 14:51:52.480960 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cni-path\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.481157 kubelet[2191]: I1213 14:51:52.481117 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-xtables-lock\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.481315 kubelet[2191]: I1213 14:51:52.481276 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hubble-tls\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.481482 kubelet[2191]: I1213 14:51:52.481450 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hostproc\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.481691 kubelet[2191]: I1213 14:51:52.481669 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-bpf-maps\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.481843 kubelet[2191]: I1213 14:51:52.481821 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-lib-modules\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.482018 kubelet[2191]: I1213 14:51:52.481987 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-kernel\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.482167 kubelet[2191]: I1213 14:51:52.482144 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-net\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.482382 kubelet[2191]: I1213 14:51:52.482322 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-etc-cni-netd\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.482558 kubelet[2191]: I1213 14:51:52.482526 2191 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-ipsec-secrets\") pod \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\" (UID: \"cb1e7e30-8176-4f9c-b7fb-15bc135aa28f\") "
Dec 13 14:51:52.483461 kubelet[2191]: I1213 14:51:52.479158 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.483614 kubelet[2191]: I1213 14:51:52.479189 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.487342 systemd[1]: var-lib-kubelet-pods-cb1e7e30\x2d8176\x2d4f9c\x2db7fb\x2d15bc135aa28f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:51:52.492633 kubelet[2191]: I1213 14:51:52.492391 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:51:52.492803 kubelet[2191]: I1213 14:51:52.492516 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.492803 kubelet[2191]: I1213 14:51:52.492549 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.493241 kubelet[2191]: I1213 14:51:52.493210 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:51:52.493525 kubelet[2191]: I1213 14:51:52.493480 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.493680 kubelet[2191]: I1213 14:51:52.493654 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.493863 kubelet[2191]: I1213 14:51:52.493838 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.494194 kubelet[2191]: I1213 14:51:52.494131 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.494612 kubelet[2191]: I1213 14:51:52.494391 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.494820 kubelet[2191]: I1213 14:51:52.494492 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:51:52.501283 systemd[1]: var-lib-kubelet-pods-cb1e7e30\x2d8176\x2d4f9c\x2db7fb\x2d15bc135aa28f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48vms.mount: Deactivated successfully.
Dec 13 14:51:52.509898 kubelet[2191]: I1213 14:51:52.506744 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-kube-api-access-48vms" (OuterVolumeSpecName: "kube-api-access-48vms") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "kube-api-access-48vms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:52.510561 kubelet[2191]: I1213 14:51:52.510525 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:51:52.520613 kubelet[2191]: I1213 14:51:52.520558 2191 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" (UID: "cb1e7e30-8176-4f9c-b7fb-15bc135aa28f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:51:52.584005 kubelet[2191]: I1213 14:51:52.583914 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-run\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.584421 kubelet[2191]: I1213 14:51:52.584397 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-cgroup\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.584585 kubelet[2191]: I1213 14:51:52.584562 2191 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-48vms\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-kube-api-access-48vms\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.584742 kubelet[2191]: I1213 14:51:52.584719 2191 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-clustermesh-secrets\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.584934 kubelet[2191]: I1213 14:51:52.584912 2191 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cni-path\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585085 kubelet[2191]: I1213 14:51:52.585064 2191 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-xtables-lock\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585235 kubelet[2191]: I1213 14:51:52.585214 2191 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hubble-tls\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585398 kubelet[2191]: I1213 14:51:52.585378 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-config-path\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585560 kubelet[2191]: I1213 14:51:52.585539 2191 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-hostproc\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585702 kubelet[2191]: I1213 14:51:52.585682 2191 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-bpf-maps\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.585849 kubelet[2191]: I1213 14:51:52.585829 2191 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-lib-modules\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.586040 kubelet[2191]: I1213 14:51:52.586019 2191 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-net\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.586181 kubelet[2191]: I1213 14:51:52.586159 2191 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-etc-cni-netd\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.586330 kubelet[2191]: I1213 14:51:52.586309 2191 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-host-proc-sys-kernel\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.586483 kubelet[2191]: I1213 14:51:52.586462 2191 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f-cilium-ipsec-secrets\") on node \"srv-997hs.gb1.brightbox.com\" DevicePath \"\""
Dec 13 14:51:52.984515 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 53836 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 14:51:52.987009 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:51:52.995047 systemd[1]: Started session-24.scope.
Dec 13 14:51:52.995374 systemd-logind[1280]: New session 24 of user core.
Dec 13 14:51:53.178161 systemd[1]: var-lib-kubelet-pods-cb1e7e30\x2d8176\x2d4f9c\x2db7fb\x2d15bc135aa28f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:51:53.178452 systemd[1]: var-lib-kubelet-pods-cb1e7e30\x2d8176\x2d4f9c\x2db7fb\x2d15bc135aa28f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:51:53.316095 kubelet[2191]: I1213 14:51:53.315936 2191 scope.go:117] "RemoveContainer" containerID="4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe"
Dec 13 14:51:53.323319 env[1295]: time="2024-12-13T14:51:53.323061456Z" level=info msg="RemoveContainer for \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\""
Dec 13 14:51:53.330066 env[1295]: time="2024-12-13T14:51:53.330008606Z" level=info msg="RemoveContainer for \"4d1c4808b8be96e002faa532069d88bff125363303258e6a28e5a5ac91ed2afe\" returns successfully"
Dec 13 14:51:53.330531 kubelet[2191]: I1213 14:51:53.330499 2191 scope.go:117] "RemoveContainer" containerID="08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd"
Dec 13 14:51:53.332884 env[1295]: time="2024-12-13T14:51:53.332829577Z" level=info msg="RemoveContainer for \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\""
Dec 13 14:51:53.351323 env[1295]: time="2024-12-13T14:51:53.349944920Z" level=info msg="RemoveContainer for \"08c62046f302365b2463c03a9e18bc41a7b412c9f73b4f27926190fd245523bd\" returns successfully"
Dec 13 14:51:53.368774 kubelet[2191]: I1213 14:51:53.368711 2191 topology_manager.go:215] "Topology Admit Handler" podUID="e93f7099-2cf1-48c6-b809-60dcde56faab" podNamespace="kube-system" podName="cilium-5cx2t"
Dec 13 14:51:53.369042 kubelet[2191]: E1213 14:51:53.368803 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" containerName="mount-cgroup"
Dec 13 14:51:53.369042 kubelet[2191]: E1213 14:51:53.368824 2191 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" containerName="apply-sysctl-overwrites"
Dec 13 14:51:53.369042 kubelet[2191]: I1213 14:51:53.368862 2191 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" containerName="apply-sysctl-overwrites"
Dec 13 14:51:53.391693 kubelet[2191]: I1213 14:51:53.391637 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-xtables-lock\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.391693 kubelet[2191]: I1213 14:51:53.391702 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-cilium-cgroup\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391737 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-etc-cni-netd\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391770 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e93f7099-2cf1-48c6-b809-60dcde56faab-hubble-tls\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391804 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-cni-path\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391836 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btj2j\" (UniqueName: \"kubernetes.io/projected/e93f7099-2cf1-48c6-b809-60dcde56faab-kube-api-access-btj2j\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391868 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-cilium-run\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392012 kubelet[2191]: I1213 14:51:53.391898 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-hostproc\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392332 kubelet[2191]: I1213 14:51:53.391942 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e93f7099-2cf1-48c6-b809-60dcde56faab-cilium-ipsec-secrets\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392332 kubelet[2191]: I1213 14:51:53.391980 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e93f7099-2cf1-48c6-b809-60dcde56faab-clustermesh-secrets\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392332 kubelet[2191]: I1213 14:51:53.392011 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-host-proc-sys-net\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392332 kubelet[2191]: I1213 14:51:53.392041 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-host-proc-sys-kernel\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392332 kubelet[2191]: I1213 14:51:53.392076 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-bpf-maps\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392580 kubelet[2191]: I1213 14:51:53.392106 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e93f7099-2cf1-48c6-b809-60dcde56faab-lib-modules\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.392580 kubelet[2191]: I1213 14:51:53.392136 2191 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e93f7099-2cf1-48c6-b809-60dcde56faab-cilium-config-path\") pod \"cilium-5cx2t\" (UID: \"e93f7099-2cf1-48c6-b809-60dcde56faab\") " pod="kube-system/cilium-5cx2t"
Dec 13 14:51:53.679642 env[1295]: time="2024-12-13T14:51:53.679082083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5cx2t,Uid:e93f7099-2cf1-48c6-b809-60dcde56faab,Namespace:kube-system,Attempt:0,}"
Dec 13 14:51:53.696372 env[1295]: time="2024-12-13T14:51:53.696109632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:51:53.696372 env[1295]: time="2024-12-13T14:51:53.696168518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:51:53.696372 env[1295]: time="2024-12-13T14:51:53.696185314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:51:53.696994 env[1295]: time="2024-12-13T14:51:53.696916803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45 pid=4180 runtime=io.containerd.runc.v2
Dec 13 14:51:53.748340 env[1295]: time="2024-12-13T14:51:53.748221741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5cx2t,Uid:e93f7099-2cf1-48c6-b809-60dcde56faab,Namespace:kube-system,Attempt:0,} returns sandbox id \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\""
Dec 13 14:51:53.752664 env[1295]: time="2024-12-13T14:51:53.752629811Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:51:53.766733 env[1295]: time="2024-12-13T14:51:53.766687777Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e25a55095089523d72b405f1fd430a6b64332535a54909970bfd35460edcc1b\""
Dec 13 14:51:53.768975 env[1295]: time="2024-12-13T14:51:53.768940090Z" level=info msg="StartContainer for \"2e25a55095089523d72b405f1fd430a6b64332535a54909970bfd35460edcc1b\""
Dec 13 14:51:53.840851 env[1295]: time="2024-12-13T14:51:53.839702096Z" level=info msg="StartContainer for \"2e25a55095089523d72b405f1fd430a6b64332535a54909970bfd35460edcc1b\" returns successfully"
Dec 13 14:51:53.876976 env[1295]: time="2024-12-13T14:51:53.876879329Z" level=info msg="shim disconnected" id=2e25a55095089523d72b405f1fd430a6b64332535a54909970bfd35460edcc1b
Dec 13 14:51:53.877359 env[1295]: time="2024-12-13T14:51:53.877319843Z" level=warning msg="cleaning up after shim disconnected" id=2e25a55095089523d72b405f1fd430a6b64332535a54909970bfd35460edcc1b namespace=k8s.io
Dec 13 14:51:53.877494 env[1295]: time="2024-12-13T14:51:53.877466595Z" level=info msg="cleaning up dead shim"
Dec 13 14:51:53.889783 env[1295]: time="2024-12-13T14:51:53.889714808Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4264 runtime=io.containerd.runc.v2\n"
Dec 13 14:51:54.339462 env[1295]: time="2024-12-13T14:51:54.331477482Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:51:54.358395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484181773.mount: Deactivated successfully.
Dec 13 14:51:54.367474 env[1295]: time="2024-12-13T14:51:54.367386418Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8\"" Dec 13 14:51:54.370128 env[1295]: time="2024-12-13T14:51:54.368633202Z" level=info msg="StartContainer for \"85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8\"" Dec 13 14:51:54.443626 env[1295]: time="2024-12-13T14:51:54.443509110Z" level=info msg="StartContainer for \"85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8\" returns successfully" Dec 13 14:51:54.474940 env[1295]: time="2024-12-13T14:51:54.474847827Z" level=info msg="shim disconnected" id=85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8 Dec 13 14:51:54.474940 env[1295]: time="2024-12-13T14:51:54.474940748Z" level=warning msg="cleaning up after shim disconnected" id=85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8 namespace=k8s.io Dec 13 14:51:54.474940 env[1295]: time="2024-12-13T14:51:54.474959431Z" level=info msg="cleaning up dead shim" Dec 13 14:51:54.508891 kubelet[2191]: I1213 14:51:54.508306 2191 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cb1e7e30-8176-4f9c-b7fb-15bc135aa28f" path="/var/lib/kubelet/pods/cb1e7e30-8176-4f9c-b7fb-15bc135aa28f/volumes" Dec 13 14:51:54.509676 env[1295]: time="2024-12-13T14:51:54.508964160Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4326 runtime=io.containerd.runc.v2\n" Dec 13 14:51:55.178439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85a21126d055e7b24a73314eb0114e493017bbbf5152aae79c3b6327b455f7c8-rootfs.mount: Deactivated successfully. 
Dec 13 14:51:55.327630 env[1295]: time="2024-12-13T14:51:55.327572781Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:51:55.351598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905207884.mount: Deactivated successfully. Dec 13 14:51:55.360872 env[1295]: time="2024-12-13T14:51:55.360812451Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490\"" Dec 13 14:51:55.366809 env[1295]: time="2024-12-13T14:51:55.366769094Z" level=info msg="StartContainer for \"eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490\"" Dec 13 14:51:55.459308 env[1295]: time="2024-12-13T14:51:55.459155239Z" level=info msg="StartContainer for \"eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490\" returns successfully" Dec 13 14:51:55.507217 env[1295]: time="2024-12-13T14:51:55.507094615Z" level=info msg="shim disconnected" id=eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490 Dec 13 14:51:55.507640 env[1295]: time="2024-12-13T14:51:55.507210717Z" level=warning msg="cleaning up after shim disconnected" id=eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490 namespace=k8s.io Dec 13 14:51:55.507640 env[1295]: time="2024-12-13T14:51:55.507271664Z" level=info msg="cleaning up dead shim" Dec 13 14:51:55.529458 env[1295]: time="2024-12-13T14:51:55.527633596Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4385 runtime=io.containerd.runc.v2\n" Dec 13 14:51:56.178594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef287fd56e5a0ed16ccb23a5bf272e794e90463cccb807eb83503e1a4841490-rootfs.mount: Deactivated 
successfully. Dec 13 14:51:56.333693 env[1295]: time="2024-12-13T14:51:56.333615642Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:51:56.348429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556703905.mount: Deactivated successfully. Dec 13 14:51:56.358037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012929712.mount: Deactivated successfully. Dec 13 14:51:56.375850 env[1295]: time="2024-12-13T14:51:56.375781801Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5661e49b4ee2815c45cdd6232dd26f4b43776b64b97e23f884b27afacb6e22e6\"" Dec 13 14:51:56.380661 env[1295]: time="2024-12-13T14:51:56.380614578Z" level=info msg="StartContainer for \"5661e49b4ee2815c45cdd6232dd26f4b43776b64b97e23f884b27afacb6e22e6\"" Dec 13 14:51:56.471620 env[1295]: time="2024-12-13T14:51:56.471473092Z" level=info msg="StartContainer for \"5661e49b4ee2815c45cdd6232dd26f4b43776b64b97e23f884b27afacb6e22e6\" returns successfully" Dec 13 14:51:56.506056 env[1295]: time="2024-12-13T14:51:56.505960715Z" level=info msg="shim disconnected" id=5661e49b4ee2815c45cdd6232dd26f4b43776b64b97e23f884b27afacb6e22e6 Dec 13 14:51:56.506399 env[1295]: time="2024-12-13T14:51:56.506050312Z" level=warning msg="cleaning up after shim disconnected" id=5661e49b4ee2815c45cdd6232dd26f4b43776b64b97e23f884b27afacb6e22e6 namespace=k8s.io Dec 13 14:51:56.506399 env[1295]: time="2024-12-13T14:51:56.506093095Z" level=info msg="cleaning up dead shim" Dec 13 14:51:56.522099 env[1295]: time="2024-12-13T14:51:56.522016042Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:51:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4442 runtime=io.containerd.runc.v2\n" Dec 13 14:51:56.793388 
kubelet[2191]: E1213 14:51:56.792928 2191 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:51:57.345801 env[1295]: time="2024-12-13T14:51:57.345723632Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:51:57.376536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983168798.mount: Deactivated successfully. Dec 13 14:51:57.384884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263989454.mount: Deactivated successfully. Dec 13 14:51:57.388436 env[1295]: time="2024-12-13T14:51:57.388373759Z" level=info msg="CreateContainer within sandbox \"0782489e03b01c91dddef260091cbec232d06177ab4914f862628cee444bac45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb\"" Dec 13 14:51:57.391107 env[1295]: time="2024-12-13T14:51:57.389540999Z" level=info msg="StartContainer for \"67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb\"" Dec 13 14:51:57.466038 env[1295]: time="2024-12-13T14:51:57.465975172Z" level=info msg="StartContainer for \"67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb\" returns successfully" Dec 13 14:51:58.151340 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:51:58.377512 kubelet[2191]: I1213 14:51:58.377431 2191 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5cx2t" podStartSLOduration=5.377249869 podStartE2EDuration="5.377249869s" podCreationTimestamp="2024-12-13 14:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:51:58.374707646 +0000 UTC 
m=+172.133288984" watchObservedRunningTime="2024-12-13 14:51:58.377249869 +0000 UTC m=+172.135831202" Dec 13 14:51:59.887751 systemd[1]: run-containerd-runc-k8s.io-67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb-runc.D53Rom.mount: Deactivated successfully. Dec 13 14:52:01.770542 systemd-networkd[1072]: lxc_health: Link UP Dec 13 14:52:01.786360 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:52:01.789009 systemd-networkd[1072]: lxc_health: Gained carrier Dec 13 14:52:02.132502 systemd[1]: run-containerd-runc-k8s.io-67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb-runc.gHbYOf.mount: Deactivated successfully. Dec 13 14:52:03.055556 systemd-networkd[1072]: lxc_health: Gained IPv6LL Dec 13 14:52:04.418041 systemd[1]: run-containerd-runc-k8s.io-67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb-runc.Du17qx.mount: Deactivated successfully. Dec 13 14:52:06.488282 env[1295]: time="2024-12-13T14:52:06.488197940Z" level=info msg="StopPodSandbox for \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\"" Dec 13 14:52:06.489670 env[1295]: time="2024-12-13T14:52:06.489555368Z" level=info msg="TearDown network for sandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" successfully" Dec 13 14:52:06.489789 env[1295]: time="2024-12-13T14:52:06.489662551Z" level=info msg="StopPodSandbox for \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" returns successfully" Dec 13 14:52:06.490917 env[1295]: time="2024-12-13T14:52:06.490876125Z" level=info msg="RemovePodSandbox for \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\"" Dec 13 14:52:06.491032 env[1295]: time="2024-12-13T14:52:06.490923513Z" level=info msg="Forcibly stopping sandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\"" Dec 13 14:52:06.491107 env[1295]: time="2024-12-13T14:52:06.491042887Z" level=info msg="TearDown network for sandbox 
\"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" successfully" Dec 13 14:52:06.495688 env[1295]: time="2024-12-13T14:52:06.495635548Z" level=info msg="RemovePodSandbox \"0a6991ebefa49b3d76dc6b87236d2a671bc56fe79a1a2aa2984005f62ee53364\" returns successfully" Dec 13 14:52:06.497503 env[1295]: time="2024-12-13T14:52:06.497452903Z" level=info msg="StopPodSandbox for \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\"" Dec 13 14:52:06.497668 env[1295]: time="2024-12-13T14:52:06.497615865Z" level=info msg="TearDown network for sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" successfully" Dec 13 14:52:06.497748 env[1295]: time="2024-12-13T14:52:06.497670369Z" level=info msg="StopPodSandbox for \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" returns successfully" Dec 13 14:52:06.498210 env[1295]: time="2024-12-13T14:52:06.498168364Z" level=info msg="RemovePodSandbox for \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\"" Dec 13 14:52:06.498319 env[1295]: time="2024-12-13T14:52:06.498208488Z" level=info msg="Forcibly stopping sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\"" Dec 13 14:52:06.498391 env[1295]: time="2024-12-13T14:52:06.498348731Z" level=info msg="TearDown network for sandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" successfully" Dec 13 14:52:06.503436 env[1295]: time="2024-12-13T14:52:06.503394622Z" level=info msg="RemovePodSandbox \"092ce0d943a1bbf1450ff65165244fbe94e5ec80bbb4540bc553c4cba9ae9526\" returns successfully" Dec 13 14:52:06.503928 env[1295]: time="2024-12-13T14:52:06.503891174Z" level=info msg="StopPodSandbox for \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\"" Dec 13 14:52:06.504088 env[1295]: time="2024-12-13T14:52:06.504027771Z" level=info msg="TearDown network for sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" successfully" Dec 13 
14:52:06.504183 env[1295]: time="2024-12-13T14:52:06.504090384Z" level=info msg="StopPodSandbox for \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" returns successfully" Dec 13 14:52:06.505727 env[1295]: time="2024-12-13T14:52:06.504638077Z" level=info msg="RemovePodSandbox for \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\"" Dec 13 14:52:06.505727 env[1295]: time="2024-12-13T14:52:06.504702165Z" level=info msg="Forcibly stopping sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\"" Dec 13 14:52:06.505727 env[1295]: time="2024-12-13T14:52:06.504828092Z" level=info msg="TearDown network for sandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" successfully" Dec 13 14:52:06.508819 env[1295]: time="2024-12-13T14:52:06.508775718Z" level=info msg="RemovePodSandbox \"3c6dd9a38a71739efc5b2ddeba1377b389bf7aa57bf6aacf22a847e15816e693\" returns successfully" Dec 13 14:52:06.697243 systemd[1]: run-containerd-runc-k8s.io-67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb-runc.JZCEXo.mount: Deactivated successfully. Dec 13 14:52:08.900540 systemd[1]: run-containerd-runc-k8s.io-67ce5601593531099ef13def142e1cdf3d5f2378b4f83d4306a02a6cb501c7cb-runc.bvkRcp.mount: Deactivated successfully. Dec 13 14:52:09.167457 sshd[4122]: pam_unix(sshd:session): session closed for user core Dec 13 14:52:09.173235 systemd[1]: sshd@27-10.243.72.102:22-139.178.68.195:53836.service: Deactivated successfully. Dec 13 14:52:09.174600 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:52:09.176141 systemd-logind[1280]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:52:09.177560 systemd-logind[1280]: Removed session 24.